While AI hiring tools often aren’t inherently biased, that can change quickly once they’re trained on real-world data sources.
A company’s data on its top-performing employees isn’t necessarily bias-free. If the company relies on that data for AI screening and its best and brightest aren’t a diverse group, the AI won’t be inclusive either.
One prime example of AI perpetuating bias involved a recruitment tool used by an e-commerce giant. As the AI screened resumes from technical job candidates, it penalized applicants who used terminology indicating they were female. Along with the word “women,” the names of women’s colleges and similar terms were viewed unfavorably by the AI.
The issue was that the AI had been trained to identify patterns in the resumes the e-commerce company had received in the past. Since most of those past technology candidates were men, the AI learned to favor male applicants.
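To see how this kind of bias creeps in, consider a minimal, hypothetical sketch (the company’s actual system was far more complex). A simple text classifier trained on past hiring outcomes will latch onto any words that correlate with those outcomes, including gendered terms, even though gender was never an explicit input:

```python
# Hypothetical illustration: a resume screener trained on biased historical
# outcomes. The resumes and labels below are invented for demonstration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past resumes and whether the candidate was hired (1) or not (0).
# Because most past hires were men, phrases like "women's chess club"
# appear mostly in the "not hired" examples.
resumes = [
    "software engineer java python men's soccer team",
    "backend developer distributed systems cloud",
    "data engineer spark sql hackathon winner",
    "software engineer python women's chess club captain",
    "developer javascript women's college graduate",
    "engineer python machine learning women's coding group",
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "women" (the default tokenizer
# splits "women's" into "women" plus a discarded single character).
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])  # negative: the word is penalized
```

Nothing in this toy model “knows” the applicant’s gender; it simply reproduces the pattern baked into the historical labels, which is exactly the failure mode reported in the e-commerce case.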
Emotion recognition technology (ERT) – an AI-based approach to gauging an interviewee’s comfort level, confidence, and other emotional states – has also shown issues with bias. Emotion identification errors were far more likely when the technology was used during interviews with minorities, resulting in unfair hiring decisions.
For example, according to a study, some ERT technologies perceived black faces as angrier than white faces. As a result, reports generated by the ERT unfairly penalized black participants, which could lead to discriminatory hiring decisions.
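As a rough sketch of how such a disparity can be surfaced (a simplified, hypothetical check, not the study’s actual methodology), one can compare the average “anger” score an ERT system assigns to faces from each group:

```python
# Hypothetical audit: compare the average "anger" score an ERT system assigns
# to each demographic group. The scores below are invented for illustration.
from statistics import mean

# (group, anger_score) pairs as they might come back from an ERT vendor's API.
ert_scores = [
    ("black", 0.41), ("black", 0.38), ("black", 0.45),
    ("white", 0.22), ("white", 0.19), ("white", 0.25),
]

by_group = {}
for group, score in ert_scores:
    by_group.setdefault(group, []).append(score)

averages = {group: mean(scores) for group, scores in by_group.items()}
print(averages)

# A large gap between groups signals potential bias; the threshold here is
# arbitrary and chosen only for this sketch.
gap = abs(averages["black"] - averages["white"])
if gap > 0.1:
    print(f"Warning: average anger scores differ by {gap:.2f} across groups")
```

A gap like the one above is the kind of systematic skew the study observed: if the tool consistently reads one group’s faces as angrier, any hiring report built on those scores will be consistently skewed against that group.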