A.I. does not remove discriminatory bias from the hiring process

The news organization Reuters recently reported on an Amazon HR project to develop artificial intelligence to screen job applicants’ resumes. Amazon wanted an algorithm that could pick the top five applicants out of a pool of hundreds. What it found, however, was that the algorithm disproportionately screened out well-qualified women.

The people who worked on this project developed an algorithm that looked for certain words appearing in the resumes of employees whom Amazon had hired and who, presumably, proved to be good employees. The problem is that, historically, Amazon and other tech companies have disproportionately hired men. And what Amazon learned through this project is that men and women use different terminology in their resumes: men, for instance, were more likely to use terms like “executed” and “captured.” Needless to say, whether you use a term like “executed” in your resume is not a good predictor of whether you will be a good employee.
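
Reuters did not publish the details of Amazon’s model, but the failure mode described above can be illustrated with a toy keyword-frequency scorer. Everything in the sketch below is invented: the training resumes, the function names, and the candidates are hypothetical, meant only to show how a model trained on a skewed historical pool ends up rewarding that pool’s vocabulary rather than job-relevant skill.

```python
from collections import Counter

# Hypothetical training data: resumes of past hires. Because the
# historical pool skews male, the vocabulary the model learns to
# reward skews toward terms men use more often (e.g. "executed").
past_hire_resumes = [
    "executed product roadmap and captured new enterprise accounts",
    "executed migration to cloud infrastructure",
    "captured market share through aggressive pricing",
]

def learn_keyword_weights(resumes):
    """Weight each word by how often it appears in past hires' resumes."""
    counts = Counter(word for r in resumes for word in r.split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def score_resume(resume, weights):
    """Sum the learned weights of the words a candidate's resume uses."""
    return sum(weights.get(word, 0.0) for word in resume.split())

weights = learn_keyword_weights(past_hire_resumes)

# Two candidates describing essentially the same work in different
# words; the model rewards the vocabulary it was trained on.
candidate_a = "executed product launch and captured key accounts"
candidate_b = "led product launch and won key accounts"

print(score_resume(candidate_a, weights))  # higher score
print(score_resume(candidate_b, weights))  # lower score, same substance
```

Note that in this toy model the gender skew never appears explicitly; it enters entirely through which words the historical hires happened to use, which is why auditing the training data matters as much as auditing the algorithm.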

Despite Amazon’s failed experiment, many companies are forging ahead with A.I. hiring tools. One firm, for example, has developed software that analyzes candidates’ facial expressions and speech during video-recorded interviews.

Companies could certainly devise effective ways to use A.I. to help them select the best applicants for open positions. Employers have used similar approaches for decades; many, for example, use written tests to screen applicants. Taking subjective decision-making out of the hiring process is a good idea, because subjective decision-making is where biases creep in. The problem with tools like A.I. and written tests, however, is that they are only helpful if they actually measure characteristics that matter to job performance. And discriminatory bias can infect the process of determining which characteristics those are, particularly when that process involves subjective judgments.
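
One standard way to check whether a screening tool measures something job-relevant is criterion validation: correlate the tool’s scores for past hires against their later job performance. The numbers below are invented purely for illustration; only the technique is standard.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical validation data: screening-test scores for past hires
# paired with their later job-performance ratings.
test_scores = [55, 62, 70, 71, 80, 84, 90]
performance = [2.1, 3.4, 2.9, 3.8, 3.5, 4.2, 4.0]

# Criterion validity: a tool is only useful if its scores actually
# track job performance. A near-zero correlation would mean the test
# screens on something irrelevant, or worse, on a proxy for sex.
r = correlation(test_scores, performance)
print(f"test/performance correlation: {r:.2f}")
```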

Amazon claims that it never used the failed A.I. applicant screening tool to make hiring decisions. Had it done so, the tool appears likely to have had a disparate impact on female applicants. Under Title VII of the Civil Rights Act, a facially neutral selection practice that disparately impacts a protected group is unlawful unless the employer can show the practice is job related and consistent with business necessity, and a tool that does not accurately select the best applicants cannot meet that defense. Companies currently trying to develop A.I. applicant screening tools should be mindful of this potential for liability.
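
A common first-pass screen for disparate impact is the “four-fifths rule” from the EEOC’s Uniform Guidelines: if one group’s selection rate is less than 80 percent of the highest group’s rate, that is generally treated as evidence of adverse impact. A minimal sketch, with made-up applicant counts:

```python
def selection_rate(selected, applicants):
    """Fraction of a group's applicants who pass the screen."""
    return selected / applicants

# Hypothetical screening outcomes, for illustration only.
men_rate = selection_rate(selected=60, applicants=100)    # 0.60
women_rate = selection_rate(selected=30, applicants=100)  # 0.30

# Four-fifths rule: compare each group's rate to the highest rate.
impact_ratio = women_rate / max(men_rate, women_rate)
print(f"impact ratio: {impact_ratio:.2f}")  # 0.50

if impact_ratio < 0.8:
    print("Below the four-fifths threshold: evidence of adverse impact.")
```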
