Using AI to recruit? You're legally responsible for the bot's bias, EEOC says

Photo: Ekaterina Bolovtsova from Pexels

Artificial intelligence is a great tool for employers looking to streamline recruiting and hiring processes. But regulators are reminding employers that it's not the AI that will be held accountable for bias and discrimination; it's the employer.

The Equal Employment Opportunity Commission (EEOC) recently released technical guidance warning employers that algorithmic decision-making tools can threaten equitable hiring strategies and potentially violate existing civil rights laws, including Title VII of the Civil Rights Act and the Americans with Disabilities Act (ADA).

"What the EEOC is trying to make clear is that the anti-discrimination laws apply to adverse employment decisions whether it's made by a human being or a computer," says David Barron, a labor and employment attorney with law firm Cozen O'Connor. "It's not a defense to say that a human being didn't make the decision — the law applies broadly."

Currently, 65% of recruiters use AI tools, and 67% say AI has improved the hiring process by streamlining candidate searches, according to workplace insights platform Zippia. Seventy-nine percent of recruiters even believe AI will soon be advanced enough to make hiring and firing decisions on its own, but that is where they are seeing the most pushback. According to the Pew Research Center, 71% of employees oppose the use of AI to make final hiring decisions.

Their skepticism isn't necessarily misplaced, according to Barron. There are many examples in which AI has introduced bias into recruitment efforts by filtering out applicants of certain genders and ethnicities after an employee failed to keep their own biases out of the setup process. There are also concerns that AI could discriminate against applicants with speech impediments in video interviews, scoring them lower than applicants without impediments, which would directly violate the ADA.

"If there's some sort of AI system that is set up to either take over or assist with any of these hiring functions, the employer has a duty to make sure that that entire system can accommodate someone with a disability," Barron says. "It's hard to design tools that are flexible enough to satisfy that." 

When employers are exploring options for new AI tools, they can protect themselves with steps as simple as asking vendors the right questions about their products and taking precautions before onboarding a new workplace program.

"Someone's going to try to sell you some tool that will, on one hand, make your processes more efficient — but that doesn't mean that you don't have to worry about legal issues," Barron says. "If there's a resume screening tool that weeds out applicants to save the employer time, and there's something about that algorithm that weeds out persons of color or with disabilities or are gender biased in some way, that liability is still on the employer." 

Violations of the ADA and Title VII can carry fines of up to $150,000 and $300,000, respectively, as well as orders to pay back wages, attorney fees, damages for emotional distress and punitive damages. To avoid an unintentionally grave situation, employers should measure their applicant diversity metrics after implementing an AI tool and compare them to the same data from before the tool was adopted. Any time they notice a significant dip or change, they should halt the process and reevaluate.
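One way to operationalize that before-and-after comparison is the four-fifths (80%) rule, a long-standing EEOC rule of thumb for spotting adverse impact in selection rates. Below is a minimal sketch in Python of such a check; the group labels, applicant counts and function names are hypothetical illustrations, not part of the EEOC guidance or any vendor's tooling.

```python
# Minimal sketch of a four-fifths (80%) rule check on selection rates.
# All numbers and group labels are hypothetical; real audits require
# statistically meaningful sample sizes and legal review.

def selection_rate(selected, applicants):
    """Share of applicants in a group who advanced past the screen."""
    return selected / applicants

def four_fifths_check(rates):
    """Flag groups whose selection rate falls below 80% of the highest rate."""
    highest = max(rates.values())
    return {
        group: {
            "rate": round(rate, 3),
            "ratio_to_highest": round(rate / highest, 3),
            "flagged": rate / highest < 0.8,  # potential adverse impact
        }
        for group, rate in rates.items()
    }

# Hypothetical post-AI screening outcomes by demographic group.
outcomes = {
    "group_a": selection_rate(selected=48, applicants=100),
    "group_b": selection_rate(selected=30, applicants=100),
}

for group, result in four_fifths_check(outcomes).items():
    print(group, result)
```

A flag from a check like this is the kind of "significant dip" worth pausing on: a signal to halt the tool and investigate, not a legal conclusion in itself.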

"Common sense can't go out the window," Barron says. "Anytime you make a change or introduce a new technology, make sure that you're testing that technology after the fact, to make sure that it hasn't made some chain change in the outcome that could be construed as discriminatory."
