Why employers shouldn't fear NYC's new AI law


AI is taking the professional world by storm as the technology becomes more entrenched in hiring and talent management. But with great power also comes great scrutiny — and employers must prepare to embrace new regulations governing the use of AI in recruitment. 

According to career-seeking tool Zippia, 65% of recruiters use AI to narrow down their candidates, and 67% say AI has improved the recruitment process. But like humans, AI can come with biases built into its systems. New York City's new AI law, also known as Local Law 144, aims to hold employers accountable for preventing those biases. The law, which went into effect last month, requires employers to audit their automated employment decision tools, or AEDTs, each year for any biases they may bring to the hiring process.

While this may be a cause for concern for many employers since they have to add another compliance hurdle to the agenda, it could also be incredibly beneficial down the line, says Jonathan Kestenbaum, managing director of tech strategy at AMS, a company that helps employers navigate tech and talent acquisition. 

Read more: Why no one is reading your company-wide emails — and how AI can help

"AI shouldn't be a get-out-of-jail-free card," says Kestenbaum. "It's a good thing that the government is asking employers to make sure these tools are being used ethically."

For Kestenbaum, Local Law 144 allows New York City employers to get ahead of possible future state and federal regulations on AI. Not to mention, the audits help prove that employers are compliant with federal anti-discrimination laws enforced by the Equal Employment Opportunity Commission (EEOC), the national agency that protects job applicants and employees.

"I would argue that a number of organizations are violating the existing federal laws protected by the EEOC," says Kestenbaum. "Just like you can't fire someone based on their race, religion, sex, national origin, age or disability, you equally can't hire someone based on those criteria as well."

Read more: After affirmative action ruling, what role can AI play in hiring?

Kestenbaum points out that the audit itself is straightforward: employers are required to use a third party like AMS to test their tools for glaring biases. The audit calculates the selection rate for each of the categories protected under the EEOC, namely race, ethnicity and gender, and compares it to the category with the highest selection rate, explains Kestenbaum. For example, the test could compare the rate at which Black women are selected with the rate at which white men are. The company then publishes the results on the employment page of its website. If candidates feel the tool could discriminate against them, they can ask the employer for an alternative assessment process.
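The selection-rate comparison Kestenbaum describes can be sketched in a few lines. The group names and applicant counts below are hypothetical, invented purely for illustration; an actual Local Law 144 audit is performed by an independent auditor on a company's real AEDT data.

```python
# Hypothetical sketch of a Local Law 144-style selection-rate comparison.
# All groups and counts are invented for illustration, not real audit data.

# (candidates advanced, candidates assessed) per demographic category
outcomes = {
    "white_men": (60, 200),
    "black_women": (18, 100),
    "hispanic_men": (25, 120),
}

# Selection rate = candidates advanced / candidates assessed
rates = {group: sel / total for group, (sel, total) in outcomes.items()}

# Each group's rate is compared to the highest-rate group, yielding an
# "impact ratio" (1.0 for the top group, lower values flag disparities)
top_rate = max(rates.values())
impact_ratios = {group: rate / top_rate for group, rate in rates.items()}

for group, ratio in sorted(impact_ratios.items(), key=lambda kv: kv[1]):
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f}")
```

In this made-up example, white men are selected at a rate of 0.30 and Black women at 0.18, an impact ratio of 0.60, the kind of gap a published audit would surface for scrutiny.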

While some employers may feel this process isn't worth the trouble of keeping their automated hiring tools, Kestenbaum believes abandoning the technology would do more harm than good.

"Moving forward, I don't think organizations are going to be able to remain competitive if they don't leverage AI," he says. "We just have to figure out how to help organizations understand where AI is being used, what efficiencies are gained from that technology, and then how to leverage these technologies compliantly within the hiring process."

Read more: Using AI to recruit? You're legally responsible for the bot's bias, EEOC says

Predicting more AI regulation will follow Local Law 144, Kestenbaum believes it would be better for employers, even outside of New York City, to begin auditing their technology. An AI tool audited and revised to minimize biases can make a great impact on companies' recruitment process and diversity goals, he underlines. 

"Inherently, all humans are biased," Kestenbaum says. "So if we could use AI in a way that dismisses information, like where that candidate went to school or their name, then it could be a really positive thing."

Kestenbaum advises employers to partner with a third-party tech company that will not only audit their AI tools and optimize the technology for compliance, but also help hiring teams use it effectively. He notes that two companies can have the same tech and still produce vastly different results if one of them doesn't know how to use the tool. As impressive as AI is, it still needs to be managed by informed employees, says Kestenbaum.

Read more: Should your business be afraid of AI?

As the AI tech boom shows no signs of slowing down, Kestenbaum encourages anyone leveraging AI to use it responsibly, rather than wait for the next regulation to hit their city or state.

"This New York City Law is just the beginning of what we'll see," he says. "But regardless, under the EEOC it doesn't change the fact you can't be biased in your hiring."
