Artificial intelligence could be adding to the bias at your company


Employers are increasingly turning to technology to ramp up their diversity, inclusion and recruiting efforts, but the wrong applications could be introducing bias instead of eliminating it.

While employers may be utilizing software created by some of the biggest names in the industry, those companies lack the diverse talent needed to create truly inclusive AI programs. Only 2.5% of Google’s workforce is Black, and Black employees make up just 4% of the workforces at Facebook and Microsoft, according to a 2019 study conducted by the AI Now Institute.

The best remedy for unreliable AI is diligent human oversight, according to Imo Udom, chief strategy and product officer at Outmatch, a recruiting platform. Companies should pair any AI rollout with a solid system of checks and balances.


“When it comes to AI and trying to actually eliminate bias versus introducing it, [the solution is] to take a holistic approach,” Udom says. “Yes, this is what the data set is showing me, but why is the data getting there? What are the attributes within that data set that drive that outcome?”

In recent months, more companies have been turning to technology to streamline their recruiting processes. Payroll, HR and benefits company Simpeo teamed up with hiring platform GoodJob on a new talent acquisition tool that uses AI to navigate remote recruiting. AI has even been credited with helping keep ageism out of the equation for older talent as employees return to workplaces post-pandemic.

But without attention to detail, the very same tech employed to help may actually end up excluding a large number of people from candidacy pools, says Rishi Kumar, co-founder and CEO of lending platform Kashable. He says he treads lightly when implementing AI in his own practice.

“You can't let bias unwittingly seep into your algorithm,” Kumar says. “If you just throw an algorithm a thousand data points and you don't know how it arrived at its decision, it may have unwittingly used data points that are highly correlated with race.”

When choosing usable data points, companies should identify the biases already present in the data before feeding it to the algorithm. For example, certain zip codes and addresses carry racial bias, so any system that draws on location data should account for that before using it.
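One way to put this into practice is a simple proxy-feature audit: before a feature goes into a hiring model, measure how strongly it correlates with a protected attribute and flag anything suspicious for human review. The sketch below is purely illustrative; the feature names, toy data and 0.5 threshold are assumptions for the example, not part of any vendor's product mentioned in this article.

```python
# Illustrative proxy-feature audit: flag candidate features that are
# highly correlated with a protected attribute so a human can review
# them before they reach a hiring algorithm.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_proxy_features(features, protected, threshold=0.5):
    """Return names of features whose |correlation| with the protected
    attribute exceeds the threshold (threshold is an assumption here)."""
    return [name for name, values in features.items()
            if abs(pearson(values, protected)) > threshold]

# Toy data: 1 = member of a protected group, 0 = not.
protected = [1, 1, 1, 0, 0, 0]
features = {
    "zip_code_score": [0.9, 0.8, 0.85, 0.2, 0.1, 0.15],  # strong proxy
    "years_experience": [3, 7, 4, 5, 6, 4],              # weakly related
}
print(flag_proxy_features(features, protected))  # → ['zip_code_score']
```

A check like this won't catch every form of bias (nonlinear proxies, interactions between features), but it makes the "why is the data getting there" question Udom raises concrete enough to act on.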


The use of AI is still a step toward more equitable hiring practices, but it’s a double-edged sword that should be handled with care, Udom says. Executives responding to the 2021 Deloitte Global Human Capital Trends survey recognized that incorporating tech into their companies shouldn’t be a choice between people and machines; rather, it can be a “both-and” partnership.

Including in-person touch points, manually reviewing results and partnering with institutions that provide credible data sets are critical steps companies should be building into their strategies to keep their systems from doing more harm than good, Udom says.

“It's not the tool itself,” he says. “It's the usage of the tool and the understanding of how to use it.”
