Employers’ DEI strategies can be derailed by unethical AI practices


We know artificial intelligence has the potential to make workplaces more efficient — but is it necessarily making them better?

More recruiting teams will find AI-equipped hiring solutions essential in 2022, according to a recent trend report by software company ModernHire. But without the proper attention to detail, too much tech can threaten the diversity initiatives of a company.

“The output of an AI algorithm vastly relies on the quality of the data and on rail guards that ensure the outcome is objective, especially with talent acquisition,” says Sanjoe Jose, CEO of AI-powered talent measurement platform Talview. “To effectively use AI there are a lot of parameters, which [companies] need to continuously monitor.”


A 2018 Gartner report predicted that through 2030, 85% of AI projects will provide false results caused by bias that has been built into the data or the algorithms, or that is present in the teams managing those deployments. This can result in a number of consequences, including AI tools that cater to “white sounding” names on resumes and image recognition software that favors men over women.

“One of the challenges we’re seeing in the industry today is that there’s new technology, and new applications,” Jose says. “So there is very little understanding of how these processes can be effectively managed and used.”

Companies have already seen the consequences of this. In 2018, Amazon faced backlash over an AI recruiting system meant to streamline its hiring process. Because the algorithm had been trained to replicate the company’s existing hiring patterns, it unintentionally absorbed existing biases and penalized resumes that included the word “women’s.”


As the pandemic continues to accelerate the integration of tech in HR, it’s important for employers to be strategic about their approach, according to Jose. He suggests running regular rounds of equity studies on machine learning platforms — manually checking the machine’s sorting methods to see which biases it may be perpetuating, then correcting them. Before deploying an updated model, teams should retest it in isolation to confirm that prior biases have been cleared and that it hasn’t picked up new ones.
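The article doesn’t describe how such an equity study is performed in practice; one common technique in US hiring analysis is checking selection rates across demographic groups against the four-fifths rule. The sketch below is purely illustrative — the function names and data are hypothetical, not Talview’s or any vendor’s actual method.

```python
# Illustrative sketch of a minimal "equity study" (hypothetical, not any
# vendor's actual method): compare a model's selection rates across groups
# using the four-fifths (adverse impact) rule from US hiring analysis.

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs.
    Returns each group's selection rate (selected / total)."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest group's selection rate to the highest.
    Values below 0.8 flag potential adverse impact (four-fifths rule)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group label, whether the model advanced them).
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

rates = selection_rates(decisions)   # {'A': 0.75, 'B': 0.25}
ratio = adverse_impact_ratio(rates)  # 0.25 / 0.75 ≈ 0.33 → flag for review
```

A real audit would, of course, use the platform’s actual decision logs and protected-class definitions, and a flagged ratio would trigger review of the model’s features and training data rather than automatic correction.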

“It’s called a glass box approach to the AI dynamic,” Jose says. “It’s where you have a model which is running and the model is also continuously learning. This process ensures that there is a very safe guard rail ensuring that you're using AI in the most ethical manner.”

In the end, AI is critical to progress, Jose says, and companies shouldn’t shy away from using it as a recruiting tool. Technology — when used ethically — has the potential to level the playing field in a way humans never will.

“Bias is inherent in humans,” Jose says. “But the ecosystem has changed and with platforms like AI we can [continue] to make changes faster.”
