What Biden's AI executive order means for employees and employers

Photo: Al Drago/Bloomberg

President Biden introduced an executive order on Monday that aims to manage artificial intelligence risk on several fronts, including setting new standards for AI safety and security and protections for privacy, equity and civil rights. 

Industry leaders across every sector have been looking for more decisive regulation when it comes to the rise and implementation of AI, and the federal government's response is long awaited. 

"We've been advocating for these types of orders," says Sultan Saidov, co-founder and president of Beamery, an AI-powered talent management solution. "I was positively surprised by how deep and broad the order went, speaking to not only the usual narrative around AI, but one that also captured supporting workers." 

Read more: A step-by-step guide to implementing AI at work

Under the executive order, the President called on employers to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement, labor standards, workplace equity, health and safety, and data collection. The order also calls for a regular report on AI's potential labor-market impacts, in order to study and identify options for strengthening federal support for workers facing labor disruptions.

"When we look at previous technology waves, there were tons of dislocated people during each major industrial revolution, and we had to have lots of new regulations for workers' rights, which often took decades," Saidov says. "This order is a phenomenal attempt to get ahead of that, rather than do it reactively once people do start losing their roles." 

As for how this will affect companies at large, developers of the most powerful AI systems, like ChatGPT and Google's Bard, will be required to share their safety test results and other critical information with the U.S. government. The National Institute of Standards and Technology will set rigorous standards for extensive testing of new AI systems to ensure safety before public release, and the administration will order the development of a National Security Memorandum that directs further actions on AI and security. 

Larger companies at the forefront of AI development, such as Google, Meta and Microsoft, shouldn't see much immediate change, according to Saidov, since they should already have been preparing for this kind of oversight during earlier regulatory efforts. For small and medium-sized companies, however, the order is a prompt to start thinking about the future. 

Read more: Are your company's AI decisions good for employees? 68% of C-suite leaders aren't sure

"All organizations using AI will have to start thinking about common best practices and principles," he says. "And while the order doesn't necessarily make this happen overnight, it does signal that emerging standards and regulations are on the horizon."

And while a lot of good could come from this kind of federal guidance, Saidov emphasizes that it is just the beginning of what should be a much longer and more detailed conversation.  

"The government's involvement, in the grand scheme of things, is great to see," he says. "But I do think that there are details that are not yet fleshed out that will make this all much more practical. Right now, this is just about setting the foundation rather than being precise in what is expected in the long term."
