Companies are increasingly relying on automation to help screen candidates in the hiring process, a trend prompting scrutiny from local governments and regulators.
Nearly one in four organizations already use automation or artificial intelligence (AI) to support hiring, according to a February 2022 survey from the Society for Human Resource Management, and usage is higher—42 percent—among large employers with 5,000 or more employees. A recent report from Recode detailed Amazon (AMZN)’s ambitions to replace some of its recruiters with AI software that can fast-track candidates to interviews without any human involvement.
Today AI technology can do more than just screen resumes. Companies may also use AI tools to monitor candidates’ social media presence quickly and pick up on red flags. Candidates might complete a first interview for a job without ever speaking to a real person at the company, thanks to AI-powered video software.
“It’s sort of an arms race,” said Josh Bersin, who runs a professional development academy for human resources and learning professionals. “If you don’t have this kind of technology, your recruiters are spending a lot of extra time, and they’re probably missing people.”
But hiring-focused AI is mired in ethical quandaries, as some researchers argue the technology is biased against certain types of candidates. Though the tools have been around for about a decade, governments are only now starting to scrutinize them, prompting further debate about where employers should draw the line when using automation in the workplace. Starting next month, employers in New York City could face penalties if they don’t audit their AI hiring tools for bias.
How employers use AI in hiring
Though software designed to detect certain keywords on a candidate’s resume has been around for decades, automation tools for recruiters have become much more sophisticated in the last seven to eight years, according to Bersin.
Whereas older tracking software simply flagged promising candidates based on certain words in their resumes, today’s technology can dig into a candidate’s social network, skills, and the past employers they share with a firm’s successful current employees to determine whether they might be a good fit.
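The older keyword-matching approach Bersin describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual logic; the keyword list and resume text are made up:

```python
# Minimal sketch of legacy keyword-based resume screening.
# The keyword set and resume text are hypothetical examples.
REQUIRED_KEYWORDS = {"python", "sql", "etl"}

def keyword_score(resume_text: str) -> float:
    """Return the fraction of required keywords found in the resume."""
    words = set(resume_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) / len(REQUIRED_KEYWORDS)

resume = "Data engineer with Python and SQL experience building pipelines"
print(keyword_score(resume))  # matches "python" and "sql" -> 0.666...
```

A screener like this never sees skills described in unexpected wording, which is one reason newer tools moved to richer signals such as shared employers and inferred skills.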
“Now recruiters look at people that are almost pre-qualified before they even apply,” said Bersin, noting AI software can source candidates from social media even if they haven’t yet applied for a position. He attributed the continued tightness of the U.S. labor market in part to the fact that this technology has made it much easier to find a job.
Steve Boese, the co-founder and president of consulting firm H3 HR Advisors, said the use of AI in recruiting has evolved from simple functions like automating interview scheduling or asking screening questions to more sophisticated uses, such as assessing a candidate’s profile and interpreting their skills. AI might be used to evaluate candidates’ interview responses as well, as is the case with software that ranks applicants based on their performance in automated video interviews.
Though these tools have the potential to make the hiring process much more efficient, in recent years companies including Amazon have come under fire for using technology that critics say replicates biases in hiring. An AI recruiting tool that Amazon stopped using in 2017 was said to favor male candidates over women, penalizing resumes that included phrases such as “women’s chess club captain,” and downgrading graduates of several all-women’s colleges, Reuters reported.
It’s “entirely feasible” for an algorithm to make hiring decisions based on the same biases humans hold, said Dipayan Ghosh, a Harvard Kennedy School professor who researches AI, in a 2018 essay for Quartz. Ghosh cited results from a 2003 field experiment published by the National Bureau of Economic Research, which found resumes with “white sounding” names were prioritized over resumes with “Black sounding” names, even when the candidates had similar qualifications. AI tools are particularly likely to replicate such biases “if the policy leads to profitable results for the employing client despite its implicit bias,” he added.
Employers face government scrutiny for AI use
Governments are just now starting to scrutinize employers for their use of automated technologies to hire workers. In May the U.S. Department of Justice and the Equal Employment Opportunity Commission issued statements alerting Americans to the potential for AI tools to discriminate against job applicants with disabilities. In October the White House released a blueprint for an “AI Bill of Rights” to “help guide the design, use, and deployment of automated systems,” including in hiring.
A new law set to take effect in New York City this January will likely prompt more employers to take a closer look at how their AI software works. The legislation prohibits employers from using automated employment decision technology unless it has been subject to a “bias audit.” Similar legislation has been proposed in Washington, D.C., and California. Illinois also has a law on the books regulating AI video software.
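One common building block of such a bias audit is comparing selection rates across demographic groups, along the lines of the “four-fifths rule” in U.S. EEOC guidance. A minimal sketch of that check, using made-up counts (not data from any real audit):

```python
# Sketch of a selection-rate / impact-ratio check of the kind a bias audit
# might include. All counts below are invented for illustration.
def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 30}
print(impact_ratios(selected, applicants))
# group_a rate 0.30, group_b rate 0.20 -> ratios 1.0 and ~0.67
```

Under the four-fifths rule of thumb, a ratio below 0.8 for any group is a flag for possible adverse impact; what counts as a compliant audit under the New York City law is exactly the point of confusion the next paragraph describes.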
New York City’s law has already prompted confusion among employers who say they don’t know how to conduct an independent audit of their software that complies with the mandate. And experts who study the use of AI in hiring say confusion is likely to continue as more governments seek to regulate it.
Ben Dattner, a consultant who has written about the legal and ethical implications of AI in hiring, said there’s still quite a bit of ambiguity as to what constitutes “artificial intelligence” in the HR world, which could make laws like New York City’s hard to enforce. He added that it’s typically easy for companies to plausibly deny knowledge of biases in their hiring tools. “The law is behind the technology, to some extent,” Dattner said.
While Bersin said new laws are likely to raise the bar for employers and AI vendors to more carefully account for bias in their hiring tools, he echoed that enforcing such legislation may be tricky. “If people get sued, and they have to figure out why the AI made a particular decision, that’s going to be chaos.”