Despite the promise that AI hiring tools would streamline recruiting for a growing pool of applicants, the technology meant to open doors for a wider array of prospective employees may actually be perpetuating decades-old patterns of discrimination.
“You kind of just get this positive feedback loop of, we’re training biased models on more and more biased data,” Kyra Wilson, a doctoral student at the University of Washington Information School and the study’s lead author, told Fortune. “We don’t really know kind of where the upper limit of that is yet, of how bad it is going to get before these models just stop working altogether.”
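To see why that loop compounds rather than self-corrects, consider a deliberately simplified simulation (our illustration, not a model from the study): a screener starts with a slight preference for one group, its hires become the next model’s training data, and each retraining nudges the preference further toward the skew already present in that data.

```python
# Toy simulation (our illustration, not the UW study's model) of the
# feedback loop Wilson describes: a slightly biased screener produces
# skewed hires, those hires train the next model, and retraining pushes
# the bias further toward the skew in its own training data.

def feedback_loop(rounds: int = 8, bias: float = 0.55, nudge: float = 0.5) -> None:
    """`bias` is the probability a group-A resume is approved; group B
    is approved at 1 - bias. With a 50/50 applicant pool, group A's share
    of hires equals `bias`, so each retraining amplifies any tilt."""
    for r in range(1, rounds + 1):
        share_a = bias  # group A's share of hires this round (50/50 pool)
        print(f"round {r}: group A receives {share_a:.1%} of hires")
        bias = min(1.0, bias + nudge * (share_a - 0.5))  # "retrain" on skewed hires

feedback_loop()
```

In this toy setup, an initial 55/45 tilt reaches 100% within a handful of retraining cycles. Real systems are far messier, but the direction of the drift is the point.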
“Workday’s AI recruiting tools do not make hiring decisions, and our customers maintain full control and human oversight of their hiring process,” the company said. “Our AI capabilities look only at the qualifications listed in a candidate’s job application and compare them with the qualifications the employer has identified as needed for the job. They are not trained to use—or even identify—protected characteristics like race, age, or disability.”
“We understand the importance of responsible AI use, and follow robust guidelines and review processes to ensure we build AI integrations thoughtfully and fairly,” a spokesperson told Fortune in a statement.
“If you don’t have information assurance around the data that you’re training the AI on, and you’re not checking to make sure that the AI doesn’t go off the rails and start hallucinating, doing weird things along the way, you’re going to get weird stuff going on,” she told Fortune. “It’s just the nature of the beast.”
The rapid scaling of AI in the workplace can fan these flames of discrimination, according to Victor Schwartz, associate director of technical product management at remote-work job search platform Bold.
“It’s a lot easier to build a fair AI system and then scale it to the equivalent work of 1,000 HR people, than it is to train 1,000 HR people to be fair,” Schwartz told Fortune. “Then again, it’s a lot easier to make it very discriminatory, than it is to train 1,000 people to be discriminatory.”
“You’re flattening the natural curve that you would get just across a large number of people,” he added. “So there’s an opportunity there. There’s also a risk.”
While employees are protected from workplace discrimination under Title VII of the Civil Rights Act of 1964, which the Equal Employment Opportunity Commission enforces, “there aren’t really any formal regulations about employment discrimination in AI,” said law professor Kim.
Existing law prohibits both intentional discrimination and disparate impact discrimination, the latter referring to discrimination that results from a seemingly neutral policy, even when none is intended.
“If an employer builds an AI tool and has no intent to discriminate, but it turns out that overwhelmingly the applicants that are screened out of the pool are over the age of 40, that would be something that has a disparate impact on older workers,” Kim said.
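One common yardstick for the pattern Kim describes is the EEOC’s “four-fifths” rule of thumb from its Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate falls below 80% of the most-favored group’s rate, the screening tool merits closer scrutiny. A minimal sketch, using hypothetical numbers that mirror Kim’s over-40 example:

```python
# EEOC "four-fifths" rule of thumb for spotting disparate impact.
# All counts are hypothetical, chosen to mirror Kim's over-40 example.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rates = {
    "under 40": selection_rate(selected=300, applicants=1000),    # 30%
    "40 and over": selection_rate(selected=150, applicants=1000), # 15%
}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    verdict = "potential disparate impact" if impact_ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {verdict}")
```

Here the older group’s impact ratio of 0.50 falls well below the 0.8 threshold, the kind of result that would flag an automated screener for review.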
“What it means is agencies like the EEOC will not be pursuing or trying to pursue cases that would involve disparate impact, or trying to understand how these technologies might be having a disparate impact,” Kim said. “They are really pulling back from that effort to understand and to try to educate employers about these risks.”
The White House did not immediately respond to Fortune’s request for comment.
Melanie Ronen, an employment lawyer and partner at Stradley Ronon Stevens & Young, LLP, told Fortune other state and local laws have focused on increasing transparency on when AI is being used in the hiring process, “including the opportunity [for prospective employees] to opt out of the use of AI in certain circumstances.”
The firms behind AI hiring tools and workplace assessments, such as PDRI and Bold, say they have taken it upon themselves to mitigate bias in the technology, with PDRI CEO Pulakos advocating for human raters to evaluate AI tools ahead of implementation.
“By removing that barrier to entry with these auto-apply tools, or expert-apply tools, we’re able to kind of level the playing field a little bit,” Schwartz said.