over 2 years ago by Lauren Fonseca

Can AI truly defeat recruiting bias?


An age-old problem: for as long as recruiting has existed, recruiting bias has existed. Historically, in the tech industry, white and Asian men have held most of the jobs, with minority and female applicants being held back. In fact, a 2018 study from the Kapor Center, a California-based initiative that aims to make the technology space in the United States more diverse and inclusive, found that while 21% of computer science degree earners in the US are Black or Latinx, they make up only 10% of the tech workforce.

As governments, start-ups and corporations actively step up their efforts to diversify the workforce, new tools utilising technology such as Artificial Intelligence are appearing across the marketplace to make hiring easier and fairer, with the goal of eradicating recruiting bias moving forward.

But how effective is it?

Last year, Amazon famously experimented with an in-house project which utilised AI in recruiting. This tool purportedly selected top candidates but was shelved after executives discovered that female candidates were being penalised. According to Reuters, the AI system had taught itself based on Amazon’s past hiring patterns that the most successful candidates were male.

In its 2018 report, AI Now warned, in relation to concerns over bias and discrimination, that “the gap between those who develop and profit from AI — and those most likely to suffer the consequences of its negative effects — is growing larger, not smaller.”

It seems that AI still struggles to identify context, something which is crucial in recruitment. For example, an AI-driven job assessment for a babysitting position showed the tool struggling to differentiate between actual bullying behaviour and a joke or a movie quote.

Self-learning can help to overcome this, but the data AI must learn from is often human-generated, and humans have centuries of baked-in bias to contend with. With most tech jobs currently held by men, the technologists developing and training these tools are also overwhelmingly male. Critics have theorised that this could cause AI tools to learn recruiting bias themselves, inadvertently deeming male applicants more suitable than female ones.

LinkedIn, arguably the world’s biggest hiring tool, recently realised that its own algorithm was at risk of developing a hiring bias, and last year rolled out a change to the algorithm in an attempt to reduce this possibility.

John Jersin, VP of product, talent solutions and careers at LinkedIn, speaking at the Forbes’ CIO Summit in California, said of the company’s software, “It would be possible to create biased systems. We wanted to make sure that that bias didn’t get out of control.” He believes that LinkedIn “can use techniques to influence people to reduce bias in outcomes.”

LinkedIn has started by tackling gender diversity, “because it’s the easiest one to get a significant dataset on. It’s a 50/50 split just about.”

Nicole Sanchez, founder of Vaya Consulting and also a speaker at the summit, however, warns that this may cause problems with the workforce of the future: “Fifty percent of those who identify as Gen-Z living on the coasts thought about their gender and identify somewhere on a nonbinary spectrum of gender. As much as we want to drive for something simple like gender, for the next set of workers, gender is decreasingly meaningful.”

“What’s happening is folks are using LinkedIn, looking at a picture and guessing,” Sanchez said. “There are laws that say you can’t ask someone their race and gender on the way in, but the proxy we’re currently using solidifies our old broken system of looking at each other and assuming. Increasingly, I’m going to be wrong as Gen-Z comes into the workforce.”

Sanchez believes that AI and other emerging technologies cannot, on their own, solve recruiting bias.

Sanchez notes, “As much as we want to nail down data, because we’re about data-driven solutions, our data’s wrong. Some of the work that we have to do has to be shepherded in by humans.”