As the world increasingly moves towards automation across all sectors, recruitment too has seen a rise in automated tools. Tools that claim to make successful hires for companies, powered by artificial intelligence and machine learning, are in progressively higher demand. With this progression, however, comes the danger that these tools inherit human biases from the people who developed them – and thus have the potential to discriminate against candidates.
The potential for bias in AI-driven recruitment has been well documented over the years. Amazon famously experimented with an in-house AI recruiting project, reported on by Reuters in 2018. The tool purportedly selected top candidates, but was shelved after executives discovered that it penalised female candidates. According to Reuters, the AI system had taught itself, based on Amazon’s past hiring patterns, that the most successful candidates were male.
With AI technology becoming commonplace in workplaces, and with the danger that AI hiring tools may exclude candidates from gender and racial minorities, it is no surprise that legislation has begun to be introduced to prevent this type of recruiting bias.
In November 2021, the New York City Council passed a bill to address this very issue. Introduced in early 2020 and slated to take effect in January 2023, the bill requires AI recruitment tools to undergo “bias audits”. Specifically, it covers tools that automate employment decisions such as hiring or promotion. It is currently unclear whether the bill will also apply to more passive recruitment tools – for instance, LinkedIn’s ‘suggested jobs’ feature, which uses LinkedIn data to surface vacancies that the tool judges appropriate for a given user.
As well as mandating a “bias audit”, the new bill requires that any candidates or employees evaluated by these tools be notified of their use. Candidates and employees must also be told which job qualifications and characteristics the tool uses to determine an outcome. The bill allows applicants to request an alternative review process, including the option of having a human review their application. Employers or employment agencies that fail the bias audits will be subject to a fine of up to $1,500 per violation.
The “bias audit” in NYC’s new bill is specifically defined as an evaluation of whether the audited tool has a disparate, negative impact on applicants based on their sex, race, or ethnicity. The bill does not require the audit to cover discrimination on other grounds, such as disability or age.
Vendors of AI-based tools have voiced very little opposition to the bill. This may be because the mandatory bias audits are due to be administered by the vendors themselves, making it far less likely that their tools will fail. Vendors are also not liable for fines – those fall upon the employer or agency using the tool.
So, will the bill make any difference?
Perhaps, although only time will tell its true impact. Certainly, if vendors of AI recruiting tools have not already put a strong focus on reducing bias in their technology, the legal requirement in this bill to do so will inevitably have a ripple effect far beyond hiring in New York City.
This is not the first piece of US legislation around AI hiring. In 2019, Illinois passed a law on transparency, consent, and data usage in interviews that use AI technology. In 2020, Maryland enacted a law requiring notice and consent before facial recognition technology is used in job interviews. More recently, a bill addressing discrimination in AI-based recruiting more broadly was introduced in Washington DC. The NYC bill is, thus far, the most expansive, but as the use of AI technology grows, similar legislation is expected to appear in other states.
There has also been talk of change at the federal level. In October 2021, the Equal Employment Opportunity Commission announced the launch of an initiative intended to ensure that AI technology in hiring is “used fairly, consistent with federal equal opportunity laws.”
NYC has allowed vendors of AI recruiting tools a year to review their tools and do all they can to reduce bias and the potential for discrimination. Between this bill and the likelihood that more will follow across the US, it seems inevitable that vendors will have to take AI bias seriously, if they do not already. With that, we can only hope this ultimately ushers in widespread change across the sector and moves the technology as far as possible from Amazon’s infamous AI recruitment failure.