Encourages employers to sign up for manifesto promoting transparency in hiring
In an effort to eliminate prejudice from the candidate-interview process, a remote-jobs website is looking to make the procedure fairer.
San Francisco-based job-finding and recruitment network Torre is looking for employers to sign its Frank Artificial Intelligence in Recruiting (FAIR) manifesto, which includes five key components: disclose when you’re using AI; make the factors transparent; disclose rankings to candidates; detect bias; and reduce discrimination systematically.
The initiative was launched after the site found anomalies in its own work when employing AI as a recruitment tool.
“Something we started to notice during research and development is that there is a huge risk for significant bias when using artificial intelligence to expedite the process of finding the best matching candidates for professional opportunities,” says Andrés Cajiao, cofounder and chief growth officer at Torre.
“We thought about the importance of folks trying to put into writing some best practices, while we’ve tried to get some of our colleagues and some of our friends also developing technology in the recruiting industry on board.”
It was a famous retailer’s recent experience that prompted Torre, in part, to create the FAIR initiative.
“Amazon was developing AI for automating the process of screening and ranking candidates and they ended up stopping altogether because they started to notice that their own AI was being biased towards certain candidates — white males. So this is something that is a very dangerous precedent.”
But why does AI have an inherent bias? Because it is created by humans, it’s bound to reproduce biased behaviour, he says.
“It involves algorithms that are built by humans, and they have the process of replicating humans. Because there was bias naturally in the process that [people] were doing — they were not aware, of course — what ended up happening is that machine learning learned that, out of these candidates, ‘These are the factors that make a candidate successful in the hiring process that is performed manually by a human being, these are the factors I should repeat that I did for success.’ And that caused Amazon’s AI to start selecting or giving more priority to white males.”
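The mechanism Cajiao describes, a model inferring “success factors” from past human hiring decisions and replicating them, can be illustrated with a toy sketch. The data, names, and scoring method below are invented for illustration; real screening systems are far more complex, but the failure mode is the same:

```python
# Hypothetical illustration: a naive "model" trained on biased historical
# hiring decisions learns to reproduce the bias. All data is invented.
from collections import defaultdict

# Each record: (gender, years_experience, hired) -- past human decisions.
# The history is skewed: at comparable experience, men were hired more often.
history = [
    ("m", 5, 1), ("m", 3, 1), ("m", 2, 1), ("m", 4, 0),
    ("f", 5, 0), ("f", 3, 0), ("f", 4, 1), ("f", 6, 0),
]

# "Training" here is just tallying the historical hire rate per gender,
# a stand-in for the correlations a real model would extract from
# biased labels.
counts = defaultdict(lambda: [0, 0])  # gender -> [hires, total]
for gender, _, hired in history:
    counts[gender][0] += hired
    counts[gender][1] += 1

def hire_score(gender: str) -> float:
    """Score a candidate by the historical hire rate for their group."""
    hires, total = counts[gender]
    return hires / total

# The learned scores replicate the historical skew, not true ability:
print(hire_score("m"))  # 0.75
print(hire_score("f"))  # 0.25
```

The point of the sketch is that nothing in the code is explicitly discriminatory; the bias enters entirely through the labels, which is why the FAIR manifesto emphasizes disclosing the factors a model uses and auditing its outputs.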
The recruiter is hoping to “bring more transparency to the process of recruiting our candidates,” says Cajiao, in part by disclosing why a candidate may have been rejected.
“[If] we disclose when we’re using AI, or the factors that we consider to rank a candidate, or to determine who is the best matching candidate, or the rankings of those candidates, then it’s going to be way easier to detect when something goes wrong for the company and the candidate.”
“Companies for a very long time — some for legal reasons, some [because of] employer branding — they have not exposed [the use of AI] to candidates but we all agree that it’s healthy for everyone to see that information, to have that information. Then we may be able to detect those biases, reduce that discrimination systematically and provide candidates the feedback that should help them improve their process of finding a job or improve their own professional experience and profile,” he says.
The company is hoping that other recruiters join this effort, says Cajiao.
“It’s a call for action that we want developers of artificial intelligence in recruiting to join us based on the manifesto, but we want to make a call for others that may be interested in engaging with us in order to develop these areas. We believe it’s important that we partner with other companies developing AI to follow these principles that [aim] to make artificial intelligence [in recruiting] more frank and more fair.”
AI can be an indispensable tool for recruitment, says one Canadian expert.