New York City looks to regulate AI hiring

City to mandate bias audits, allow candidates to choose in-person recruitment

“Having more transparency and rigour is essential in something as important as hiring.”

So says Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, in response to New York City’s new rules to combat the potential for bias in recruitment using artificial intelligence (AI).

Expected to take effect in January 2023, the legislation would require that a bias audit be conducted on an “automated employment decision tool” prior to its use. The law defines such a tool as any computational process, derived from machine learning, statistical modelling, data analytics or artificial intelligence, “that issues simplified output, including a score, classification or recommendation, that is used to substantially assist or replace discretionary decision-making for making employment decisions that impact natural persons.”

In addition, any employer or employment agency using the tool to screen candidates must notify the candidates that the tool is being used — and how it’s being used — and give people “an alternative selection process or accommodation.”

Violations of the bill’s provisions would be subject to civil penalties starting at US$500.

Transparency and rigour

Having transparency and rigour, especially for high-stakes situations that have significant impact on people's lives, makes sense, says Gupta.

And the pandemic has led to greater churn, putting added pressure on hiring resources, he says.

“People are short-staffed to process those applications, [so] it's only natural to turn to tools that can help to automate the process to a certain degree. And it does have significant implications in that entire hiring process.”

New York City’s legislation is a good development, says Julia Stoyanovich, associate professor in the department of computer science and engineering at the Tandon School of Engineering and the Center for Data Science at New York University.

“When we make decisions that are critical for people's lives and livelihoods, and hiring is one such domain… and yet, we have no concrete information about what systems are being used, how their performance is measured, and really what their impacts are, I think that any attempt to shed light on what's going on in the use of AI in hiring certainly is a good one.”

However, a degree of path determinism sets in once policymakers head down a particular route, so it’s important to be cognizant of what assumptions are being made with new AI rules, says Gupta.

“Without that, you end up either under-regulating, over-regulating or wrongly regulating things that don't really matter.”

Getting audits right

It’s always better to have something than nothing, but who is conducting the audit, and their financial motives, can make a big difference, says Gupta.

“Unless you have an external regulatory body that controls or mandates certain standards and requirements on the part of the organization conducting the audit, I would say they're just as much market players as anybody else in terms of achieving their business goals, which is to drum up as much interest and as much business as possible.”

Second, the value of an audit also depends on the degree of access the auditors have to the system, he says.

“If you're doing what you can call a black-box audit [and you] don't have access to the internal functioning of the system, and you're just looking at inputs and outputs and trying to draw insight from that, the results are limited in terms of what it proves.”
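The legislation does not spell out what such an audit must measure, but one common input/output check is to compare selection rates across groups and flag large gaps, for example using the “four-fifths” rule of thumb from US employment guidelines. The sketch below is a hypothetical illustration of that idea, not the procedure the bill mandates; the data, group labels and function names are all assumptions.

```python
# Minimal sketch of a black-box bias audit: without access to the model's
# internals, an auditor can only compare inputs (candidates and their group
# labels) against outputs (screen-in decisions). All data is hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: sel / total for group, (sel, total) in counts.items()}

def adverse_impact_ratio(rates):
    """Lowest group selection rate divided by the highest.
    The EEOC's 'four-fifths' rule of thumb treats ratios below 0.8
    as evidence of possible adverse impact."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes produced by an automated tool
outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(outcomes)
print(rates)                         # per-group rates: ~0.67 vs ~0.33
print(adverse_impact_ratio(rates))   # 0.5 -- below 0.8, worth investigating
```

As Gupta notes, a check like this can flag disparate outcomes, but it says nothing about why the system produced them.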

Several people and organizations have voiced concerns about how the bias-auditing component of the bill is shaping up, and Stoyanovich shares some of them.

In particular, the only protected groups accounted for are those based on gender and race, she says.

“This is very insufficient. In the context of hiring, in particular, we worry very much about age discrimination, about discrimination based on disability status as well as about intersectional discrimination — the kind of disadvantage that people experience who are, for example, both female and black, or both elderly and disabled. So this is not something that the bill even attempts to tackle.”

“When you produce a machine and the vendor tells you that the machine is not going to be using an individual[’s] gender or race or disability status, but instead it’s using some other signal in the data that reveals this membership in protected and disadvantaged groups, you're not going to have any way to find out that there is actually bias in the decisions that are taking place,” says Stoyanovich.
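One way auditors probe for the kind of leakage Stoyanovich describes is to test whether the remaining features can predict the protected attribute at all. The Python sketch below illustrates the idea on synthetic data; the feature names, the logistic-regression probe and the numbers are assumptions for illustration, not anything specified in the bill or by the researchers quoted.

```python
# Sketch of a proxy-variable probe: if the non-protected features can predict
# a protected attribute well above chance, the model effectively has access
# to that attribute even when it is never an explicit input. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)        # hidden group membership (0 or 1)

# "employment_gap_years" is built to correlate with the protected attribute,
# playing the role of a proxy; "skills_score" is unrelated noise.
employment_gap_years = protected * 2 + rng.normal(0, 1, n)
skills_score = rng.normal(70, 10, n)
X = np.column_stack([employment_gap_years, skills_score])

# Train a simple probe to predict the protected attribute from the features.
probe = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(probe, X, protected, cv=5).mean()
print(f"protected attribute recoverable from features: accuracy = {accuracy:.2f}")
```

If the probe scores well above the 0.5 accuracy of random guessing, simply withholding the protected attribute from the hiring model offers little protection, which is exactly Stoyanovich’s point.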

The legislation is also not specific about how the audits will be conducted, by whom, and what the criteria will be, she says.

“I think that a lot of work will need to happen here to really make sure that these bias audits are useful, rather than being a rubber stamp that somehow encourages companies to say, ‘Oh, we've been audited and now you're free to use our software.’”

Humans versus machines

But the disclosure component of the bill, allowing people to choose an alternative approach, is particularly exciting, says Stoyanovich.

“Of course, it’s going to be up to us, all collectively, to figure out how to really operationalize this, how to make [sure] the kinds of descriptions that are given to people are meaningful and actionable and useful. But I think that is absolutely a very useful first step.”

While the option of picking non-AI recruitment sounds promising, it’s not necessarily true that humans are better, says Gupta.

“Using AI in hiring processes is just a quantified reflection of existing hiring practices… If the AI system is biased and it was trained on hiring decisions and data from that firm, that is a direct reflection of how the hiring managers at that organization behaved in the past, so the distinction between a human and a machine is actually not that significant,” he says.

“I would say that we should strive to fix the more systemic issue, which is to root out bias, both at the human and the machine level, if we want to see meaningful difference.”

Is bias-free hiring possible?

While AI companies may argue that these machines are free of bias, Stoyanovich doesn’t buy that argument.

“Humans can be biased, implicitly or explicitly, but this is on a small scale — every individual, every particular hiring manager, if they are biased, they are going to be impacting the people they screen,” she says.

But AI screening is being done on a very large scale. Companies such as Delta, for example, are using the tools to go through thousands of people every year, says Stoyanovich.

“Everybody's going to be subjected to the same non-random and potentially biased screening processes. And so I think that, really, this promise of machines solving the issues of discrimination in the workplace, and in hiring in particular, that's just not realistic.”
