Legislation coming out of New York, Quebec and Ottawa suggests employers should prepare for greater oversight in using automated tools
On July 5, 2023, a new law came into effect in New York City, regulating employers’ use of “automated employment decision tools.”
Local Law 144 requires that candidates or employees who reside in the city be notified about the use of such tools in the assessment or evaluation for hire or promotion, and be notified about the job qualifications and characteristics used by the tool.
Employers must also conduct a bias audit on the tool prior to its use, and violations of the law’s provisions are subject to a civil penalty.
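To make the audit concrete: a central figure under the law’s implementing rules is the “impact ratio,” each category’s selection rate divided by the selection rate of the most-selected category. Below is a minimal sketch in Python using hypothetical group labels and decision data; the actual rules prescribe the demographic categories and methodology an audit must use.

```python
from collections import Counter

# Hypothetical audit data: (category, was_selected) pairs drawn from the
# tool's historical decisions. Groups and counts are illustrative only.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Selection rate per category: selected / total assessed in that category.
totals, selected = Counter(), Counter()
for category, was_selected in decisions:
    totals[category] += 1
    if was_selected:
        selected[category] += 1
rates = {c: selected[c] / totals[c] for c in totals}

# Impact ratio per category: its selection rate divided by the highest
# selection rate of any category (1.0 for the most-selected group).
best = max(rates.values())
impact_ratios = {c: rates[c] / best for c in rates}

print(rates)          # {'group_a': 0.5, 'group_b': 0.25}
print(impact_ratios)  # {'group_a': 1.0, 'group_b': 0.5}
```

A low impact ratio for a category is a flag for further scrutiny; the law itself, not this sketch, determines what an audit must report.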
While that is a development south of the border, similar regulation may well be on the horizon in Canada.
In fact, Quebec has new rules around “automated processing” that come into effect in September, while the federal government is advancing proposed legislation in the form of the Artificial Intelligence and Data Act (AIDA) and the Consumer Privacy Protection Act (CPPA).
In addition, of course, privacy laws and human rights laws already apply — so the time for HR to prepare is now, say experts.
Quebec’s changes around ‘automated processing’
As of Sept. 22, 2023, employers in Quebec will face new rules thanks to the passing of Bill 64, An Act to modernize legislative provisions as regards the protection of personal information.
This states that any employer that “uses personal information to render a decision based exclusively on an automated processing of such information must, at the time of or before the decision, inform the person concerned accordingly.”
The individual must be told about the personal information used to render the decision; the reasons and the principal factors and parameters that led to the decision; and the right of the individual to have the personal information used to render the decision corrected.
Quebec's law is about automated decision-making, where there is no human involved in the process, says Robbie Grant, an associate in privacy and data protection at McMillan in Toronto.
“It's kind of tricky. If you're not informing them of the decision then, arguably, you're not required to give them any additional information. But if you do tell them of the decision — ‘Hey, we're not hiring you’ — you also have to say, ‘And it was because of automated processing, and here's all this other information about that.’”
But there’s still somewhat of an open question as to whether there can be an implied duty to inform, he says, based on a separate requirement that the person must be given an opportunity to submit representations to a person in a position to review the decision.
“They need to be given a chance to submit observations to someone. And if they don't know of the decision, and they don't know that the automated tool was used in the first place, how can we say that they've been given a chance to submit representation? So I think there's still room for the Quebec regulator to create more obligations here,” says Grant.
It’s also more likely that employers and recruiters are going to keep a human in the loop, he says, “because that's the other way you can avoid these disclosure obligations — if there's a human with a meaningful role in checking the AI decisions, then... these disclosure obligations will not apply to you.”
Federal focus on AI: CPPA
Additional requirements for automated tools around employment are also set out in a draft of the Consumer Privacy Protection Act (CPPA), and the proposed Artificial Intelligence and Data Act (AIDA), both part of Bill C-27, according to Grant.
The CPPA would apply only to workers in a federally regulated workplace, or to independent contractors in provinces without substantially similar privacy legislation, he says.
At this point, the act defines an “automated decision system” as “any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique.”
The CPPA would require organizations to make readily available information about the organization’s use of automated decision systems “to make predictions, recommendations or decisions about individuals that could have a significant impact on them.”
Those laws are broad because they target AI systems that influence the hiring decision or make predictions or recommendations, says Grant.
“But then they're narrow because they only apply to decisions that make a ‘significant impact’ on a person.”
Unlike the Quebec act, the CPPA also puts the responsibility on applicants or employees to request further information about an employer’s use of automated decision systems.
However, there are questions about how the requirements will apply to the employment relationship, and to employers, he says.
“Typically, federal privacy law doesn't apply to provincially regulated employees. So it's really important to get proper legal advice in this space, because you don't want to miss a law that you need to comply with; and, conversely, you might not want to be complying with all the laws if some of them don't even apply to you.”
Federal focus on AI: AIDA
The regulatory system proposed in the AIDA is meant to “guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses,” according to Ottawa.
Building on existing Canadian consumer protection and human rights law, it would ensure that “high-impact AI systems” meet Canada’s expectations around safety and human rights.
That includes, for example, screening systems that “are intended to make decisions, recommendations, or predictions for purposes relating to access to services, such as credit, or employment.”
With AIDA, Ottawa has indicated that it plans to regulate AI based on the impact the tech will have, says Susie Lindsay, counsel at the Law Commission of Ontario.
“If you're a high-impact system, you will be regulated; if you're not a high-impact system, there's not a lot you have to do.”
But what’s meant by “high impact” will be determined in the regulations, she says.
“If you're high impact, you will have certain mitigation obligations; they could make an audit part of that mitigation obligation — we just don't know.”
The government said it will consider several factors in determining which AI systems would be considered high impact, such as “a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences,” the scale of use, and “the extent to which for practical or legal reasons it is not reasonably possible to opt out from that system.”
AIDA is also trying to align with the Human Rights Code, factoring in bias and discrimination, says Lindsay. But unlike New York’s legislation, it imposes no ex ante, proactive requirements on entities that want to roll out AI right now, she says.
“We don't have a law saying that you have to go and audit your system before you deploy it.”
Human rights considerations with AI systems
While newer legislation looms on the horizon, employers and HR should be aware that privacy and human rights legislation already apply when it comes to AI systems, says Lindsay.
“AI systems are designed to be biased — it's not a flaw, it's how they are. It becomes an issue with human rights when they are biased or discriminatory on the basis of protected grounds. And the concern is that these systems, the way they work, there's a lack of transparency, there can be a lack of disclosure, we don't necessarily know where or when they're being used.
“All of that means that discrimination can be embedded and hidden in the system without people knowing that it's there. That's where the concern comes.”
AI systems can be biased because of the data that’s put into them, as seen in 2018 when Amazon reportedly scrapped a recruiting tool that had taught itself to downgrade résumés from women, or, intentionally or unintentionally, because of biased metrics. There’s also the issue of indirect discrimination: a seemingly neutral new rule may have a bigger impact on some people based on their protected grounds, says Lindsay.
“As an employer, you need to make sure if you're relying on an AI system, that it is not discriminating based on protected grounds,” she says.
“As a result, any company that wants to use an AI system in Ontario and Canada needs to be assessing it before they start deploying it.”
That means running an audit to understand the outcomes, says Lindsay.
“And they need to be looking at the metrics of fairness and how to assess what the outcomes are on the system, to ensure that they are in compliance with our existing laws.”
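As one illustration of what such a fairness metric can look like, the sketch below checks whether candidates an employer later judged qualified were recommended by the tool at similar rates across groups, a measure sometimes called equal opportunity. The data, group labels and the notion of “qualified” are all hypothetical, and no Canadian law mandates this particular metric.

```python
# Hypothetical review of a tool's past recommendations against actual
# outcomes, checking one common fairness metric: whether qualified
# candidates in each group were recommended at similar rates.
records = [
    # (group, recommended_by_tool, later_judged_qualified)
    ("group_a", True, True), ("group_a", False, True),
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", False, True),
    ("group_b", False, True), ("group_b", False, False),
]

def recommendation_rate_for_qualified(group: str) -> float:
    """Share of qualified candidates in `group` the tool recommended."""
    recommended_flags = [r for g, r, q in records if g == group and q]
    return sum(recommended_flags) / len(recommended_flags)

for group in ("group_a", "group_b"):
    print(group, round(recommendation_rate_for_qualified(group), 2))
# group_a 0.5
# group_b 0.33
```

A large gap between groups’ rates is a signal to investigate further; which gaps matter legally depends on the human rights law that applies.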
Plus, employers and HR can't outsource their human rights obligations, she says.
“If you go and hire an AI system from an American company that hasn't considered this or their laws are different, sure, you could add them to a claim, but you can't say, ‘Well, it's not our fault’... a Human Rights Tribunal is going to hold the employer accountable. They're the ones who are ultimately responsible for it.”
There are risks for employers, especially with human rights-related complaints, says Grant.
“The hard part for the employees or applicants is that it can be really hard to get any information whatsoever about how these tools work… So it's really not transparent. And in one sense, that reduces the risk of a complaint, because people don't know about it. But that is going to be going away, as we get more transparency requirements online. So as we have more requirements for explainability, you're going to have to present to applicants, depending on what laws apply, the factors and parameters that led to the decision,” he says.
“And that means a little bit more insight into how these tools are working, which might reveal some more bias. So I'd say there are human rights concerns now.”
Takeaways for HR in using automated systems
The use of automated processing or AI in hiring has been around for a long time, and it’s not going anywhere. This is a huge business and highly valuable to many employers.
Take a company like the global conglomerate Unilever, for example, which gets thousands of CVs in a week, says Pamela Lirio, associate professor, HRM, at the Université de Montréal.
“It doesn't matter if you're that big of a company with deeper pockets — you can't hire enough HR staff to manage that. So these systems help them filter that and get people through.”
But it’s still important to be responsible, she says.
“I advocate the responsible use of AI, which includes ethical thinking and frameworks around the technology when it's built, but also in society, having IT in organizations, having policies about how it's implemented, and in greater society having regulation and potentially laws around it, as well.”
If you're going to implement an AI tool, you need to do your homework, and make sure that it's audited, says Grant, “particularly if you have a complicated sort of black-box AI system where you don't know the exact inner workings — the only way to really detect bias is to look at all of the decisions that it's made, to really do a full-scale review.”
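A complementary way to probe an opaque tool, alongside a full review of its decisions, is a matched-pair test: score the same application twice, varying only a detail that can proxy a protected ground, and look for systematic gaps. A minimal sketch, where `score` is a hypothetical stand-in for a vendor’s black-box model:

```python
# Matched-pair probe of an opaque scoring tool. `score` is a hypothetical
# placeholder for the vendor's black-box model; in practice you would call
# the real system through whatever interface it exposes.
def score(resume: dict) -> float:
    # Placeholder logic so the sketch runs; the point is the probe, not the model.
    return min(0.5 + 0.1 * resume["years_experience"], 1.0)

def matched_pair_gap(base_resume: dict, field: str, value_a, value_b) -> float:
    """Score the same resume twice, varying only one field, and return the gap."""
    resume_a = {**base_resume, field: value_a}
    resume_b = {**base_resume, field: value_b}
    return score(resume_a) - score(resume_b)

resume = {"years_experience": 3, "first_name": "Alex"}
# Vary a field that can proxy a protected ground, such as a name associated
# with a particular gender or ethnicity, and look for systematic gaps.
gap = matched_pair_gap(resume, "first_name", "Emily", "Lakisha")
print(gap)  # 0.0 here, since the placeholder model ignores names
```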
With proper vetting, it’s about taking an inventory of what AI tools you're using; what information is processed and how; and a proper analysis of your legal obligations, he says.
“Do you need to get consent? If the data is being used to assess the candidate, well, maybe they would expect that. But if it's being used to train the AI model, maybe you need more consent? Or you need actual consent for that, express consent for that? So there is a need to do a privacy impact assessment on top of the assessment of the AI tool for bias.”
Due diligence around the tools should also look at: “Are they explainable? How do they approach the problem of bias? How are the systems trained? How are they monitored? Are they developed with reference to any standards or codes?” says Grant.
“There are a lot of places where this can go awry.”
It’s also really important to keep people in the HR process, in the recruiting process, says Lirio.
“The final decision I'm recommending, and most of the HR leaders I talked to, they say the final decision does rest with them, with the people — that's the best way to practice HR because there's certain things that these algorithms and platforms, or the artificial intelligence expression of that, is not able to replicate, as well as the human brain that hears and thinks and feels everything.”