EU passes AI Act and Canada’s AIDA is close behind – is HR ready?

AI rules represent 'material shift in risk landscape,' employment lawyer warns

The European Parliament has passed its long-awaited AI Act, the world’s first comprehensive set of laws from a major regulator on the use of artificial intelligence (AI). Once the act is officially confirmed as law, it will regulate the use of AI technology across the EU’s 27 member states.

The legislation will also apply to global entities that operate in the EU, which means Canadian companies with large workforces should take note of it, along with upcoming Canadian AI legislation, says Kirsten Thompson, partner and head of privacy and cybersecurity at Dentons.

“The act has extraterritorial scope and will apply to certain organizations operating in the EU or providing AI system products or services to users in the EU, even if the organization is based outside the EU. The territorial scope is much broader than under the GDPR,” said Thompson, citing the General Data Protection Regulation.

“Once an organization understands its role, it then needs to assess what the system does in order to determine if it is prohibited, high-risk, or general purpose. An organization’s obligations will flow from there.”

Canada’s AIDA similar to EU’s AI Act

Like Canada’s Artificial Intelligence and Data Act (AIDA), the EU act sets out a category of high-risk AI systems, which includes common employment uses already being deployed by organizations in Canada.

The EU’s AI Act bans outright the use of emotion recognition in workplaces and schools, biometric categorization, social scoring, and AI that manipulates human behaviour or exploits vulnerabilities.

Organizations that fall under the EU AI Act will need to evaluate the AI systems they use to understand their responsibilities and obligations under the law.

It’s a strict, risk-focused approach to the law, Thompson says, while Canada’s approach will be more “principles-based,” allowing for greater flexibility in interpretation.

“For the most part, conceptually, they're aligned, but AIDA really is just a framework,” she says. “I would hope that when it comes time to draft the regulations, there'll be a proactive effort to harmonize them with global norms that have evolved by then … the EU is famous for having very prescriptive, very onerous obligations, and I know Canada has said on a number of occasions, including with its privacy legislation, that that is not the way to go.”

‘Cottage industry’ of AIDA compliance

AIDA’s proposed regulations will largely apply to what it calls “high-impact AI systems,” essentially the same concept as the EU AI Act’s “high-risk” category. Canada’s new rules have not yet been passed, but employers should start preparing to meet compliance requirements now, Thompson says.

A major concern for HR is the growing use of AI technology to aid in recruitment, hiring, and other forms of employee screening, she says. These areas all fall under AIDA’s high-impact category because of the risk of bias and other potential privacy or human rights violations that can be built into the systems without the knowledge of employers, or even of the developers themselves.

“AIDA is pretty bare bones, it's more of a framework piece of legislation, and a lot of the meat of it was left to future pending regulations. The law hasn’t been passed yet, so I imagine that regulations would take even longer,” says Thompson.

“We're already seeing sort of a cottage industry of people who purport to audit for bias. With any new industry, I expect some of them are very good, and some of them are probably very awful.”

Asking the right questions about AI to lower risks

A large part of the problem is that employers don’t know the right questions to ask, she says. That gap exposes them to a double layer of risk: with the EU AI Act in force and Canadian AIDA legislation soon to follow, employers can potentially violate not only AI regulations but privacy rules as well.

The best course of action, she says, is to consult with an experienced employment lawyer who knows the correct language around AI and the potential risks around bias, rights violations, and privacy law.

“Definitely in the HR context, you have dual tracks of potential risk here, you've got one under the pending AI act; you’ll probably have another one under privacy laws. The privacy laws that have passed in Quebec and will soon pass federally have very significant fines and penalties attached and applied to violations. So this, for many companies, is a material shift in the risk landscape. So they need to pay attention to that.”