New guidance from Quebec signals future direction for AI

Recommendations from CAI cover AI disclosure, fully automated decisions, algorithmic impact assessments and high-risk use

“These are not meant to be necessarily dissuasive or discouraging for employers that are considering these types of technologies.”

So says Alexandra Quigley, in discussing a recent submission from the Commission d’accès à l’information (CAI) on best practices for organizations using AI that could apply not only in Quebec, but “elsewhere in Canada or elsewhere in the world,” she says.

In making recommendations on the use of AI in the workplace, the CAI is sending a clear message to HR professionals and employers: the regulatory landscape is evolving, and organizations need to be ready:

“AI can bring significant beneficial transformations not only for employers but also for employees. If the conditions for its deployment are favorable, it can be used as a beneficial lever for work,” it says (translated). “However, it is important that significant safeguards are put in place to avoid certain major risks.”

Best practices for AI

While the recommendations aren’t binding law, they provide valuable insight into where privacy regulators may be heading and best practices that could soon become standard across Canada, says Quigley, a senior associate at Dentons in Montreal.

The guidelines are valuable for many employers struggling to translate and apply vague regulations related to AI, she says.

“They give us quite a bit of insight into the Quebec regulator’s position on what this should look like day to day for organizations.”

Julie Himo, a partner at Torys in Montreal, agrees it’s good guidance.

“It gives us a sense of where the CAI is heading in terms of regulatory positioning, and it'll help employers — who are all considering what AI tools they want to use in their day to day — to identify where the issues are, what they would be if they decided to use an AI tool and what risks they could be running by using these tools.”

Significant safeguards needed

The CAI’s brief highlights the growing use of AI and surveillance technologies in the workplace, from biometric time clocks to productivity monitoring software and algorithmic management systems. These tools can bring efficiency and innovation, but they also raise significant privacy and ethical concerns, it says.

“As the European Union notes in the preamble to its recent Artificial Intelligence Regulation, AI-based decisions can have ‘a considerable impact on people’s career prospects and livelihoods as well as on workers’ rights.’ For this reason, this regulation treats algorithmic management systems as high-risk AI systems, with specific restrictions.”

When it comes to employee privacy, the CAI says these systems can pose several issues around transparency, discrimination, proportionality, data collection and secondary uses.

“Finally, the use of AI systems could also have more collective consequences. For example, algorithmic management should not become so extensive that it alone dictates nearly all work parameters, conditions, schedules, or pay.”

The Commission says that various measures could be introduced into labour or privacy laws to better regulate AI use in the workplace and encourage responsible adoption of this technology: “These measures mainly concern transparency, sound AI governance, and the identification and prohibition of unacceptable uses.”

Disclosing use of AI at work

The CAI makes several recommendations in its submission to the Ministry of Labour.

For one, it recommends employers disclose in their privacy policy any use of AI or surveillance technologies, specifying factors such as the providers’ names, purposes pursued, data involved, anticipated impact on the rights of affected individuals, and how results will be determined and used in decision-making.

“The opacity surrounding AI systems is a major problem to be addressed,” says the commission. “To supplement the basic obligations in privacy laws, legislation should provide for proactive and reactive transparency obligations so that affected individuals can know which AI or surveillance systems are in place and be informed as soon as possible that at least partially automated decisions may be made about them.”

Quigley says that kind of transparency could be quite reassuring and useful for employees.

“The CAI is encouraging transparency so that an employee will be aware if their personal information is being used in the context of an automated decision: What does it involve? What is going on? The underlying message of these guidelines appears to be ‘transparency, transparency, transparency.’”

How is AI used for decision-making?

The CAI also urges employers to disclose “the intention to use partially or fully automated decisions in the workplace … as soon as it is confirmed.”

The commission is encouraging employers to go further than the requirements under current legislation, says Himo.

One challenge is promoting the transparency and explainability of the AI system and its use within the organization, she says.

“It can be challenging to distill, into simple, easy-to-understand language, the complicated and integrative process of collecting data, inputting it into an AI system, producing an output, and using that output in an organizational decision-making process.”

That process may be daunting because it can require quite a bit of understanding and analysis, says Quigley.

“But, in the end, it can actually be quite beneficial for an organization, because it can prompt them to challenge and question the people suggesting new tech using AI, whether internal teams or external third-party service providers. It's good for them to know how the sausage is made, what goes on behind it.”

Employee notification and involvement

The CAI also recommends “mandatory employee involvement in the algorithmic impact assessment process,” arguing that employees are well-placed to identify risks and propose mitigation measures.

Employee engagement will increase the quality and accuracy of any impact assessment, says Himo, “and also early involvement will most likely increase the transparency about how and why AI is or will be used within the organization.”

“As the CAI noted, it would be difficult for employees to understand their rights when they are insufficiently informed about how these systems will operate. So, I think it's a good suggestion, and then the devil will be in the details as to how you integrate employees and how you get their input.”

It's something that employers should probably be doing anyway, says Quigley, agreeing that “employee involvement from the outset can, in theory, help save [employers] a few headaches down the line.”

Algorithmic impact assessments: beyond privacy

The CAI also recommends that employers conduct algorithmic impact assessments (AIAs) for any AI system making partially or fully automated decisions, broadening the scope beyond traditional privacy impact assessments (PIAs).

An algorithmic impact assessment is similar to a PIA in the sense that it's used to identify, classify and mitigate risk, but it's typically broader, says Himo.

“It will consider a wider range of impacts beyond privacy-related issues, like discrimination, bias, human rights, etc. So, an algorithmic impact assessment typically will include a series of steps, which include: what the AI system is, what it does, its intended uses and potential misuses, also what data is being used, and who are the stakeholders and potential impacts on …”

There are already specific provisions around transparency in Quebec’s legislation related to fully automated decisions, says Quigley, but it appears the CAI is being more specific.

“We see it oftentimes with Privacy Impact Assessments, which is: What personal information, if any, is actually being used? Where does it come from? What are the risks? What mitigation measures are being put into place to handle those risks and so on?

“I think [this recommendation is] part of encouraging organizations to do that in an AI setting.”

High-risk uses, unacceptable purposes for AI

The CAI’s recommendations also address high-risk uses of AI, such as emotion analysis, biometric categorization, and fully automated decisions with significant effects on employees:

“To prevent the proliferation of harmful systems and associated recourse, the commission also considers it important to clearly prohibit in legislation certain uses of AI that present serious risks to employees’ fundamental rights or raise doubts about their reliability.”

These high-risk systems are based on inferences from behavioural or physical data and concern intimate privacy, says the CAI.

“Their reliability is highly questionable, as emotions are not expressed the same way across individuals, contexts, and cultures. They can thus lead to discriminatory treatment based on inaccurate inferences. Their relevance in the workplace is also highly questionable, when discussions between managers and employees seem much more appropriate.”

Himo notes that there is “increasing international consensus that certain uses of AI raise disproportionate risk,” as seen with the European Union’s AI Act.

“Those seem to be unacceptable purposes for AI that the CAI is concerned with.”

Fully automated decision-making in the employment context also has a significant impact on employees when it's used, for example, to decide whether to terminate an employee.

“In looking at what's going on on the international scene, I think they're being very cautious about these sensitive topics,” she says.

When considering what measures to take, companies can look at the types of mitigation and monitoring mechanisms set out in the EU AI Act, says Himo.

“These include adequate risk assessment, detailed documentation and activity logs, and appropriate human oversight.”

Aligning tech positivity with compliance

Ultimately, the CAI’s recommendations are not about discouraging innovation, but about aligning new technologies with legal and ethical standards, according to Quigley.

And the best way to look at this is case by case, technology by technology, she says. That means asking: “What is the extent of the data being used? How is it being used? How transparent are you being with your employees and data subjects and so on?”

The CAI is not necessarily trying to discourage organizations from using these types of tools, says Quigley, but more looking “to have an alarm go off in their head to say, ‘Let's take a look at this from the outset to make sure that we can use this in a way that is not only better for our employees and our day-to-day objectives, but does not create issues down the line.’”
