Air Canada chatbot misstep a signal for employers to take heed of upcoming AI laws

'Organizations need to be aware of what they're signing up for': lawyer on impending AI legislation and what HR should know

A recent small claims court case in which Air Canada attempted to lay blame on its own chatbot for wrong information given to a customer is an indication that employers should be paying attention to Canadian AI legislation as it makes its way through Parliament.

While the award was small, at $812 to cover the initial oversight, it's significant because it's the first case over misinformation given by a chatbot to be decided in a public forum.

Until Bill C-27 and the Artificial Intelligence and Data Act (AIDA) come into effect, litigation such as this will likely increase, said Maryam Zargar, partner at Miller Thomson in Vancouver.

“As the technology evolves, it's going to be found more and more in areas that we weren't using them before,” said Zargar. “Just as the evolution of technology and the use of gen AI expands into our world and into businesses, obviously there will be legal issues, or business issues, and human issues that we’ll need to learn and adapt to.”

AIDA legislation will affect employers using AI in hiring

As the decision demonstrated, organizations are clearly accountable for the technology they employ, including chatbots. That accountability will be even more enforceable once AIDA comes into effect as part of Bill C-27, since AIDA will set out rules for how employers are expected to use and monitor AI technology.

The legislation, which has passed second reading in the House of Commons and is now under consideration in committee, identifies certain “high-impact” uses of AI. It is projected to come into force in 2025 at the earliest.

The Act defines high-impact uses as areas where AI technology could potentially cause harm to individuals or groups or violate human rights. In particular, AIDA singles out screening systems, such as those used in hiring, that make decisions or recommendations about access to services such as credit or employment. These systems are considered high-impact because they can result in discriminatory outcomes or economic harm, especially to women and other marginalized groups.

“As more and more companies adopt this model and start to integrate it into their engagement or recruitment processes, the biggest aspects that keep coming up are privacy and bias,” said Zargar. “The trick, I think, is to figure out how best to leave what technology can do better than humans to technology, and then how to leave to humans things that are unique to humans, and that’s the balance.”

Accountability from third-party AI developers

In September 2023, Innovation, Science and Economic Development Canada published its Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. The code outlines specific measures that organizations should apply in advance of AIDA coming into force.

“Where I see organizations go wrong the most is that enthusiasm to embrace a new technology without really understanding the questions they need to ask,” said Kirsten Thompson, partner at Dentons in Toronto. “We're starting to see chatbots and automated decision-making being deployed in the HR context, onboarding and hiring, application screening, that kind of thing. You've got to be careful you’re not perpetuating bias.”

Thompson added that although AIDA hasn’t yet been passed into law, organizations should be getting serious now about who is developing, testing and training their AI.

“The law hasn't yet passed, but you could run afoul of human rights legislation if you don't get that right,” she said. “So it'll be very important for organizations that are using and buying these technologies to make inquiries about how the machine has been trained.”

Asking questions such as what data set was used, what type of model it is, how it has been adjusted for bias and what types of bias have been monitored can reveal to HR professionals whether the technology they are about to buy will inject bias into their hiring processes.
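What “monitoring for bias” can mean in practice is easiest to see with a toy example. The sketch below, in Python, compares the selection rates an AI screening tool produces for different applicant groups; the data, group names and the 0.8 review threshold are all illustrative assumptions rather than anything prescribed by AIDA or by the lawyers quoted here.

```python
# Minimal sketch with made-up data: compare the selection rates an AI
# screening tool produces for different applicant groups.
from collections import defaultdict

# Hypothetical screening outcomes: (applicant_group, passed_screen)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, passed = defaultdict(int), defaultdict(int)
for group, ok in outcomes:
    totals[group] += 1
    passed[group] += int(ok)

rates = {group: passed[group] / totals[group] for group in totals}
best = max(rates.values())

for group, rate in rates.items():
    # Flag any group whose selection rate falls well below the highest rate;
    # the 0.8 ratio is an illustrative threshold, not a legal standard.
    ratio = rate / best if best else 0.0
    verdict = "review for possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f} (ratio {ratio:.2f}) -> {verdict}")
```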

Due to the lack of AI legislation, as well as the relative newness of the technology, many companies are purchasing and using AI tools from developers that are also in new territory.

“A lot of developers are not multinational global organizations with huge compliance departments, they are often one or two folks who've come up with a neat idea and are using open-source software off of GitHub to make whatever their chatbot or AI does,” said Thompson.

“Organizations need to be aware of what they're signing up for, and there's different types of bots. Ultimately, the organization is responsible, at least to the public, for making sure it says the right things.”

Best practices for AI tech: testing and training

AIDA lays out clear principles, including oversight, monitoring, transparency and accountability, under which employers using high-impact systems will be required to mitigate the risks of bias and privacy violations in their own processes.

Organizations introducing AI tools into their systems should ensure there is someone on staff who knows how to test, use and explain the technology, Thompson said.

Also, there should be contingency plans for when the tool fails, such as what mechanisms are in place to handle complaints. Developers should also be able to provide complete data sets on testing and past troubleshooting.

“So you have somebody on staff whose job it is to evaluate these things periodically and make sure that the information that it's using is accurate and that it's using it in an appropriate way,” said Thompson.

“Chatbots need to be trained. If you're an organization that's going to use one of these chatbots, you need to take a close look at how the thing is trained. So what information is being used to train it? How frequently is it updated? Who's responsible for making sure that its outputs match its inputs, so it's doing what it's trained to do, and it doesn't go off on a frolic of its own, particularly if it's using generative AI.”
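As a rough illustration of what checking that “outputs match inputs” could look like, the sketch below scores a chatbot’s responses against a small set of answers an organization has approved; the question, the canned reply standing in for the real chatbot, and the similarity threshold are all hypothetical, not taken from any real system or from the Air Canada case.

```python
# Minimal sketch with hypothetical content: periodically score a chatbot's
# responses against approved reference answers and flag drift.
from difflib import SequenceMatcher

# Reference answers the organization has signed off on (hypothetical).
reference_answers = {
    "What is the refund policy for a cancelled flight?":
        "Refund requests must be submitted and approved before travel begins.",
}

def chatbot_answer(question: str) -> str:
    # Stand-in for the real chatbot call; returns a canned reply for the sketch.
    return "You can apply for a refund within 90 days after your travel is complete."

def audit(threshold: float = 0.7) -> None:
    for question, approved in reference_answers.items():
        actual = chatbot_answer(question)
        score = SequenceMatcher(None, approved.lower(), actual.lower()).ratio()
        verdict = "ok" if score >= threshold else "flag: drifts from approved answer"
        print(f"{question}\n  similarity={score:.2f} -> {verdict}")

audit()
```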
