'This is a good sign for Canadian employers': academic explains that with fewer hard rules, HR leaders will enjoy more freedom but must self-regulate responsibly

In his first public remarks as Minister of Artificial Intelligence and Digital Innovation, Evan Solomon signalled a pivot in the federal government’s approach to AI: a move away from heavy-handed regulation, toward fostering growth and adoption.
Speaking at the Canada 2020 conference in Ottawa, Solomon said his focus is on maximizing economic gains while still addressing core concerns like privacy and data protection.
“My fear is that there are other states that will leapfrog ahead of us on a competitive advantage,” he said, as reported by BetaKit on June 11. He added: “What we won’t do is go into a little black box, do a consultation with 5,000 people, study best practices around the world, come up with the Canadian regulatory solution, and go it alone.”
This approach marks a contrast with the former government’s emphasis on AI guardrails. Under Justin Trudeau’s leadership, Canada signed the first legally binding international AI treaty and proposed Bill C-27, aimed at regulating “high-impact AI systems”.
Solomon suggested that while Bill C-27 is “not gone,” it will be re-evaluated to fit a new, more flexible framework.
What HR professionals should expect
The implications, while significant, are potentially good news for Canadian employers and HR leaders.
As Kirsten Thompson, partner and national practice group lead of privacy and cybersecurity at Dentons, explains, Solomon’s message likely signals a regulatory separation of human-operated AI and AI that acts on its own.
“The likely outcome of this is there will be a distinction between AI that facilitates human-involved interactions versus AI-mediated decisions, automated decision-making,” she says.
This distinction matters for HR use cases such as AI-assisted resume screening: an initial screening performed independently by AI might be regulated, while the final hiring decision, still in human hands, would not be.
“The recruiter takes those [first tranche of resumes], but there's nothing that prevents the recruiter from then looking at the second tranche to just see if something's been missed, or if there's someone who has interesting skills or talents that might otherwise be suitable for the role,” Thompson explains.
“The AI is not making a decision. The decision is still in the human’s hands.”
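To make that division of labour concrete, here is a minimal Python sketch of human-in-the-loop resume screening. It is an illustration only: the scoring function, threshold, and helper names are hypothetical stand-ins, not any particular vendor’s API or a legally vetted workflow.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    resume_text: str

def ai_screen(candidates, score_fn, threshold=0.7):
    """Split candidates into two tranches by model score.

    The AI only ranks here -- it rejects no one outright, so it
    facilitates a human-involved process rather than deciding.
    """
    first, second = [], []
    for c in candidates:
        (first if score_fn(c.resume_text) >= threshold else second).append(c)
    return first, second

def recruiter_shortlist(first, second, human_decide):
    """The recruiter reviews both tranches, including the second one,
    so overlooked candidates can be rescued and the final decision
    stays in human hands."""
    return [c for c in first + second if human_decide(c)]
```

In this shape, the model narrows the field but never forecloses it; every candidate remains visible to the recruiter, who makes the actual call.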
Flexibility favours smaller organizations
The government’s expected move toward principles-based legislation will also have resource implications – as Thompson explains, the more prescriptive Bill C-27 would be challenging for smaller businesses to comply with, while Solomon’s proposed strategy would leave room for flexible decision-making.
“It's mostly a resourcing issue. Smaller organizations typically don't have the resources for compliance or legal or regulatory,” she says.
“Whereas if you have legislation that says you have to take reasonable and appropriate steps that are commensurate with the risk the technology takes … that organization can say, ‘Okay, well, we're doing really low-risk stuff here, so maybe we have to have a written policy.’ Or ‘Our stuff is really high risk, so we really need to have independent third-party oversight.’”
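As an illustration of what “commensurate with the risk” could look like in practice, the sketch below maps a self-assessed risk tier to a set of governance controls. The tiers and obligations are assumptions made for the example; they are not drawn from Bill C-27 or from anything Solomon has proposed.

```python
# Hypothetical mapping from self-assessed risk tier to controls.
GOVERNANCE_BY_RISK = {
    "low": ["written AI-use policy"],
    "medium": ["written AI-use policy",
               "internal review committee",
               "impact assessment"],
    "high": ["written AI-use policy",
             "internal review committee",
             "impact assessment",
             "independent third-party oversight"],
}

def required_controls(risk_tier: str) -> list[str]:
    """Return governance controls commensurate with the assessed risk."""
    try:
        return GOVERNANCE_BY_RISK[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")
```

The point of the structure is proportionality: a low-risk deployment might satisfy its obligations with a written policy, while a high-risk one escalates to independent oversight.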
Solomon’s stated approach appears to favour this flexibility. CTV reported that, speaking at the same conference, the minister compared AI innovation to a “bucking bronco”, adding: “But it is to make sure that the horse doesn’t kick people in the face. And we need to protect people’s data and their privacy.”
According to Mayur Joshi, assistant professor of information systems at the University of Ottawa’s Telfer School of Management, Solomon’s strategy appears to support innovation rather than issue strict prescriptions, relying instead on employers to assess their own risk profiles.
“In my opinion, this is a good sign for Canadian employers, because I'm seeing a subtle shift from a risk-based approach to a more impact-based approach,” says Joshi.
“Thinking about where AI is going to impact more, rather than what are the risks … this announcement just makes the life of the employers easier, in a sense that they are not required to follow very stringent point-by-point guidelines.”
AI governance without checkboxes
That doesn’t mean employers are off the hook. On the contrary, both Thompson and Joshi agree that organizations will need to take ownership of internal AI governance—especially in the absence of concrete rules.
“Assuming we're going with some sort of principles-based or less prescriptive requirements, employers will need to take an approach that is a substantive approach,” says Thompson.
“It's not enough to just say, ‘We think it's reasonable.’ You have to be able to demonstrate that it's reasonable, which typically means you have to have somebody, or a committee of somebodies, to assess why it's reasonable in your particular circumstances.”
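One lightweight way to demonstrate reasonableness, rather than merely assert it, is to keep a written assessment record. The structure below is a hypothetical sketch of what such a record might capture; the field names are assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReasonablenessAssessment:
    ai_use_case: str         # e.g. "resume pre-screening"
    risk_tier: str           # self-assessed: low / medium / high
    rationale: str           # why the chosen controls fit these circumstances
    assessed_by: list[str]   # the "committee of somebodies"
    assessed_on: date = field(default_factory=date.today)
```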
Joshi echoes this sentiment, also advocating for internal oversight: “A person who takes the responsibility of building responsible AI models at the company level,” he says.
“You can't comply with something that is too open-ended. What you have to do is own up to the responsibility on your own, which is a more autonomous but responsible approach to AI, or self-regulatory approach to AI.
“They need basically mini-regulators inside the organizations.”
This type of responsibility extends beyond compliance teams into company governance, Thompson adds: “Stakeholders from HR, from legal, from compliance, from employees, should decide on what their AI governance model will be.”
International alignment and legal risk
Another reason employers should self-regulate is legal uncertainty in a fragmented global environment.
“There isn't a global norm at this point,” notes Thompson, adding that Solomon’s strategy appears to be aimed “not to create a patchwork of laws, because that increases the regulatory burden on companies.”
Until federal or provincial regulations become clearer, the safest bet for HR professionals is to align with established international standards, she advises – and the time to begin, if they haven’t already, is now.
“Organizations that are using AI or considering using AI should be doing their own internal governance programs regardless,” says Thompson.
“There are international standards they can look to for that.”
Recognized reference points include ISO/IEC 42001, the management-system standard for AI, and the NIST AI Risk Management Framework.
Failing to do so could carry significant liability. “If an employer is negligent, then they will face a lawsuit based on negligence, and it'll be a class action, and that tends to be very expensive,” she warns.