‘Trial and error’: Over half of AI redundancies a mistake, say leaders

Canadian AI experts explain why ‘everyone is an AI trainer’ and how to avoid AI-redundancy regret, including framing reskilling as a growth pathway rather than a cost-cutting measure

Canadian organizations are racing to integrate AI, but experts are warning employers to pause: according to a new report from Orgvue, over half of business leaders who have made AI-related redundancies admit they got it wrong. 

Of the 39% of business leaders who said they’ve made labour cuts because of AI, 55% said it was the wrong decision, according to the survey of over 1,000 senior business leaders and C-suite executives in several countries, including Canada. 

Fear also dominates leaders’ current sentiment toward AI: although 80% plan to reskill staff for AI collaboration, 47% fear uncontrolled AI use, and 34% report AI-related attrition at their organizations. 

According to Ozgur Turetken, professor and associate dean at Toronto Metropolitan University’s Ted Rogers School of Information Technology Management, the reported leadership missteps are a predictable outcome of a situation that is spinning out of control. 

“The technology is changing so fast that by the time you figure out the capabilities of the technology, that was technology of yesterday,” he says.  

“So, what is it able to do today?” 

Task-level workforce redesign required 

The Orgvue survey reveals that 25% of business leaders don’t know which job roles benefit most from AI, and 30% can’t predict which are at risk of redundancy. Turetken explains that this lack of knowledge makes it difficult to strategize effectively. 

He urges a granular approach to AI implementation at the task level: mapping each job’s associated tasks and matching them to AI’s strengths, thereby avoiding blunt staff cuts – and the resulting regret when the wrong people are let go.  

When done right, Turetken says, this process can channel human and AI skills to where they add the most value, increasing efficiency overall. 

“I argue that [AI integration] is not even for a profession or a job. It has to be at the task level,” he says.  

“There are certain tasks that AI will do better – these kinds of tasks could be across the board in multiple industries, multiple professions and different kinds of work. And then, if you understand that, then we can reconfigure our workforce so the people actually do the kinds of things where they can make the most impact.” 

What Turetken – who has been studying AI and human interaction for over 25 years, with a focus on managerial decision-making – is suggesting is a wholesale rethink of how jobs and tasks are structured. By breaking roles into task inventories, organizations can create hybrid positions that blend AI-handled functions – such as data pattern recognition – with human strengths like critical thinking and contextual awareness.  

“We need to look at what does work entail, and how we can break it down, and how we can analyze whose capabilities are a best fit for what,” he says.  
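
To make the task-level idea concrete, here is a minimal sketch of what such a task inventory might look like in code. The role, tasks, suitability scores and threshold below are illustrative assumptions, not data from the report:

    # Illustrative task-level workforce mapping. Roles, tasks and
    # suitability scores are hypothetical examples, not survey data.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        ai_suitability: float  # 0.0 = human-only, 1.0 = fully automatable
        hours_per_week: float

    role_inventory = {
        "financial analyst": [
            Task("data pattern recognition", 0.8, 10),
            Task("report drafting", 0.6, 8),
            Task("client relationship management", 0.1, 12),
            Task("contextual judgment on edge cases", 0.2, 10),
        ],
    }

    AI_THRESHOLD = 0.7  # illustrative cut-off for AI-assisted tasks

    for role, tasks in role_inventory.items():
        ai_hours = sum(t.hours_per_week for t in tasks if t.ai_suitability >= AI_THRESHOLD)
        total_hours = sum(t.hours_per_week for t in tasks)
        human_tasks = [t.name for t in tasks if t.ai_suitability < AI_THRESHOLD]
        print(f"{role}: {ai_hours / total_hours:.0%} of hours are candidates for AI assistance")
        print(f"  redesign the role around: {human_tasks}")

The point of the exercise is that even a role containing automatable tasks is rarely redundant wholesale; the output suggests how to reconfigure the role, not whether to cut it.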

Understanding AI’s hidden workforce – before layoffs 

A common mistake among business leaders working on AI integration is failing to understand the vital role humans play in training AI on the job, says Nicholas Vincent, assistant professor of computing science and human-computer interaction at Simon Fraser University. 

Treating AI as a plug-and-play replacement of human labour risks discarding the very labour that sustains model performance, he explains. 

“Modern AI is an extremely data-dependent technology, and it's very dependent, in particular, on a lot of invisible human labour,” Vincent says, stressing that AI systems rely on vast quantities of human-generated data – generated by users such as employees – to function effectively.  

“Modern AI systems like ChatGPT have these interesting dependencies on large online data, so things like Wikipedia and Reddit and online platforms, but they also have some dependencies on what people are now calling ‘post-training data’ or ‘feedback data’.” 

Orgvue reports that 51% of firms are introducing AI use policies, but without an understanding of the role human-generated data plays in continually training the models, those policies will fall flat. Clear communication invites employees into the AI lifecycle rather than sidelining them, Vincent says. 

“It's really important to measure the value and the dependencies and the impact of data, and communicate that to the public, so that workers and users understand that, in some sense, everybody is an AI trainer.” 
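
A minimal sketch of what “everybody is an AI trainer” can mean in practice: each time an employee accepts or rejects an AI-generated answer, that signal can be logged as the kind of feedback data Vincent describes. The schema and function below are hypothetical:

    # Illustrative sketch: routine employee feedback on AI output becomes
    # "post-training" (feedback) data. The schema here is hypothetical.
    import json
    import time

    def record_feedback(prompt: str, response: str, accepted: bool,
                        log_path: str = "feedback.jsonl") -> None:
        """Append one feedback event; accepted/rejected responses to the
        same prompt can later serve as preference data for fine-tuning."""
        event = {
            "timestamp": time.time(),
            "prompt": prompt,
            "response": response,
            "label": "accepted" if accepted else "rejected",
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(event) + "\n")

    # Every click like this one is, in effect, an act of AI training:
    record_feedback("Summarize Q3 churn drivers", "Churn rose because ...", accepted=False)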

Learning from AI redundancy and automation regret 

The Orgvue survey highlights another emerging contradiction: despite growing confidence – 72% of leaders expect AI to drive workforce transformation over the next three years – leaders’ sense of responsibility for protecting workers has fallen from 70% in 2024 to 62% in 2025.  

Vincent warns that this lack of responsibility can be a fatal flaw for organizations. 

“There's a lot of open questions in the area of AI literacy, and how to communicate the actual capabilities and the limitations to workers, but also to decision-makers and leaders and businesses,” he says. 

“There’s this whole ethical and moral quandary here … this also explains some of this automation regret.” 

Another problem is simple fear: Orgvue notes that 47% of leaders fear employees using AI without proper controls. To address this, Turetken advises oversight frameworks – approval gates, random audits and “hallucination traps” – to catch errors before they lead to data breaches or other costly mistakes. 

“People should still be in control of what they're doing. But AI can do way more now than even a short three, four years ago,” he says.  

“But obviously there are some shortfalls of AI, especially when it comes to … hallucinations.” 
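
One way to picture the “hallucination trap” Turetken mentions: seed the audit queue with questions whose correct answers are already known, and flag any AI output that contradicts them. The trap set, sampling rate and matching rule in this sketch are illustrative assumptions:

    # Illustrative "hallucination trap" combined with random audits:
    # known-answer questions are spot-checked against the AI's output.
    import random

    TRAP_SET = {
        "What year was the company founded?": "1998",
        "Which law governs our customer data retention?": "PIPEDA",
    }

    def audit_sample(ai_answer_fn, sample_rate: float = 0.1) -> list[str]:
        """Randomly run trap questions through the AI; return failures."""
        failures = []
        for question, truth in TRAP_SET.items():
            if random.random() > sample_rate:
                continue  # random audit: only a fraction is checked each run
            answer = ai_answer_fn(question)
            if truth.lower() not in answer.lower():
                failures.append(f"trap failed: {question!r} -> {answer!r}")
        return failures

    # Any failure routes the AI's recent output to a human approval gate
    # instead of letting it proceed unchecked.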

Governance, transparency and evaluation  

Vincent warns that without rigorous evaluations – or “evals” – organizations risk deploying models that excel on public benchmarks but misfire on proprietary tasks. 

“Evals are hard. It's really laborious to create these,” he says, adding that even when AI models perform well in developers’ tests, they can still fail in the hands of everyday users. 

“They call this ‘overfitting,’” Vincent says. 

“We're seeing a moment right now where there's a lot of marketing and hype, and it looks really, really good in these controlled environments, but then you roll it out in the real world, and there's all these funky, wacky things happening … the test didn't prepare you for it. Didn't prepare the AI for it.” 

HR should insist that IT and data science teams develop domain-specific evaluation suites before scaling any AI solution. An AI ethics committee – including HR, legal, IT and frontline staff – can review training data protocols, evaluation practices and bias-mitigation strategies to ensure transparency. 
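
A domain-specific evaluation suite can start small: a handful of real cases drawn from the organization’s own work, scored automatically before any rollout. The cases and pass criterion in this sketch are illustrative assumptions, not a standard:

    # Illustrative domain-specific eval suite: score a model on cases
    # drawn from the organization's own work. Cases are hypothetical.
    EVAL_CASES = [
        {"input": "Draft a termination letter that cites the required notice period",
         "must_include": ["notice period", "severance"]},
        {"input": "Summarize this grievance file for the union representative",
         "must_include": ["grievance", "next steps"]},
    ]

    def run_evals(model_fn) -> float:
        """Return the pass rate of model_fn (prompt -> text) on the cases."""
        passed = 0
        for case in EVAL_CASES:
            output = model_fn(case["input"]).lower()
            if all(term in output for term in case["must_include"]):
                passed += 1
        return passed / len(EVAL_CASES)

A model that tops public benchmarks can still score poorly on a suite like this; that gap is the overfitting Vincent describes.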

Turetken also cautions management against assuming that automation will necessarily reduce workloads; in fact, the opposite can be true, as replacing workers with AI fundamentally changes the dynamics of performance management. 

“A good manager typically hires good people, and good people have their experience and their background and education. I know that person A is very well-educated, and he or she has good experience, so whatever advice I get from them, I just take it for granted,” he explains.  

“Can you do that anymore, if one or some of your workforce is actually machines? … What should you be looking out for to make sure that something serious doesn't get broken or miscommunicated or done flat-out wrong? What kind of skills would that require? I argue that it's more skills, not less, for people.” 
