How’s your ‘AI at work’ plan going? Not so well? You’re not alone

‘Massive disappointment’: Latest surveys reveal grim realities of new tech as frustrated employees walk away from workplace AI tools despite leaders’ optimism

Photo: left, André Côté; right, Paul Varella Connors

While business leaders continue investing millions in digital transformation and AI enterprise products, employees are growing increasingly disillusioned, burned out and frustrated with short-sighted plans and tools that aren’t delivering the efficiencies promised. 

That’s according to a recent survey from Korn Ferry, which shows that nine out of 10 AI users have abandoned the tools and returned to non-AI methods "at least once" out of frustration.

Similarly, WalkMe’s “State of Digital Adoption” survey reveals a growing “expectation gap”: 88 percent of executives say their employees have adequate AI tools, but only 21 percent of workers agree.

There is also evidence that some employees are engaging in “sabotage” of their company’s AI platforms; for example, by deliberately entering sensitive information into public tools. 

According to André Côté, executive director of policy and research and head of the Secure and Responsible Tech Policy program at the Dais at Toronto Metropolitan University, not only are these results unsettling, they’re also unsurprising, considering the breakneck pace of adoption. 

“You see a lot of businesses without any real sort of concrete AI strategy,” he says. 

“[They are] scrambling to purchase access to these tools at very high prices, enterprise packages, deploying across large workforces at extremely high price points and then not doing very much to equip workers to actually understand how to use them, how to integrate them into their business processes.” 

Dangers of top-down AI plans 

The newest reports reveal serious mismatches between perception, expectations and reality. C-suite leaders continue to promote and push the AI agenda within their organizations, including through layoffs and by promoting AI “super-users” over “laggards.” Yet AI platform Writer reports that almost half (48 percent) of the executives it surveyed said AI adoption at their companies has been “a massive disappointment.”

Over half (54 percent) of the same executives said that “adopting AI is tearing their company apart,” causing internal power struggles and disruption. Writer notes that this is a significant jump from last year’s 42 percent.

For Paul Varella Connors, organizational studies professor at Mount Royal University, this is playing out against a broader erosion of institutional trust, with the C-suite using entirely the wrong playbook; he likens the current approach to the CEO of Ford attempting to build a car on the shop floor. 

“They're making a corporate decision to go big on a technology that's meant to work at the shop level — that the shop level is not ready to use,” he says. 

“It never works top-down. There should be emerging strategies … that come from the guts of the organization.” 

Misperceptions about workplace AI tools 

A big part of the breakdown is that many leaders don’t actually understand what they are buying – or what they are asking employees to use.  

The Korn Ferry report notes that typical users expect a satisfactory answer within two prompts and often assume the first output “should” be great, even though the average user earns only a C‑grade (57/100) on prompt quality.  

Varella Connors says that expectation of instant, almost magical performance demonstrates a serious misunderstanding of what generative AI is actually capable of.

“[It’s] kind of a spreadsheet on steroids. It doesn't do beyond what’s there,” he says. 

“There's this huge disconnect [between] what the technology is meant to be and what actually the people who are implementing it are doing.”  

When leaders frame AI as a “thinking entity” rather than a statistical tool, they set the stage for disappointment on both sides. Not helping the situation, Varella Connors adds, is that many of the tools are “attuned to talk to us,” giving the impression that they are thinking about problems when in fact they are not.

The “authoritative” tone many LLMs convey only deepens the confusion. For Côté, this tool mythology is showing up in how organizations structure their AI programs, and it could prove to be a fatal flaw. He urges leaders to flip their approach.

“We're in this phase now where it's like we're so eager to adopt the technologies that we're trying to reverse engineer it for business problems, and that's the completely wrong way to do this,” Côté says. 

“I think the right lens is these are a new set of compelling tools for the toolkit. Think about the business challenges that you have, and think about where these tools might be effective in supporting you, and where they might not be. Start problem-first, as opposed to tool-first.”

Why employees are pushing back or walking away 

Côté identifies three current classes of employees, including:

  • a middle class who see the benefits but don’t have the time to fully engage with them and do so when they can 

  • a third group saying, “Wow, pretty transparently, my employer is deploying these tools with a medium- to longer-term objective of basically automating away a lot of either the tasks or the jobs that I might be related to.” 

A survey of knowledge workers conducted by Workplace Intelligence for Writer found that a high percentage of leaders (92 percent) are actively changing roles and structures around AI and cultivating “AI elites”; they also acknowledge putting more pressure on their employees.

Unsurprisingly, this pressure is being felt by staff, who report they notice a two-tier hierarchy emerging at their workplaces. Varella Connors says these conditions are the direct cause of employee backlash, not the technology itself. 

“They’re all saying they want to cultivate a class of ‘AI elite’?” he says. “Oh, man, it cannot be more destructive of internal culture than that.” 

Varella Connors also highlights the value of intellectual capital – the knowledge and judgment embedded in people and culture – which is at risk when AI is framed mainly as a cost‑cutting tool. 

“It seems to me that the problem is more associated with a culture issue than a technology adoption issue,” he notes. 

“That trust and culture of the organization are really the ‘noise’ in the communication process, between top management and people working for them.” 

Workplace AI planning: what to do differently 

Both experts stress that employers need a clearer, more transparent approach if they want workers to engage with AI rather than resist it. Côté recommends three starting points. The first two: making clear, public plans and communicating transparently with employees, and equipping workers with AI tools and governance frameworks “to encourage responsible experimentation and to track outcomes, track successes, failures.”

The third key element for Côté lies in prioritizing deployment strategically. 

“Start with areas that are high volume, low risk, repetitive tasks, where the impact on workers is more to augment their roles, to help them, than to replace them,” he says. 

“Because if what you're trying to do is help to bring your workforce along around the deployment of these tools, probably the worst thing you can do is start by signaling to them ‘The core objective here is efficiency and essentially replacing you or parts of you, and the core places we're going to start deploying them are the places you are most vulnerable to being replaced.’” 
