‘People who can tell the difference between “nearly right” and “right” are more valuable than ever’: experts explain the realities of how generative AI is hitting employers
When Robert Half asked HR leaders about AI-generated applications, a clear majority said AI-written résumés are making hiring more difficult, not easier – underscoring how fast AI is changing what expertise looks like on paper and how employers need to assess it.
The survey results land in a workplace where generative AI (genAI) has moved well beyond the pilot and experimental phase. Kate Cassidy, an assistant professor at Brock University who studies genAI and workplace collaboration, says the pace of AI adoption is unprecedented, with almost half of workers now reporting regular use and writing as the main application.
Among students graduating into the labour market, she notes that awareness and use are “almost at 90 percent.”
AI résumés are a warning sign on expertise
Recently, economist John A. List made comments on X about patterns he’s observed around how professionals are using genAI. Writing about his recent engagements with “non-profits, for profits, and government agencies,” he described professionals presenting AI-generated material that seems airtight until it “crumbles” under pressure.
“I've watched smart people confidently present AI-generated material they clearly don't fully understand,” List wrote.
“The words sound right. But when someone pushes back just a little bit, the sand castle crumbles.”
In his view, AI produces answers that are often “very wrong” or “nearly right,” and the crucial differentiator is the human ability to spot the problems and stand behind the work – the same critical thinking skills, he says, that are needed to create the work in the first place.
When AI first arrived on the scene, I worried it would make economists, or even critical thinkers more broadly, less valuable. In my travels in the past 6 months to work with non-profits, for profits, and government agencies, I have observed how people are actually using AI. I…
— John A. List (@Econ_4_Everyone) March 4, 2026
It is this fact that gives List hope for the future value of human skills, which he says “hasn't diminished with AI. If anything, it's increased. The people who can tell the difference between ‘nearly right’ and ‘right’ are more valuable than ever. The people who can explain the subtle details about something that is exactly right are invaluable.”
Cassidy’s research also highlights how generative AI can quietly interfere with how people build expertise over time; according to her, genAI will “always have errors in it,” and those errors aren’t limited to factual ones.
“It can have situational, cultural, historical errors,” she explains.
“I might know a bunch about a topic, but if I'm not taking what generative AI is producing and tweaking it, changing it, pulling it apart, and making sure that it's appropriate for my use, then that's problematic.”
AI changing the pace of progression
For employers, this raises questions about whether staff are still practising the underlying judgment and contextual thinking that complex work requires. In controlled studies, Cassidy has found that the people who benefit most from generative AI are those who already have at least “emerging expertise” – junior employees who are starting to take on more complex work – while senior employees generally have the nuance to adapt what the tools produce.
Less experienced workers can use AI to “produce things that are more advanced, maybe, than their actual skill level,” she says, but left unchecked this can create big gaps in knowledge as they progress up the ranks.
Matissa Hollister, assistant professor of organizational behaviour at McGill University, links these individual patterns to broader changes in how jobs are structured. She notes that the pace of organizational change is now “so mind-blowingly fast that it’s really hard to anticipate what’s next,” and that because of this rate of change, flexibility will be a core future skill.
“Even in a given adoption, the implications are much wider-ranging and touch many more jobs than that initial job,” Hollister says.
“Those changes have implications for all kinds of other jobs around them, so those are bigger disruptions … you're going to have to constantly rearrange these job structures.”
AI and job redesign
Hollister explains that many AI tools are now handling exactly the basic entry-level tasks through which employees previously progressed to more complex work and expertise. The “big question” for employers, she says, is how employees can gain experience and organizational knowledge without performing those early-level tasks.
She says job redesign that centres human knowledge and expertise is going to be essential for organizations going forward, as well as maintaining a pipeline of human expertise.
“Whoever has that human expertise now is not going to be there forever,” Hollister says.
“How can you actually design the workflow and the way that the AI system is set up to actually have the human actively engage and even learning and maintaining that knowledge?”
Why loss of expertise is a business risk
For Hollister, the clearest risk from hollowed-out expertise comes from AI’s inability to deal with “novel” ideas or circumstances: whenever conditions change in a way the AI does not anticipate, there will be risk.
She explains that machine learning systems, including LLMs, are trained on historical data; as an example, she points to warnings during the COVID-19 pandemic that employers should “expect many of your AI tools to stop working in the pandemic, because they’ve been trained on data understanding how the world works, and suddenly the world has started working differently,” from “supply chains” to demand shocks.
“Humans also don't work well under uncertainty, humans didn't know what to do during a pandemic, either, but we're a little bit better at making educated guesses, at thinking through strategies for dealing with novel context by finding more distantly-related past experiences,” Hollister says.
“That's a place where the systems could fall apart, and then if you don't have backup, if you've now lost all the human expertise with that, then we're really going to be lost.”
Practical steps for HR and employers
Cassidy and Hollister both say employers can act now to protect and grow expertise, even as AI tools become more ubiquitous. On the hiring side, Cassidy advocates putting more weight on how candidates think and less on the polish of what they submit, by asking substantive questions.
“Interviews where you’re really finding out how people are processing their thinking,” Cassidy says, emphasizing “reflective ability” as a core skill employers should be looking for. She defines reflective ability as “How does our culture influence what we're doing to get to our outcome? How does my personal world view impact my choice in doing things? Reflection on what you did and why you did it.”
Giving employees the impression that speed is valued more than quality is another risk, she adds, one that will see new employees “blindly accept what comes out of generative AI and just hand that in.”