Training AI on top performers creates ‘perverse incentive structure’: expert

Talent pipeline could be eroded, bad habits could be emulated, say Canadian academics


Recently, Vercel, a $9.3-billion cloud platform, announced it had trained an AI agent on its top-performing salesperson and then pared the 10-person team to just one human and a bot.

The move is emblematic of a broader trend in which organizations are leveraging artificial intelligence not just to automate routine tasks, but to capture and replicate the expertise of their most effective employees.

Terri Griffith, Simon Fraser University’s Keith Beedie chair in innovation and entrepreneurship, says the concept isn’t new; in fact, the practice of using employee data to train AI is rooted in decades-old methods.

However, today’s technology makes the process far more powerful and accessible – generative AI platforms have accelerated this trend, she says, allowing organizations to capture and replicate the decision-making processes of their talent at unprecedented speed and scale.

“They've gotten to the place where they can track people's work strategies,” Griffith says.

“They know who the best ones are, and then they can get the systems to really come at the problems in the same way that those experts did, except those AI systems are going to do it far faster.”

Shifting workforce and pipeline

This shift is having a direct impact on the traditional career ladder, Griffith says. As AI takes on more of the work traditionally handled by entry-level employees, opportunities for on-the-job learning and advancement may diminish.

“The bar for an entry-level position is moving up,” she says, adding that this change could have far-reaching consequences for how organizations develop talent – without traditional entry-level roles, companies may find it more difficult to cultivate future leaders from within.

“You're bringing somebody into the company, you're training that person to know more about the particular organization, they're gaining skills as they learn throughout the job, and then those people can move up in the organization,” Griffith says.

“But if I'm not offering people that training ground anymore, then I better have a plan.”

Charlie Hannigan, academic director of the AI for Business Program at USC Marshall School of Business, echoes these concerns, adding that mimicking top performers without forethought risks eroding an organization’s talent pipeline.

“Right now, we're fortunate in companies to have really good executives, really good upper-middle-management, really good middle-management, because all of those executives and upper managers and middle managers used to be lower-level employees who learned the systems and rose through the ranks,” he says.

“If you narrow those early parts of the pipeline so it's just a few employees moving up, your probability of running across top talent from those candidate pools is lower, just because the pools are smaller.”

Risks of modeling on top performers

While the logic of training AI on the best employees is compelling, there are technical and organizational risks.

Vercel’s approach was straightforward: engineers shadowed their best sales rep for six weeks, documented every step, and built an agent to mimic the process. The result was an AI agent that reviews inbound messages, filters spam, qualifies leads, and drafts responses, with a human manager providing oversight and feedback.
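That workflow can be pictured as a simple triage loop. The sketch below is purely illustrative — Vercel has not published its implementation, and every keyword list, threshold, and function name here is an assumption made for clarity:

```python
# Hypothetical sketch of the agent workflow described above: review inbound
# messages, filter spam, qualify leads, and draft replies for human review.
# All markers, signals, and scoring rules are illustrative assumptions;
# a production system would use trained models, not keyword matching.

SPAM_MARKERS = {"free money", "act now", "winner"}
QUALIFYING_SIGNALS = {"pricing", "enterprise", "team of", "migrate"}

def triage(message: str) -> dict:
    """Return a routing decision for one inbound message."""
    text = message.lower()
    if any(marker in text for marker in SPAM_MARKERS):
        return {"action": "discard", "reason": "spam"}
    # Count how many buying signals appear in the message.
    score = sum(signal in text for signal in QUALIFYING_SIGNALS)
    if score == 0:
        return {"action": "archive", "reason": "unqualified"}
    draft = ("Thanks for reaching out! Happy to share pricing details "
             "and set up a call this week.")
    # Qualified leads get a drafted reply, but a human manager
    # reviews and approves it before anything is sent.
    return {"action": "human_review", "score": score, "draft": draft}

inbound = [
    "You are a WINNER, claim your free money now",
    "Hi, we're a team of 40 looking at enterprise pricing",
]
decisions = [triage(m) for m in inbound]
```

The key design point, per the article, is the last branch: the agent never sends a reply on its own — a human stays in the loop for oversight and feedback.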

But as Hannigan explains, there are risks even in emulating top employees – even the best performers have bad habits, and AI systems trained on a single individual or a small group may inherit both their strengths and weaknesses.

“These tools are really good at parroting whatever you train it on. Reinforcement learning in general does this really good job of mimicking its training data and its objective functions,” he explains.

“But it might be the case that your top performer, even though, in aggregate, they perform better than many other people, might have areas of weakness that other people are better than them at, but if you have this unitary focus towards the overall aggregate metric of being the top performer, you might miss the fact that they're bad at this one tertiary job that one of the lower performers actually out-competed them in.”

This risk is amplified by the scale and speed at which AI operates, Hannigan adds – if an AI system replicates a flaw in its training data, that flaw can quickly become widespread, affecting thousands of interactions before it’s detected.

Morale, trust, and incentive structures

Beyond technical risks, there are implications for employee morale and trust. Hannigan explains that when employees see their own expertise being used to train AI systems that could potentially replace them, trust and motivation can suffer.

“The idea that doing your work well might be exactly the thing that trains your replacement creates a really perverse incentive structure,” he says.

“Where maybe you don't want to perform as well as you possibly could, because that would make a tool that is better than you eventually. So you're having to play this ‘Nash equilibrium game’ against a tool that's trying to replace you … that's a poor incentive structure. You want to create incentive structures that encourage employees to do as best as they can and flourish and do good work.”

This tension can erode trust between employees and management, especially if the process is not handled transparently. Griffith raises similar concerns about morale, adding that the practice leaves top performers’ standing in the organization uncertain, which makes openness about intentions essential.

“You would have used me as kind of training the trainers in the past, and I would have been an esteemed expert member of the organization, versus something that can now just be replaced,” she says.

Staying competitive in an AI-driven landscape

Griffith points out another consideration for employers: the speed at which workplace AI is developing is creating an arms race among organizations, meaning those that don’t innovate quickly will fall behind.

She emphasizes that simply automating existing processes or reducing headcount is not enough to ensure long-term success. Organizations should also focus on how to leverage their newly freed-up resources for strategic advantage.

“If the competitors are all doing this, too, you better be figuring out what you're going to do, besides what you're already doing, to stay away from the competition,” Griffith says.

“People talk about AI as ‘Oh, now we get to do the most creative parts. Now we get to be more innovative.’ Okay, you better start doing that, because there's five companies that all do the same thing a particular company does, and if now they're all leveraging their top folks, they better be coming up with what they're going to be doing with all that additional … intellectual capability.”
