Can AI agents be effective mentors? Experts urge caution

‘If we can't hire everyone a mentor or coach, can AI help take on some of that?’: experts explain the risks and benefits of leadership training with AI agents

Left: Serena Huang; right: Mohammad Keyhani

AI agents – advanced AI models that can act autonomously with limited human instruction – are emerging as the potential next wave of workplace training tools, teammates and cost-cutting strategies.

But should AI agents be used to train an organization’s future leaders on management, mentorship and coaching skills?

As generative artificial intelligence (GenAI) becomes more sophisticated and ubiquitous and the skills of entry-level employees evolve concurrently, Canadian employers are exploring how AI can be used for training, including integrating AI into leadership development programs. The strategy has obvious potential for increased efficiency and budget cuts for HR leaders, but it also carries risk. 

To break this down, Canadian HR Reporter spoke with two experts working at the forefront of AI workplace integration: Mohammad Keyhani, associate professor of GenAI at the University of Calgary’s Haskayne School of Business, and data scientist and author Serena Huang, founder of Data with Serena. 

AI agents as trainers: the potential and the limits 

A recent KPMG survey found that while 73 per cent of Canadian students use GenAI for schoolwork, nearly half say their critical thinking skills have deteriorated since they started using AI, and 65 per cent report that their peers rely on AI to avoid critical thinking altogether. 

With new grads arriving in entry-level roles with weaker social and communication skills and less experience due to increased reliance on AI, the pressure is on to upskill these recruits faster than ever.

Managers are busier than ever, Huang says, and learning and development teams are stretched thin.  

“AI agents are all the rage,” she says.   

“After ChatGPT made us realize the capabilities of AI, a lot of organizations are starting to think, ‘How do we develop the next generation of talent? How do we do coaching? And is this something AI can take on?’”

AI agents can be useful in providing early-career employees with crucial communication skills that are usually only learned in person, she says, pointing to conflict resolution as an area where AI agents could be effective.

“Let's say we use it to simulate a situation [such as] you are in a heated conversation with a customer who wants to return something that is outside of policy, and you need to de-escalate a situation,” she says, noting that AI agents can provide employees a safe, controlled environment to practice difficult conversations or decision-making.  

“An AI agent could potentially simulate a lot of those scenarios for you, without you having to actually go through a human interaction, and then learn from that and then give you coaching on ‘Hey, that wasn't very empathetic. Let's try that again.’” 

These simulations could be repeated as often as needed, she adds, allowing for iterative learning and immediate feedback. However, she stresses that the effectiveness of such training depends on the quality of the scenarios and the relevance of the feedback provided by the AI. 
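The feedback loop Huang describes – repeat a scenario, respond, get immediate coaching – can be sketched in a few lines of code. This is a minimal illustration only, not how any vendor's product works: the scenario text comes from Huang's example above, while `coach_reply` and its keyword heuristic are hypothetical stand-ins for the judgment a real LLM-based agent would supply.

```python
# Minimal sketch of a scenario-based coaching loop.
# In a real deployment, coach_reply() would call an LLM; here a
# hypothetical keyword heuristic stands in for that judgment.

SCENARIO = ("A customer wants to return an item that is outside "
            "of policy, and you need to de-escalate the situation.")

# Hypothetical cues the "coach" listens for in a practice reply.
EMPATHY_CUES = {"understand", "sorry", "appreciate", "hear you"}

def coach_reply(employee_reply: str) -> str:
    """Return coaching feedback on a single practice reply."""
    text = employee_reply.lower()
    if any(cue in text for cue in EMPATHY_CUES):
        return "Good: you acknowledged the customer's frustration."
    return "That wasn't very empathetic. Let's try that again."

def practice(replies: list[str]) -> list[str]:
    """Run the same scenario repeatedly, giving immediate feedback."""
    print(f"Scenario: {SCENARIO}")
    return [coach_reply(r) for r in replies]

feedback = practice([
    "Rules are rules, there's nothing I can do.",
    "I understand this is frustrating; let me see what options we have.",
])
```

The point of the sketch is the structure, not the heuristic: because the scenario and feedback are generated rather than scheduled with a human role-player, the employee can rerun the exercise as many times as needed – the iterative loop Huang highlights.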

The appeal of AI agents as training tools  

Keyhani explains that an AI agent is “basically a large language model that's given a broad goal, and within that broad goal allowed to make its own decisions and use tools. It can go off on its own and do things.” 

Unlike traditional chatbots, which require constant prompting and can only perform limited functions, agentic AI systems can pursue complex objectives, make independent decisions, and use a variety of digital tools to achieve their goals.  

This increased autonomy allows AI agents to simulate real-world tasks and challenges that managers might face, making them potentially valuable for leadership training.  
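Keyhani's definition – a model given a broad goal that then chooses its own actions and tools – can be illustrated with a stripped-down agent loop. This is a hedged sketch under stated assumptions: `choose_action` is a stub standing in for a real LLM call, and the two tools are hypothetical names invented for the example.

```python
# Stripped-down illustration of the agent pattern Keyhani describes:
# a model is given a broad goal, then repeatedly chooses which tool
# to use until it decides it is done. choose_action() is a stub
# standing in for a real LLM's decision-making.

def search_policy(query: str) -> str:  # hypothetical tool
    return f"policy text matching '{query}'"

def draft_email(topic: str) -> str:  # hypothetical tool
    return f"draft email about {topic}"

TOOLS = {"search_policy": search_policy, "draft_email": draft_email}

def choose_action(goal: str, history: list[str]) -> tuple[str, str]:
    """Stub for the LLM's choice: pick a tool and argument, or finish."""
    if not history:
        return ("search_policy", goal)
    if len(history) == 1:
        return ("draft_email", goal)
    return ("done", "")

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Agent loop: decide, act with a tool, record the result, repeat."""
    history: list[str] = []
    for _ in range(max_steps):
        tool, arg = choose_action(goal, history)
        if tool == "done":
            break
        history.append(TOOLS[tool](arg))
    return history

results = run_agent("return policy for damaged goods")
```

The contrast with a traditional chatbot sits in `run_agent`: nothing outside the loop tells the system which tool to invoke at each step, only the broad goal and a cap on how many steps it may take.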

However, he emphasizes the importance of the human aspect of leadership training and mentoring, and the risks of giving those responsibilities to AI. 

“These are things that are very deeply human, and I think organizations can easily make the mistake of trying to automate things that people really expect a very strong human element in,” says Keyhani. 

“I think that's a mistake we need to avoid. There are probably parts of it that you could automate and give to AI, and maybe create simulations. But ultimately, for something as deep and human and emotional and complex as leadership, you want humans involved.” 

The importance of thoughtful design and human oversight 

Both experts stress that AI agents are most effective when used for targeted, specific skill development, rather than as broad replacements for human mentors.  

Huang explains that AI agents can be useful as “mini teammates” in specific scenarios for specific skills, but not for broad responsibilities or influence – partly because of the tendency for AI chat tools to be positive to a fault.  

“We know growth doesn't happen in comfort. Part of what AI agents will have a hard time doing, especially for early-career employees, is that push to sit outside of their comfort zone and challenge them, as opposed to being overly agreeable and encouraging,” Huang says.

“This is why a lot of us love talking to AI, and some people even get addicted. But if the AI agent isn't set up with those specific responses to continue to challenge an individual, it can almost backfire and continue to reinforce behaviors that are not helpful.” 

AI agents training employees: risks, guardrails, accountability 

Huang highlights the risks of poorly designed AI agents, and recommends starting small, testing on a few use cases, and measuring outcomes carefully rather than making big moves too quickly. 

“Test it on two-to-three specific use cases and see if you're getting the results you like,” she offers.  

“Employees who are getting AI agents to help them with communication skills, for example – do they get better in three months?” 

HR leaders must ensure that employees understand when and how to use AI agents, she adds, and that there are clear escalation paths for issues that require human judgment. This also means regularly reviewing and updating AI systems, as they evolve quickly with use and can drift away from company values or goals.

“And I don't see a lot of organizations willing to do that type of proper measurement work, because the AI hype is just so loud.”

Keyhani sees potential for agents in organizations once they are properly designed and set up with the appropriate tools and knowledge, and as the technology continues to improve in accuracy and detail.

“They'll be able to act in ways that are really helpful. They can become instructors for organizations and train people in various ways,” he says. 

“I think there's huge, huge potential in learning, training, simulating situations for training with these LLMs.” 

Echoing Huang, Keyhani also stresses the importance of going slowly and testing before large implementations, focusing on smaller tasks and uses before widening scope.  

He also urges employers and HR professionals to use the tools themselves as an important starting point of overall organizational understanding.  

“Right now, I think there's just a lack of exploration. People need to go play around with the different tools,” he says.  

“The only way to do it is really to play with them and see what they're like. These new LLMs are like creatures who've entered our society and we have to understand how they work. I think they're not fully machine and not fully human, and we have to understand them as some third category.” 
