AI as 'teammate'? Not so fast, say experts warning it could be 'dangerous'

New research says employees are increasingly thinking of AI tools as teammates, but two experts explain why it's too soon – and potentially harmful


New research is shining a light on how the human-AI relationship is evolving in the workplace. While AI is starting to deliver on its promised productivity gains, that comes at a price: the most productive AI users are also becoming the most burned out, disengaged and likely to quit.

The report from Upwork, based on survey findings from 2,500 global knowledge workers in Canada, the U.S., U.K. and Australia, states that “AI is now a teammate, not just a tool,” with 90 per cent of workers viewing AI as a coworker. 

According to Mohammad Keyhani, associate professor at the Haskayne School of Business at the University of Calgary, this finding, while seemingly rosy on the surface, could be problematic.  

“We're sort of at the cusp at that moment where, right now, most people don't want to just give business decision-making authority to AIs, and there's good reason for that,” he says. 

“But maybe we're on that verge of the moment where we start just delegating things to them, because they seem to be making good decisions. There's a danger in that, because then we don't know who to hold accountable and responsible if something goes wrong.” 

AI is not at teammate level – yet  

The Upwork report outlines how AI tools are evolving into “teammate” roles – a designation which Keyhani stresses is too much, too soon: “Making it seem human when it actually isn't.” 

Rather than labelling AI agents and other tools as teammates, he says, organizations should be focused on the details, such as what decision-making power to give AI and who will be responsible for the outcomes.

“The questions we should be asking is ‘How much autonomy are we willing to give AIs, who's going to take responsibility for it?’” Keyhani says, adding that in his classes he tells his students they are all “cyborgs” with expanded capabilities. 

“With your cyborg powers, you have to go explore it, figure out what you can do, and you are the ones ultimately who take responsibility for the AI's work when you bring it to the organization.” 

Source: Upwork Research Institute

In some instances, Keyhani says, organizations are experimenting with “extreme forms of automation” where AI is given decision-making power – for example, he cites recent uses in marketing where AI bots are created to scrape LinkedIn profiles and then write and send emails to potential clients based on their data. 

“Those emails are being automated with the human sender not even having read the emails, so they don't even know what they're offering. They're just trusting the AIs to send good emails,” Keyhani says, emphasizing that for him, current AI technology is not dependable enough to make business decisions. 

“If you let an AI make that decision and act independently on its own without a human taking responsibility for it, that could be dangerous,” he says. 

“You can easily have a human go through the draft emails before clicking ‘send.’ But if you allow the AI to just click ‘send’ on its own, then I think you're starting to cross a red line.” 
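Keyhani's "human before 'send'" safeguard can be sketched as a simple approval gate. This is a hypothetical illustration, not any specific product's API: the `DraftEmail` type, the `approve` callback and the stand-in `send` function are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class DraftEmail:
    to: str
    subject: str
    body: str

def send(email: DraftEmail) -> str:
    # Stand-in for a real mail client; just reports what would be sent.
    return f"sent to {email.to}: {email.subject}"

def review_and_send(drafts, approve):
    """Dispatch only the AI-drafted emails a human reviewer approves.

    `approve` is the human-in-the-loop: a callable that inspects each
    draft and returns True to send it or False to hold it back.
    """
    sent, held = [], []
    for draft in drafts:
        if approve(draft):
            sent.append(send(draft))
        else:
            held.append(draft)
    return sent, held

# Example: a reviewer who holds back any draft that quotes a price
drafts = [
    DraftEmail("a@example.com", "Intro", "Hello, I saw your profile..."),
    DraftEmail("b@example.com", "Offer", "We can do this for $99..."),
]
sent, held = review_and_send(drafts, approve=lambda d: "$" not in d.body)
```

The design point is that the AI never reaches `send` directly; every outgoing message passes through a checkpoint a named person is accountable for.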

AI ‘sycophancy’ and employee wellbeing 

Upwork reports that, in addition to experiencing burnout at higher levels, top AI users say:

  • 85 per cent of top AI users say they are nicer to AI than to their human coworkers 

  • 67 per cent express greater trust in AI than in their colleagues 

  • 64 per cent report having a stronger relationship with AI than with their human teammates

Nicholas Vincent, assistant professor of computing science at Simon Fraser University, agrees with Keyhani that AI workplace tools are not at the level of "teammate" just yet, and adds that treating them as such can have detrimental, long-term effects on employee wellbeing.

This stems from a problem researchers call "AI sycophancy".

“AI systems, the way that they're trained right now, are really good at saying what you want to hear,” Vincent explains. 

“Really good mentors oftentimes have a way of flagging your bad ideas, and doing it in a kind, thoughtful way. In theory, you could imagine AI systems getting trained or set up or prompted specifically to do this, but the current default is leaning more towards telling you what you want to hear.” 

Keyhani emphasizes that correlation does not always equal causation, meaning that although a high percentage of AI top users report burnout, that does not necessarily mean the AI use is what’s causing it.  

The cause is likely much more complex than that, he says. 

“People who work really hard also are the people who are trying to figure out how to use AI to get even more done, to leverage their capabilities,” says Keyhani.  

“It's not surprising to me that the people who are just burning out because they work a lot are also the people trying to use AI. It may actually be a reverse causation, where the people who are overloaded are the ones that are the most motivated to figure out how to use AI in their work.” 

Transparency about employee data and enterprise AI 

Vincent flags an issue that, while not a pressing concern yet, is something employers and HR should be mindful of to prevent legal risks down the line: when mandating enterprise-wide AI systems, where is employee data being stored, shared, and used to train models? 

For the first time, he explains, workers using these systems are creating detailed logs and tracking information of not only their work output but things like their routines, processes and habits.  

"Now you have all these fine-grained details" of employee data being created and sent through AI systems, Vincent says. "But then the more concerning thing is that those kinds of logs are exactly the data that your boss would want if they wanted to train a model specifically to replace you. I think this is really, really concerning, that there's almost no way to use AI throughout your workday without creating this trace data."

Most employees using enterprise AI systems aren't aware of how their organization collects and stores that data, who owns it, or whether it is being used to train the system, Vincent says. He warns that employers who wait to communicate clearly about this could face pushback and even legal risk down the line, if employees feel their privacy has been breached.

“There's a really tough balance to be struck here, if you're leadership in a company,” he says.  

“You have this pressure on you to compete and to be doing whatever it takes to maximize productivity within your organization … [try] to give workers more reassurance, a lot of transparency: ‘Here is what's going to happen to that data. Here's what's going to happen to those logs of what you did all day.’ Are the people at Microsoft or Google going to be able to see it? Is your direct boss going to be able to see it? Three levels up the org chart, can they see it too?” 

“I think that's going to be a really important issue for the workplace.” 

Way forward for implementing AI at work 

Agreeing with Upwork’s recommendations, Keyhani suggests employers focus their attention on building a proactive AI game plan, covering intended uses, oversight and privacy concerns.

And he reminds employers that referring to current AI tools as “teammates” is reductive and even misleading.

“Calling an AI a teammate has another danger, of implying that there's nobody else who is taking responsibility for it,” he says. 

“You know, it's just part of the team. It has its own thing, and its opinions are as valid as anyone else on the team. My opinion is that even if that AI is smarter and more capable than everyone on the team, someone ultimately is responsible for what we accept from it and what it does.” 

For now, he says, the closest organizations should come to treating AI as a teammate is assigning it low-level decision-making.

“For example, when multiple employees disagree with each other on something and that something is within the capabilities of AI to figure out and settle that situation, I could imagine someone going to an AI and saying, ‘In this debate, which one do you agree with and why?’” Keyhani says.  

“But even then, whoever is asking the AI that has to take ownership and responsibility of the output. We are not at the point, in most organizations, where we just let an AI decide on those things and then go with it.” 
