With laid-off staff urged to use AI for 'emotional and cognitive load,' experts cite risks, rewards of monitoring, supporting employees
As Microsoft prepares to lay off 9,000 employees globally — including at its Dublin base — a senior executive’s advice to those impacted has sparked conversation.
On LinkedIn, Matt Turnbull from Microsoft’s Xbox division suggested laid-off workers use AI tools like ChatGPT to ease the “emotional and cognitive load that comes with job loss.”
He later deleted the post, but the debate lingers: Can AI really support employees through such personal, often traumatic transitions?
AI expert Serena Huang cites the irony of the Microsoft gesture: “We were laying you off because of AI, and now you can go ask AI for some help.”
But she also has concerns about general, foundational AI tools like ChatGPT, Gemini or Claude being used for this purpose. For people whose mental health challenges require clinical intervention, generative AI tools will not be there to intervene.
“They cannot make sure that you are doing what you need to take care of yourself. It can only, using text, tell you what a recommendation is… technology wise, we're not quite there yet,” says Huang, founder of Data with Serena.
“It's really important for HR leaders, business leaders to make sure they are protecting their employees and not put the wrong tools in their hands.”
AI alone isn't enough for mental health
Drawing on informal experiments she conducted, Huang cites stark inconsistencies in how various AI tools respond to mental health crises. In a scenario describing possible self-harm, some tools urged her to call a hotline right away, while others reassured her she was doing OK and just needed to be positive.
As she notes, these tools are programmed to agree with the user by default, which can be “scary” when it comes to mental health.
“It tends to say nice things, and unless you tell it not to, it's not going to stop you. It will be very encouraging, even if you're having some negative thoughts,” says Huang, citing as an example a chatbot that encouraged a person with anorexia to eat less.
Context is also missing: a chat tool is text-based only, so it misses the facial expressions and verbal cues that a therapist, friend or doctor would notice, she says.
“There's a lot of ways it can go wrong, even if it's designed specifically for that purpose.”
Huang adds that some patients are already using AI as a companion between therapy sessions, then discussing insights with their clinicians. This kind of human-AI synergy, she says, is the ideal approach.
“They’re having that exchange, and then they’re bringing it back to the human therapist … to continue the therapy. I think that’s an amazing… very creative use of AI for mental health but definitely not for a crisis at this moment.”
AI sentiment analysis: pros and cons
Huang is also measured about AI’s sentiment analysis capabilities: analyzing text to determine its emotional tone and classifying it as positive, negative or neutral, for areas such as customer feedback, employee engagement – or mental health.
Such tools often struggle with cultural nuances and context, she says.
“In cultures where people tend to downplay emotions, where they are maybe less direct in a communication, this can be very problematic,” says Huang. “Americans are a little bit more direct than Canadians, for example, and that can come across as rude.”
We just can’t expect AI to be 100% accurate, she says.
“There's just too much nuance to assume AI can do it all. I think, definitely, AI can take a first pass, and then human should validate whether or not they agree with the AI conclusions before doing anything.”
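To make that “first pass, then validate” idea concrete, here is a minimal, hypothetical sketch – not a tool Huang or any employer mentioned uses. An off-the-shelf sentiment classifier labels each comment, and anything the model is not confident about gets routed to a person. The model choice, confidence threshold and sample comments are all illustrative assumptions.

```python
# Hypothetical sketch of "AI takes a first pass, then a human validates":
# an off-the-shelf sentiment model labels each comment, and anything the
# model is not confident about is routed to a person for review.
# The model, threshold and sample comments are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

comments = [
    "I'm managing fine, thanks for asking.",
    "Honestly, this restructuring has me completely worn out.",
]

REVIEW_THRESHOLD = 0.85  # below this confidence, don't act on the AI label alone

for comment, result in zip(comments, classifier(comments)):
    label, score = result["label"], result["score"]
    flag = "-> send to human reviewer" if score < REVIEW_THRESHOLD else ""
    print(f"{label:<8} {score:.2f} {flag} | {comment}")
```

The threshold is the point Huang is making: the model’s label is a starting point for a person to check, not a conclusion to act on.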
However, there is value in monitoring trends on an ongoing basis, says Huang: noticing, for example, when a group’s sentiment suddenly shifts from positive to very negative, or from negative to neutral.
“You could do a simple analytics, like the percentage of comments that mentioned anxiety or burnout, for example… and this is a metric you're monitoring across groups. Suddenly, this one city, this one office, is showing much higher anxiety than the week before — that could be a signal to call the office leader and see what's going on locally.”
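A rough sketch of the kind of aggregate, group-level metric Huang describes: the share of comments per office that mention anxiety or burnout, compared week over week, with a sudden jump flagged for follow-up. The keyword list, threshold and sample data below are assumptions made up for illustration.

```python
# Hypothetical sketch of the aggregate trend metric described above:
# track the share of comments per office that mention anxiety or burnout,
# and flag offices where this week's share jumps sharply over last week's.
# Keywords, the jump threshold and the sample data are illustrative assumptions.

KEYWORDS = ("anxiety", "anxious", "burnout", "burned out")

def share_with_keywords(comments):
    """Fraction of comments containing any monitored keyword."""
    if not comments:
        return 0.0
    hits = sum(any(k in c.lower() for k in KEYWORDS) for c in comments)
    return hits / len(comments)

def flag_spikes(last_week, this_week, jump=0.15):
    """Return offices where the share rose by more than `jump` week over week."""
    flagged = []
    for office, comments in this_week.items():
        prev = share_with_keywords(last_week.get(office, []))
        curr = share_with_keywords(comments)
        if curr - prev > jump:
            flagged.append((office, prev, curr))
    return flagged

# Example with made-up survey comments
last_week = {"Dublin": ["Great sprint", "Feeling good"], "Toronto": ["All fine"]}
this_week = {"Dublin": ["Burnout is real here", "Anxious about layoffs", "OK"],
             "Toronto": ["All fine", "Busy but fine"]}

for office, prev, curr in flag_spikes(last_week, this_week):
    print(f"{office}: {prev:.0%} -> {curr:.0%} of comments mention anxiety/burnout")
```

The output is a per-office trend rather than a per-person score – the distinction Huang returns to when discussing transparency below.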
Using AI for productivity
Having advised Microsoft, Facebook and Citibank on early AI hiring systems, Vivienne Ming, chief scientist and board director at Dionysus Digital Health, recalls mixed motivations behind early use cases.
“Some of the things people asked for were pretty dark: ‘Can you help us identify people that are likely to make workman's comp claims or are likely to unionize so that we can essentially keep them out of our employee base? Can you use AI to essentially get people to choose to leave when you feel it's time for them to go, rather than having to actively let them go?’” she says.
Now, Ming urges realistic expectations. AI performs well with common patterns — less so with outliers, she says.
“Employees that are more unusual, more non‑neurotypical, employees that are kind of outliers, it’s going to do less and less well in modeling them and making accurate recommendations.”
Often top performers, they may fall through algorithmic cracks, she says, so the tools must augment judgment, not replace it.
“If you're a manager, a hiring manager, an HR manager, or just a direct manager, then a part of your job in an AI-rich world, an AI-rich company, is knowing where the tools will work and where they won't, and where you need to take over,” says Ming.
“Until companies really do some training to get these managers ready for that kind of decision making, then you're inevitably going to have risks that the AI will just treat everyone as an average person, making solid recommendations for most people, but terrible recommendations for some.”
Cultural and gender bias in AI systems
Ming also cautions that AI outputs will reflect the biases of their training data, including gendered assumptions about roles and behaviours.
But there are ways of getting around it, she says, “by being very thoughtful about how you cue these systems to think differently about different populations. But again, that takes training. It takes some experience working with them.”
“If you just kind of turn it loose, and a person new to this says, ‘Hey, write an evaluation of all of my employees,’ it will reflect very much gender bias; it'll have a lot of cultural biases built into it.”
But Ming highlights a positive counter-example — repurposing AI to connect employees with internal opportunities:
“What if we built a tool that found new opportunities inside the company?... How do we identify new projects that people could be working on, new connections or teams, mixtures of employees that could actually make them more productive?”
Such redeployment efforts can benefit from AI's ability to detect team fit and skill alignment, she says: “The idea is, if they were on the wrong team, then their ideas aren't getting heard, the personality matches aren't there. But if you use the AI to find a better match inside the company, then suddenly you recover all of this productivity in this employee that you're otherwise missing.”
Providing transparency in the age of AI
Even when tools are secure, employees may avoid them out of privacy concerns, Huang warns. As a result, she recommends full transparency from HR – especially since monitoring revealed by a disgruntled employee can draw unwanted social media attention.
“Everyone just wants to know: ‘What does my manager see? What does HR see at the end of the day?’ That’s it.”
If, for example, you’re doing an aggregate of overall sentiment by group, you could explain: “We care about your mental health, we're now monitoring things like the percentage of comments that may point to some level of anxiety or burnout, or depression, self-harm, these are things that we're monitoring, and we are not looking at individuals, but we're looking at trends.”
It's very much the responsibility of an employer to let people know how that information will be used and transmitted, agrees Ming.
But there should also be transparency about who is handling the data, such as a third-party vendor.
“Whether you're buying big foundation model access from OpenAI or Microsoft, or you're buying even a more specialized tool from a small organization, that almost certainly means your data is being transmitted outside the company and being processed by their models.”