'These things certainly have risks that we just don’t understand yet because they’re so new'
Many organizations are using some form of AI to support business processes or HR-related functions.
And today, many companies are offering wellness checks or mental health support to employees through AI technology.
But what are the upsides and, more importantly, the downsides of employing this type of technology? Canadian HR Reporter heard from two experts about what employers might want to keep in mind when deciding whether or not to offer this to employees.
“When you have Microsoft, Amazon and Google all saying this is not to be used for health, and then you have people taking the underlying technology and packaging it for mental health, you know that something is not ideal,” says John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center and assistant professor of psychiatry at Harvard Medical School in Boston.
“The things that it can do for wellness checks, say respond to: ‘How are you doing?’ Those are all certainly useful, but that’s not mental health.”
While it may offer some vague benefits, the technology is nowhere near ready to be deployed as a diagnostic tool, says Paul Ralph, professor in the Faculty of Computer Science at Dalhousie University in Halifax.
“It may be possible to take a big database of text written about people who have a particular psychiatric disorder or mental health issue, and teach a computer to recognize from the kinds of things that they write that they might have this issue – it might be possible but that’s not necessarily what the companies are doing,” he says.
Potential for ‘catastrophic’ outcomes
In its current form, AI is not up to the task of diagnosing a specific health problem, says Ralph.
“This will be catastrophic, because what a large language model (LLM) essentially does is, given some text, predict the word or phrase that is most likely to come next, in the most generic way possible.”
As an example, AI is able to write a “generic, templated, boring song” because of this next-word prediction, but “if you think about the way these models are trained, they just dump all of the text they can possibly find into the model to train it,” says Ralph.
“But all of the text they can possibly find is not the scientific literature on mental health and everything written by qualified psychologists. It’s everything written by everybody.”
AI not ready to provide health care
Because the technology is still in its early stages, benefits providers should avoid offering these types of services, says Torous.
“We would almost be doing ourselves a disservice if we began to say that is what mental health care is. I do think these technologies have a lot of interesting potential, but I think it would be remarkable if they gained public prominence, and we said, ‘They now know how to do therapy already.’”
Most of the companies offering a mental health or wellness service are being careful in what they promise, he says.
“Any of these companies just won’t, in the fine print… say they’re offering medical or psychiatric help because they’re not and that will get them into a lot of hot water. So managers, benefits people, have to have a critical eye and really look at what they’re buying, and ask: ‘What is known about it? What are potential risks?’”
In effect, most of the content produced by a lot of AI tools is simply “nonsense,” says Ralph.
“Imagine that instead of a psychologist, you had a really fantastic bullshitter, a great con artist, who was pretending to be a psychologist. That’s what the LLM is. It’s a con artist pretending to be a psychologist, and so they will say things that sound plausible but have no basis in fact and that’s what makes it so dangerous.”
Chatbot gives harmful advice
Already, one health organization has unveiled an AI tool that it thought would help but that instead turned out to be harmful, says Torous.
“In the United States, we had a case of a chatbot called Tessa that was rolled out by the National Eating Disorders Association; it had to be taken off the internet because it gave harmful advice to people about eating disorders. These things can certainly have risks that we just don’t understand yet because they’re so new.
“It would be very premature to tell people that’s their mental health benefits based on where we are now today.”
Governments should play a role in regulating and educating the public about these potential harms, says Ralph.
“If you’re talking about using AI as a health intervention, it should be subject to the same stringent evaluations as any other health intervention. If you want to bring a drug to market, it has to go through extensive empirical testing, not just to show that it works but also to investigate the severity and frequency of side effects. There is no such testing of AI e-health interventions and, in fact, psychological e-health interventions don’t get tested to the same degree that, for example, drugs do.”
At this point, there is not enough evidence proving the technology’s efficacy, says Torous.
“We’ve seen a lot of feasibility data and of course people can interact with a chatbot and may momentarily feel fine, but we don’t actually have that kind of high-quality evidence that you would have if there’s a new cancer drug. You would say show me the high-quality evidence, and we have to just make sure that we demand the same standards for technology and mental health because if we don’t, we’re going to end up with products that actually don’t work.”