Over-reliance on generative AI top ethical concern for workers: Report

'There is a need to evaluate those AI outputs and those AI results,' says expert

As the use of generative artificial intelligence (AI) becomes more commonplace, over-reliance on this technology could pose problems for employers.

In fact, over-reliance (43 per cent) is the top ethical concern that workers cite in a recent study from GetApp, which helps businesses find trusted software.

This ranks higher than privacy and data security (41 per cent), job displacement (36 per cent), misuse of AI-generated content (29 per cent) and a lack of transparency around data usage (21 per cent).

This over-reliance happens when “users start accepting incorrect inputs or outputs generated by AI, and then they heavily rely on them,” says Smriti Arya, an analyst with GetApp and Capterra, speaking with Canadian HR Reporter.

This could pose problems for an organization, including “miscommunication within the business,” she says.

Evaluating results of AI at work

And five per cent of respondents say they don’t monitor results generated by AI and don’t find it necessary to do so.

This can be a huge problem, as fewer than half of workers report that their employers have any of the following safeguards in place, according to GetApp’s survey of 660 workers:

  • 34 per cent of generative AI users have key performance indicators (KPIs) in place to monitor generative AI outputs
  • 33 per cent say a dedicated team evaluates those results
  • 31 per cent say they compare human outputs with AI outputs
  • 15 per cent say employees must review and sign the company policy before using generative AI tools
  • 12 per cent say their company does not currently monitor these results but feel it should start doing so

“Considering this data, we can infer that there is actually a need to evaluate those AI outputs and those AI results, and match them with human collaboration,” says Arya. “When AI and humans work together, the situation may seem acceptable. [If not, the results can be] unproven and problematic.”
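To make that human-AI comparison concrete, here is a minimal sketch of one way a team might spot-check AI-generated text against a human-written reference and flag divergent items for review. It is illustrative only: the sample data, the difflib-based similarity measure and the 0.6 threshold are assumptions for the example, not anything prescribed by the GetApp study.

```python
# Illustrative sketch: flag AI outputs that diverge from a human reference.
import difflib

def similarity(ai_text: str, human_text: str) -> float:
    """Return a 0-1 similarity ratio between two pieces of text."""
    return difflib.SequenceMatcher(None, ai_text, human_text).ratio()

def flag_for_review(pairs, threshold=0.6):
    """Yield (item_id, score) for AI outputs that diverge from the human reference."""
    for item_id, ai_text, human_text in pairs:
        score = similarity(ai_text, human_text)
        if score < threshold:
            yield item_id, round(score, 2)

if __name__ == "__main__":
    # Hypothetical sample pairs of (id, AI draft, human-written reference).
    samples = [
        ("reply-1", "Refunds are issued within 5 business days.",
                    "Refunds are issued within five business days."),
        ("reply-2", "Our warranty covers accidental damage.",
                    "Accidental damage is not covered by the warranty."),
    ]
    for item_id, score in flag_for_review(samples):
        print(f"{item_id}: similarity {score} is below threshold; route to a human reviewer")
```

In this sketch, the first pair passes because the AI draft and the human reference say the same thing, while the second pair is flagged because the AI draft contradicts the human answer, the kind of discrepancy a human reviewer would need to catch.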

Some 28 per cent of workers regularly use ChatGPT at work, even though only 22 per cent say their employers explicitly allow such external tools, according to a previous Reuters/Ipsos report. 

And there was a 27 per cent year-over-year increase in the number of cybercriminal activities blocked by Trend Micro, an increase in which AI played a role, according to a previous report.

Preventing over-reliance on AI

A well-defined strategy for the use of generative AI at work should be in place to prevent workers from relying on the technology too much, says Arya.

This strategy should detail “to what extent AI tools should be used at work and how much [workers] should trust AI,” she says.

Employers should also provide generative AI training or education programs so workers “know how AI actually works, how AI algorithms work.”

This is because generative AI “works on a lot of data sets which can be sourced from anywhere from the Internet,” she says.

“We are not sure about the sourcing of the data. So there should be proper training programs for employees, awareness programs for employees on the usage of AI, and how it works.”

In June, Ontario's Information and Privacy Commissioner, Patricia Kosseim, called on the Ontario government to put in place a “robust framework” to govern the public sector's use of AI technologies.

Regularly evaluating AI performance is also one way employers and managers can ensure workers do not rely too heavily on generative AI at work, says Alexandria Simms, a graduate assistant in psychology at The Chicago School of Professional Psychology.

“Continuously assess the performance and effectiveness of AI systems,” she says via LinkedIn.

“Encourage individuals to provide feedback on AI-generated outputs and recommendations, highlighting any discrepancies or areas where human judgment might be required. Actively seek input from employees and incorporate their ideas and understandings into the AI system's training and improvement processes.”
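As a hypothetical illustration of that kind of continuous assessment, the sketch below tracks one possible KPI, the share of AI outputs that reviewers accept each week, and flags weeks that fall below a target. The review records, the 0.8 target and the alert wording are all assumptions made for the example, not figures from the study.

```python
# Illustrative sketch: track a weekly acceptance-rate KPI for AI outputs.
from collections import defaultdict

def weekly_acceptance_rate(reviews):
    """reviews: iterable of (week_label, accepted) pairs -> {week_label: rate}."""
    totals = defaultdict(int)
    accepted = defaultdict(int)
    for week, ok in reviews:
        totals[week] += 1
        if ok:
            accepted[week] += 1
    return {week: accepted[week] / totals[week] for week in totals}

if __name__ == "__main__":
    # Hypothetical reviewer verdicts on AI-generated outputs.
    reviews = [
        ("2024-W01", True), ("2024-W01", True), ("2024-W01", True),
        ("2024-W01", True), ("2024-W01", False),
        ("2024-W02", True), ("2024-W02", False), ("2024-W02", False),
    ]
    TARGET = 0.8  # assumed KPI threshold, not from the study
    for week, rate in sorted(weekly_acceptance_rate(reviews).items()):
        status = "on target" if rate >= TARGET else "below target; investigate"
        print(f"{week}: acceptance {rate:.0%} ({status})")
```

A drop in a metric like this would prompt exactly the kind of follow-up Simms describes: gathering employee feedback on the flagged outputs and feeding it back into how the AI system is used and improved.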
