But expert warns of risks to proprietary information
Using ChatGPT in the workplace could put proprietary information at risk, according to one expert.
Human reviewers from other companies may read any of the generated chats, and researchers have found that similar artificial intelligence (AI) tools could reproduce data they absorbed during training, says Ben King, VP of customer trust at corporate security firm Okta, in a Reuters report.
"People do not understand how the data is used when they use generative AI services," he says.
"For businesses this is critical, because users don't have a contract with many AIs - because they are a free service - so corporates won't have run the risk through their usual assessment process."
ChatGPT used by 1 in 4 workers: survey
This comes at a time when many workers are using ChatGPT at work behind their employer’s back.
Specifically, some 28 per cent of workers regularly use ChatGPT at work, even though only 22 per cent say their employers explicitly allow such external tools, according to a Reuters/Ipsos poll of 2,625 adults conducted between July 11 and 17, 2023.
Also, some 10 per cent of surveyed workers say their bosses explicitly banned external AI tools, while about 25 per cent do not know whether their company permits use of the technology.
ChatGPT use has soared: between January and February, global use of OpenAI's generative AI technology jumped by 120 per cent, according to a report from DeskTime, a provider of workforce management solutions.
And generative AI tools have become a regular part of tech HR employees' work lives. More than one in five (21 per cent) use the tools for training and development, according to a previous B2B Reviews report. But 69 per cent of employees would feel uncomfortable to some degree if their company used AI tools to make layoff decisions, according to another report.
How companies are being safe with ChatGPT
Some companies that have embraced ChatGPT and other AI tools are taking steps to ensure they are safe to use.
"We've started testing and learning about how AI can enhance operational effectiveness," says a Coca-Cola spokesperson in Atlanta, Georgia, in the Reuters report, adding that data stays within its firewall.
"Internally, we recently launched our enterprise version of Coca-Cola ChatGPT for productivity." Coca-Cola also plans to use AI to improve the effectiveness and productivity of its teams, notes the spokesperson in the report.
Meanwhile, Dawn Allen, CFO at food and beverage supplier Tate & Lyle, is testing out ChatGPT, having "found a way to use it in a safe way".
"We've got different teams deciding how they want to use it through a series of experiments,” Allen says in the report. “Should we use it in investor relations? Should we use it in knowledge management? How can we use it to carry out tasks more efficiently?"
Geoffrey Hinton, the former Google executive dubbed "the godfather of AI", previously said that advancements in AI technology are pushing the world into "a period of huge uncertainty".
How do you integrate AI into work?
Here are some tactical tips for safely integrating generative AI in business applications to drive business results, as detailed in a Harvard Business Review article:
- Train generative AI tools using zero-party data – data that customers share proactively – and first-party data, which you collect directly.
- Review all datasets and documents that will be used to train models, and remove biased, toxic, and false elements.
- Ensure there’s a human in the loop in the use of AI.
- Collect metadata on AI systems and develop standard mitigations for specific risks, ensuring human involvement in the process.
- Have open lines of communication with community stakeholders to avoid unintended consequences of AI use.
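For illustration only, the data-review and human-in-the-loop tips above could be sketched as a simple screening step before any internal data reaches a generative AI tool. All names here (`BLOCKED_TERMS`, `review_record`, `filter_training_data`) are hypothetical, not from any specific library or the HBR article:

```python
# Hypothetical sketch: screen first-party records before they are used to
# train or prompt a generative AI tool, routing flagged items to a human.

# Assumed markers of sensitive or unwanted content (illustrative only).
BLOCKED_TERMS = {"internal_only", "ssn", "credit_card"}

def review_record(record: dict) -> bool:
    """Return True if a record should be flagged for human review."""
    text = record.get("text", "").lower()
    return any(term in text for term in BLOCKED_TERMS)

def filter_training_data(records, human_approve):
    """Keep clean records; flagged ones pass only if a human approves them."""
    kept = []
    for record in records:
        if review_record(record):
            if human_approve(record):  # the human-in-the-loop decision point
                kept.append(record)
        else:
            kept.append(record)
    return kept

if __name__ == "__main__":
    data = [
        {"text": "Customer asked about product sizing."},
        {"text": "INTERNAL_ONLY memo: merger details."},
    ]
    # Simulate a cautious reviewer who rejects everything flagged.
    cleaned = filter_training_data(data, human_approve=lambda r: False)
    print(len(cleaned))  # only the clean record survives
```

A real deployment would replace the keyword list with proper classifiers and an actual review workflow; the point of the sketch is simply that flagged data never reaches the model without a human decision.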
“Generative AI is evolving quickly, so the concrete steps businesses need to take will evolve over time. But sticking to a firm ethical framework can help organizations navigate this period of rapid transformation,” the article's authors say.