Over 1 in 4 Canadian companies ban use of generative AI over privacy concerns

New tool presents 'novel challenges,' requires 'thoughtful governance,' says expert

Over a quarter (27 per cent) of companies have banned the use of generative artificial intelligence (GenAI), at least temporarily, according to a Cisco report.

Sixty-three per cent have established limits on what data can be entered into GenAI applications, and 61 per cent restrict which GenAI tools employees can use.

That’s because more than nine in 10 (92 per cent) privacy and security professionals see GenAI as fundamentally different, requiring new techniques to manage data and risk.

"Organizations see GenAI as a fundamentally different technology with novel challenges to consider," says Dev Stahlkopf, Cisco chief legal officer. "More than 90 per cent of respondents believe GenAI requires new techniques to manage data and risk. This is where thoughtful governance comes into play. Preserving customer trust depends on it."

Top concerns about generative AI

While 79 per cent of respondents report getting significant or very significant value from GenAI, use of the technology could also put companies at risk, according to the report.

User concerns with GenAI include:

  • It could hurt the company’s legal and intellectual property (IP) rights (69 per cent)
  • Information entered into these tools could be shared with the public or competitors (68 per cent)
  • GenAI could produce incorrect results (68 per cent)
  • Use of GenAI is detrimental to humanity (63 per cent)
  • It could replace other employees (61 per cent)
  • It could take over respondents’ jobs (58 per cent)

And many GenAI users have already entered some sensitive information into the tools, according to survey respondents. This includes: 

  • information about internal processes (62 per cent)
  • non-public information about the company (48 per cent)
  • employee names or information (45 per cent)
  • customer names or information (28 per cent)

Personal information of employees at the Toronto Public Library, dating back to 1998, was exposed when the library fell prey to a cyberattack late last year.

And Trend Micro blocked more than 85.6 billion threats globally, consisting of email threats, malicious files and malicious URLs. One factor fuelling the increase in cyberattacks is companies’ adoption of AI, according to the report.

Employers have put good money into ensuring data privacy, spending US$2.7 million on privacy in 2023, according to Cisco’s survey of 2,600 privacy and security professionals across 12 countries.

Many leaders and employees across the world don't think their organizations will implement AI responsibly at work, according to a separate report.

What are the problems with transparency in AI?

Consumers are concerned about how AI uses their data today, yet 91 per cent of organizations recognize they need to do more to reassure customers that their data is used only for intended and legitimate purposes in AI, according to the report.

Overall, 60 per cent of consumers have already lost trust in organizations over their AI practices. Yet, companies are slow to implement changes to assure customers that their data is safe.

“We asked about this in last year’s Benchmark Study, and 92 per cent of respondents said their organizations needed to do more to reassure their customers that their data was being used only for intended and legitimate purposes in AI,” says Robert Waitman, director in Cisco’s Privacy Center of Excellence.

“We asked the same question this year, and the percentage had only dropped one per cent to 91 per cent, indicating that not much progress has been made.” 

To address these issues, companies should do the following, according to Cisco:

  • Provide greater transparency in how your organization applies, manages, and uses personal data; this will go a long way toward building and maintaining customer trust.
  • Establish protections when using AI for automated decision-making involving customer data, such as AI ethics management programs, keeping humans in the process, and working to remove biases from algorithms.
  • Apply appropriate control mechanisms and educate employees about the risks associated with GenAI applications.
  • Consider the costs and consequences of data localization, and recognize that local providers may be more expensive and may degrade the functionality, privacy, and security of your data compared to global providers operating at scale.
  • Continue to invest in privacy to realize the significant business and economic benefits for your organization.
