Setting the rules for AI: why HR should take the lead

As 'AI assistants' enter the workplace, Canadian HR professionals need to implement guidelines

Goldman Sachs’ recent rollout of its proprietary GS AI assistant highlights the potential benefits of AI adoption for employers — but also underscores the importance of company-wide consistency for its use, including guidelines designed by human resources.

As AI technology rapidly transforms workplaces, Canadian HR leaders face an urgent challenge: how to integrate tools like AI assistants while safeguarding sensitive data and maintaining productivity.

And establishing guardrails and guidelines for employees is essential for successful AI integration, according to Gene Lee, associate professor of information systems and analytics at the Sauder School of Business at UBC.

Without them, organizations run significant risks as AI use in the workplace becomes more common without consistent regulation.

“You need to worry about your confidential data. You don't want to leak your corporate data, internal data,” Lee says.

“There should be some clear guidelines on what things are allowed in AI … I think HR should be the ones who will provide guidelines, like how exactly they can use it, what can be done, what cannot be done with AI.”

AI as a tool, not a replacement

The rapid rise of AI has sparked concerns about job security, with many workers fearing that these tools will replace human roles. Lee emphasizes that AI assistants are best thought of as tools to enhance, not replace, human effort.

As reported by CNBC, Goldman Sachs’ CIO Marco Argenti has expressed his hope that in a few years the AI assistant will “actually reason more and become more like the way a Goldman employee would think.”

This prospect might seem daunting to some employees who are afraid they are training the tool to take over their own roles, according to CNBC’s report. However, Lee’s outlook is more optimistic, especially when the tools’ limitations are taken into account.

He says an AI assistant should be thought of as an intern or junior employee helping with tasks.

“Many of these tools are just similar to hiring someone to help you out,” Lee says.

“If you hire an intern, and then the intern will do some initial work for you, you need to check if it's correct or not, if it's good quality. You need to make some changes. I think it’s the same thing.”

Confidentiality and security: the biggest AI risk

One of the main concerns for organizations adopting AI is data security.

Public AI tools, such as ChatGPT, present a risk of leaking confidential company information, which is why Lee strongly suggests organizations invest in developing their own proprietary large language models (LLMs), such as the one Goldman Sachs has rolled out.

“Because of the confidentiality issue, consulting firms and investment banks, they couldn’t really allow their employees to use ChatGPT,” he says.

“They have introduced this internal AI tool. It’s safeguarded, so even though they use it, the confidential info will not leak to the cloud.”

For Canadian employers, especially those without the resources to develop custom AI tools, creating clear policies for using public AI models is crucial; Lee warns of the consequences of ambiguity.

“There are some cases where employees, they were allowed to use it, they put some ‘secret sauce’ to the internet, and they got fired. So, there should be definite guidelines.”

Where AI excels — and where it struggles

AI assistants are particularly effective for repetitive or structured tasks, such as summarizing documents, formatting data, or drafting emails. But when it comes to critical thinking or generating creative ideas, the technology still falls short, Lee says.

Experience plays a key role in leveraging these tools – and since employees use the technology at different levels and for different purposes, uniform internal education on how to properly and safely use the tools is critical, he says.

Lee recommends comprehensive – and proactive – training on the technology: “What are the strengths of these AI tools? What are the weaknesses? In what scenario we can rely on them?”

With AI tools like Goldman Sachs’ assistant offering employees the ability to summarize meetings, draft emails, translate documents, and create presentations, the possibilities are exciting. But, as Lee points out, without proper oversight, these tools can also introduce significant risks.

How HR can guide AI adoption

For Canadian HR professionals, creating and enforcing AI use guidelines is more than just a precaution—it’s a responsibility. Lee outlines several best practices for organizations adopting AI:

  • Create clear and detailed training and use guidelines for employees, tailored to the needs of that particular organization. Guidelines should outline the strengths and weaknesses of particular tools, appropriate uses, and any ethical or security concerns.
  • Allow employees to explore various tools to find the best fit for their work. HR professionals can play a key role by guiding employees through this process, recommending tools tailored to their roles, and fostering collaboration across departments.
  • Don’t rely on just one tool; as Lee explains, different platforms excel in different areas – for instance, Google’s Gemini may outperform others in certain tasks, while tools like ChatGPT or Perplexity.ai are better suited for academic research or summarizing large documents.

Lee emphasizes that while employees should be encouraged to use AI, clear boundaries are essential.

“Employers should give them the opportunity to explore these tools…but at the same time, there should be some guardrails.”

 
