Risky business: Survey shows ChatGPT often used by HR

But employment lawyer warns of legal, accuracy issues with generative AI

Generative AI tools have become a regular part of tech HR employees’ work lives, according to a new survey.

More than one in five (21%) are using ChatGPT for training and development, finds B2B Reviews.

ChatGPT is also being used for:

  • Employee surveys (20%)
  • Performance reviews (20%)
  • Recruiting (17%)
  • Employee relations (15%)
  • Compliance (13%)
  • Performance management (12%)
  • Administration (10%)
  • Onboarding (9%)
  • Employee warnings (5%)

And tech HR employees say they save roughly 70 minutes per week using ChatGPT, finds the survey of 213 HR professionals working in tech and 792 tech employees.

Privacy, accuracy concerns of ChatGPT

But can this practice be trusted, and what are some of the main legal risks for employers and HR professionals?

“It’s one thing to submit stuff that doesn’t have any personal information but the minute you input anything into the tool that has any kind of confidential or personal information in it, that tool keeps that — that’s a huge privacy problem,” says David Canton, a lawyer who leads the technology and privacy law and intellectual property practice at Harrison Pensa in London, Ont.

“There are several privacy commissioners around the world looking into that issue. The Canadian Privacy Commissioner, along with a couple of provincial counterparts, is actually conducting an investigation into ChatGPT for that very thing.”

Ontario’s privacy commissioner recently called for a “robust framework” to be put in place before government services widely deploy AI. 

While AI has been used for years to search out candidates on social media and Google, “if you’re going to use a chat tool as a search tool, that’s really risky because it’s infamously inaccurate,” says Canton.

“If you’re in the HR department, if people are using them to help generate correspondence, that’s certainly a good thing to do if it saves them time. The only thing you have to watch out for is that the AI tends to hallucinate the result. It’s not a set-and-forget thing, and you just have to be careful about what it’s putting out and making sure that it makes sense.”

Bias of AI tools

It’s important to double-check AI’s work when it comes to recruiting, and it should never be used to make a final decision on hiring, according to Canton.

“The other place where HR departments have to be really careful is if they’re using any kind of AI-based algorithm or AI-based tool to make decisions in the hiring process. There are huge problems with embedded bias and making decisions based on wrong reasons.”

Earlier this year, Workday, an HR and finance management software company, was sued in California after a Black candidate was denied employment despite being highly qualified, having applied for 80 to 100 positions through the software since 2018.

Cases like this should give HR professionals pause when relying on AI for these types of decisions, says Canton.

“There’s a ton of AI ethical standards around the world, and a common thrust of that is how do you deal with embedded bias and algorithmic transparency? How do you know why it’s making these decisions? And is it making decisions for the right reason?

“So that’s where the greater danger is for HR: if you want to start using it for decision-making, you really have to think that stuff through.”

Reputational risk of using generative AI

Besides these potentially damaging consequences, there is also “reputational risk,” he says.

“There’s nothing wrong with using the tools per se, but if there’s a perception out there in the marketplace that you’re making wrong or weird or bad decisions because you’re just giving it all up to the AI, then that’s not going to do your reputation any good,” says Canton.

This manifested recently when a lawyer was found to have filed court documents containing serious errors.

“It turns out that the lawyer had used generative AI to create the brief, hadn’t paid attention and just filed it, and the AI had completely fabricated six cases that didn’t exist, and even fabricated quotes from those fabricated cases. So, clearly, he got into huge trouble over it,” he says.

For HR, it’s time to consider preparing separate policies on the use of AI in the workplace, says Canton, “because within almost any company, somebody — if they haven’t already — soon will use these tools to do something.”

“It may be innocuous but it can get you in trouble for various reasons, and a policy is needed that basically says, ‘Before you start using these tools, you’ve got to think about these questions, and if you’re going to use it to generate anything proprietary in any way, shape or form, or it deals with confidential and personal information, you’ve got to figure these things out,’” says Canton.
