Many jobseekers are using the new tools – which raises questions around authenticity and misrepresentation, say experts citing the pros and cons of genAI in hiring
“This is the end of everything.”
That’s how some people might consider the growing use of generative AI tools such as ChatGPT, says David Gerhard, head of computer science at the University of Manitoba in Winnipeg.
But others may have a different take, he says.
“The sentiment around these tools is changing daily, depending on the industry you're in, depending on the kind of use these tools have. There are people who are looking at this and saying, ‘This is the future of everything.’ There are people who are looking at this and saying, ‘This is the end of everything.’”
It’s a dilemma that’s further complicated in recruitment: a recent survey found that more than half of jobseekers globally are turning to AI tools to enhance their job applications.
Companies may soon need to decide whether using AI in job applications is acceptable or a dealbreaker, says Gerhard.
"Each corporation needs to ask for themselves: ‘Is using AI to write a cover letter instantly disqualifying? I don't want AI cover letters. I want genuine human cover letters.’ Or is using an AI tool to help augment your application a good thing because you're using modern technology?
“Everybody will have a different take on that.”
AI’s role in resume and cover letter writing
Capterra's 2024 Job Seeker AI Survey, which polled nearly 3,000 jobseekers globally, found that 58% are using artificial intelligence tools in their current job search.
The most common use for AI tools is for writing or refining a resume, cited by 40% of the respondents, followed by finding relevant job openings (37%), writing or refining a cover letter (33%), and conducting mock interviews (31%).
Many recruiters view the use of these tools positively, seeing it as a skill, says Brian Westfall, principal HR analyst at software marketplace Capterra in Austin, TX.
"Last year, 86% told us if they discovered a job applicant had used ChatGPT to help write their cover letter or resume, it would change their opinion of the applicant in a positive way.”
For many, it provides a quick and convenient way to customize applications to specific jobs, says Gerhard, “especially when the number of jobs that people are having to apply to is getting bigger and the expectations that each job application is seeking are getting more different.”
For technical roles, the use of AI might be expected and even encouraged, says Lewis Curley, a partner at KPMG in Canada's People and Change Practice in Toronto.
"If I'm interviewing somebody for a heavily technical role... probably them using generative AI is what I would expect them to do, to augment their work," he says, stressing that organizations need to adjust their approach depending on the nature of the position.
Downsides to genAI: misrepresenting skills
However, more than a quarter of respondents in Canada may not be using AI appropriately in their job search, finds the Capterra survey, as some use it for completing a test assignment or skills assessment (24%), generating interview answers (25%), and applying en masse to jobs (27%).
These are areas where AI use by job candidates “clearly represents an attempt by jobseekers to deceive an employer or disrupt their hiring processes in Canada," says Westfall.
"If AI is being used to hide a skills deficiency, that’s when it becomes a cause for concern.”
Curley has seen candidates use generative AI during face-to-face interviews, keeping a secondary screen on which an AI tool listens in on the questions and generates responses.
And the use of AI during psychometric assessments, which are designed to evaluate a candidate’s personality traits, problem-solving abilities or skills, can also be a problem: AI tools can be pre-programmed to generate answers that align with desirable traits, potentially skewing the results, he says.
"If I’m using that tool to fundamentally present myself as somebody different, that’s probably not a great start for a relationship between an employer and an employee."
The use of AI during interviews raises ethical questions as candidates might not be honest about their qualifications, says Gerhard.
"I think at some point we're getting awfully close to the idea of plagiarism, where you're misrepresenting the work of something else for your own work,” he says.
"If the bot is giving you that answer, then the bot would be qualified for the job, not you."
‘It’s about how the tool is used’
However, context matters, says Westfall: “Is ChatGPT being used to augment a person’s existing skill set — or replace it?"
For example, a sales representative using AI to draft a cover letter might not raise any red flags. But for a creative role like a copywriter, where original writing is a key part of the job, AI-generated content could be seen as a negative, he says.
In many cases, the key issue is not whether AI was used, but how it was used, says Curley.
"Some organizations might think, ‘Well, I want a candidate that at least understands generative AI and uses it in an appropriate way to support them, because that's part of the world of work now,’" he says.
AI can be seen as just another tool in a candidate’s toolkit — akin to using Word or PowerPoint to design a visually appealing resume, he says. However, for employers, the focus should be on ensuring candidates use AI in a way that complements their skills rather than concealing a lack of them.
"It's about the appropriate use of that tool... how candidates use generative AI is probably a more important question than ‘Did they use generative AI?’" says Curley.
Employers should make sure their people understand how AI works and that it’s a new tool a lot of people are using, says Gerhard, “but you don't want them to be reliant on it, and you want to know when they're using it, so that you understand what parts of the job they're qualified for and what parts of the job they maybe need to build some credentials in.”
Challenges of identifying AI use
Detecting whether a candidate has used AI tools to create application materials or assist during interviews is not always straightforward. There are plagiarism checkers with AI detection features, but they are not foolproof, says Westfall.
"These tools aren’t perfect, but they can help flag AI content.”
Gerhard is skeptical about the effectiveness of these checkers.
"It's not easy to identify AI-generated content. There's a lot of companies who will purport to be able to do that, and they'll sell you an expensive tool that will claim to be able to do that, but the reliability of these tools is very much suspect."
Moreover, rejecting a candidate based on suspected AI use could lead to complications, especially if the accusation cannot be definitively proven, he says.
“It’s the same in the academic world, where if I accuse a student of using ChatGPT to cheat on a test, and they say, ‘I didn't,’ and you can't prove it, then we have an investigation on our hands.
“So, the first part of that is knowing what these detectors can do and what they can't do, and figuring out how to reasonably accurately identify whether somebody's used it or not.”
Eliminating phone interviews could help reduce the risk of candidates using AI to cheat during the interview process, says Westfall.
"Jobseekers will be more inclined to use AI teleprompter tools to feed them the right answers in interviews if you can’t see them.”
Should AI use be disclosed – or restricted?
The question of whether job applicants should be required to disclose their use of AI in the application process is a tricky one.
"I think you would have a problem with transparency if it was required,” says Westfall.
“There have always been job candidates that lie to get ahead, and that’s not changing anytime soon. Many would still use AI and say they didn’t.”
Given the potential for misuse, he says that it is reasonable for employers to set boundaries around how AI can be used during the hiring process.
"It is completely fair to tell candidates that they can’t use AI to lie about skills or credentials, or to feed them answers to interview questions.”
But enforcing such rules poses a significant challenge, says Westfall.
"Whether employers will actually be able to catch candidates that use AI for these purposes and remove them from job consideration is another challenge entirely."
Curley believes enforcing such a rule would be difficult and might also harm an organization’s image.
"If I’m an organization and I say, ‘You must not use generative AI,’ I’m not necessarily presenting a forward-looking perspective," he says, noting that such restrictions could be perceived as a lack of trust in candidates.
Encouraging transparency about AI use
Instead, Curley suggests open conversations with candidates about AI use, such as: “We’re happy if you’ve used it for your CV, we’ll be asking you questions on it, but face-to-face, we don’t want you to.”
Gerhard also says that organizations should be transparent about their stance on AI use during the hiring process and how it factors into decision-making.
"Individual corporations should have an idea of how they plan to use these tools. Make that clear to your potential employees, and then have that be part of the conversation: ‘How do you use AI tools? How did you use an AI tool to prepare for this interview?’" he says.
“You get past the use of the tool into the intentionality of the use of the tool.”
How interviews will change with genAI
As more jobseekers and companies integrate AI into the recruitment process, HR professionals should adapt, says Gerhard.
“Having real human conversations with people is going to be central to the future of all this kind of stuff, to confirm that the person that you're looking for that purports to have qualification X actually has qualification X.”
It’s also possible that the kinds of questions people ask in interviews will change, he says.
“In the past, we've asked questions that try to get at people's qualifications or understanding or history or expectations or experience, but now we may need to develop different questions that get to deeper knowledge and deeper understanding to differentiate between people who have genuinely done the work and people who have leaned on a chatbot."
Westfall says that recruiters may need to place more importance on references as a means of verifying candidates' skills and qualifications.
"Interviews will still be important to assess professionalism and cultural fit, but I think AI use actually puts greater pressure and importance on references," he says.
"Recruiters need to be more diligent than ever to follow up with trustworthy references who can verify if a candidate has a specific skill or experience.”
As AI becomes more prevalent in recruitment, it could also change the format and focus of interviews; companies may hold more in-person interviews, even for remote roles, as a safeguard against AI misuse, says Curley.
"We’ll see more organizations considering different modalities of interviews and probably bringing back the face-to-face coffee chat as the last step.”
Moreover, the shift toward face-to-face interviews may lead companies to focus more on internal talent development, he says.
"A refocus on internal talent recruiting may well be another way to give you a level of assurance around the candidates that you’re putting into roles," Curley adds, highlighting the advantage of promoting known talent from within the organization rather than relying solely on external hires.