'How do you validate — especially when you have remote employees and a global workforce — that a person you're talking to actually is real?'
Imagine hiring a new employee after several video interviews. They’re personable, knowledgeable and seem like a great match.
But after a few weeks, something doesn’t seem right. They don’t always turn on their camera for calls or meetings, and when they do appear, their appearance is off. Their verbal responses also seem slightly delayed.
After further investigation, the truth comes out: you hired a “deepfake,” not a real person, and they accessed confidential or sensitive information while in your employment.
I have heard firsthand of this happening at a large employer here in Canada, so it should clearly be a concern for HR when it comes to hiring and cybersecurity.
“This is definitely a threat vector that people need to think about,” says Saurabh Shintre, an AI security expert based in San Francisco.
“How do you validate — especially when you have remote employees and a global workforce — how do you authenticate that a person you're talking to actually is [real]? And… even if they're real, are they really the person they're claiming to be?”
In 2022, the FBI warned of an increase in complaints about the use of deepfakes and stolen personally identifiable information (PII) when it came to applications for remote work and work-at-home positions.
What exactly is a ‘deepfake’?
The more technical term for deepfake is often synthetic media, says Siwei Lyu, a SUNY Empire Innovation Professor at the University at Buffalo.
“It’s audio-visual media created or edited using generative AI technologies. We call them deepfakes, but mostly we’re referring to images or videos of someone’s face, or their voice, which look realistic but are actually not real and are created by advanced AI technologies.”
The first known deepfakes to carry that name appeared back in 2017, he says, “so we're talking about only six, seven years, but there have been a lot of advances in six, seven years.”
In North America, the proportion of deepfakes rose sharply from 2022 to Q1 2023, jumping from 0.2% to 2.6% in the U.S. and from 0.1% to 4.6% in Canada, according to Sumsub.
A quick search online shows there are apps and websites offering deepfake generators. Essentially, it’s a technology that relies on new machine learning models, says Shintre.
“It's basically saying that if I give the model a lot of samples of something, then I can produce more samples that look exactly like that under different settings.
“What that means is that if I throw a lot of pictures of a person to a model, or a lot of audio clips of that person, and train the model specifically on that face and that voice, I am then able to generate more pictures and more voice samples of that individual in different settings — like smiling or angry or saying a particular thing — or take an existing voice sample, which might be somebody else's voice, and convert it into how it would sound in the target person's voice.”
It’s called a deepfake because the underlying technology is a “deep neural network,” which is used to fake, or synthesize, a voice or a persona, he says.
And while impersonation and mimicry have been around for a very long time, what is unique about this technology is that it can be done at scale, and it continues to get better every day, says Shintre.
“We can generate these kinds of deepfakes fairly quickly, near real time, and without requiring a lot of samples of that person. So you can just take a few pictures or a very small audio sample and be able to synthesize their voice, for example.”
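For technically minded readers, the sketch below is a deliberately simplified illustration of the pattern Shintre describes: feed a generative model many samples of one person, then draw new samples from it. It is not any real deepfake system; the tiny PyTorch generator, the random stand-in “photos” and the plain reconstruction loss are placeholders chosen for brevity, whereas real tools rely on far larger models and adversarial or diffusion-style training.

import torch
import torch.nn as nn

# Placeholder "generator": maps a random latent vector to a fake 64x64 image.
class TinyGenerator(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 64 * 64),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 64, 64)

# Stand-in for "a lot of pictures of a person": random tensors, not real photos.
target_person_images = torch.rand(200, 1, 64, 64)

generator = TinyGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy objective: nudge generated samples toward the target person's images.
# Real systems use adversarial or diffusion objectives, not a plain pixel loss.
for step in range(100):
    z = torch.randn(16, 32)
    fake = generator(z)
    real = target_person_images[torch.randint(0, 200, (16,))]
    loss = loss_fn(fake, real)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Once trained, new "samples of that person" come from fresh latent vectors.
with torch.no_grad():
    new_faces = generator(torch.randn(4, 32))

The point of the exercise is the workflow rather than the code: once a model has been fitted to one person’s likeness, producing new images or audio of that person is just a matter of sampling.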
Implications of deepfake for the workplace
But when it comes to the workplace and the impact of deepfake, “there's definitely a concern,” says Shintre.
An “attacker,” for example, may use stolen ID, passwords and a fake profile to be hired so they can steal data or money from an employer, or use their false identity for government benefits, he says.
“It's usually useful for short cons, scams, or situations where there is time pressure and the person on the other side of the phone call is stressed about something. Their ability to think through these things critically has been impaired because of some other stressors.”
These deepfakes and other kinds of misinformation campaigns prey on human emotions and on people’s tendency to trust by default, says Shintre, and on our willingness “to hear what we want to hear.”
“It's targeted to people who actually want to believe what the misinformation is trying to say… so these are vulnerabilities in our human minds and our society that combine with technology at scale and cause this problem.”
For example, back in 2019, the CEO of a British energy provider transferred €220,000 to a scammer after receiving a call from what sounded like the head of the firm’s German parent company, asking him to wire money to a Hungarian supplier, according to Bloomberg.
Also a challenge for employers? People often choose not to turn on their camera during group calls, and people who are new to a company may not have much to contribute in the first few weeks, says Shintre.
“You're just listening and taking [it all] in, so it's kind of acceptable for you to not say a lot and not necessarily turn on your camera. So, I can definitely see this angle being played,” he says.
“Even if they succeed in doing it for a day, for example, that’s sometimes enough for them to be able to launch a serious attack.”
Combatting the dark side of deepfake through detection, training
Of course, one of the best ways to combat the dark side of deepfake is to conduct in-person interviews. But that’s not always feasible, especially with the huge rise in remote work and a global workforce.
And while background checks are also important, they can be more difficult when it comes to employees working overseas, says Shintre.
“In those kinds of cases, you may not necessarily be able to rely on background check companies, because if you're thinking of an offshore [hire] in Europe, how do you really make sure that the background check companies are able to get all the information?”
Plus, people using deepfake for nefarious purposes are “reasonably sophisticated with technology, so they would have covered all their tracks,” he says.
There is detection technology meant to root out deepfakes by looking for telltale signs such as blank eyes, an eye that doesn’t blink, or a missing earring.
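As a concrete example of one such telltale sign, the sketch below counts blinks on a webcam feed using the classic eye-aspect-ratio check, since an unusually low blink count over a long call can be one weak signal worth flagging. It assumes OpenCV, dlib and SciPy are installed and that dlib’s standard 68-point landmark file (shape_predictor_68_face_landmarks.dat) has been downloaded separately; it is a simplified heuristic, not a production detector, and newer deepfakes often blink convincingly.

import cv2
import dlib
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    # Ratio of eye height to eye width; it drops sharply while the eye is closed.
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

EAR_THRESHOLD = 0.21       # below this, treat the eye as closed
blinks, closed_frames = 0, 0

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        right_eye, left_eye = points[36:42], points[42:48]
        ear = (eye_aspect_ratio(right_eye) + eye_aspect_ratio(left_eye)) / 2.0
        if ear < EAR_THRESHOLD:
            closed_frames += 1
        else:
            if closed_frames >= 2:  # eye stayed closed for a couple of frames
                blinks += 1
            closed_frames = 0
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
print("Blinks observed:", blinks)  # an unusually low count is a red flag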
Deepfake technology is not a perfect tool, says Shintre.
“If you hear the audio effect today, if you listen to it critically, you would be able to tell a difference; it would have some nuances… people would be able to detect it over a long period of time, but… when combined with some kind of haste or some other stressor, that’s when your ability to think critically about the situation is already dampened.”
“The current technologies do have limitations,” says Lyu, and with a webcam, the presence of a deepfake can be easier to detect.
Computers have very high-resolution cameras, so it’s easier to detect odd reflections in people’s eyes or irregularities when they move their hands, he says.
“One of the limitations of current real-time deepfake generation algorithms is that they cannot adapt to fast hand movements or... objects moving... like a hand in front of the face — the quality of the image and the video will be degraded.”
Despite the challenges, employee training can be an important part of any kind of security mechanism, says Shintre.
“Just making people and interviewers aware of, ‘Hey, maybe if the employee doesn't turn on the camera in the interview process, that's a sign.’ So just giving people some idea that an attack like this, an issue like this, is possible. And, ‘Here are the things you need to keep in mind to detect some fishy behaviour.’”
It’s about being aware, paying more attention to people’s voices and videos — and not making hasty decisions, he says.
“If somebody is trying to tell you something that seems too good to be true, it probably is.”
Most concerning? Deepfake technology is only going to get better and better, says Shintre.
“Attackers and defenders always play this game of cat and mouse. So when we improve our defences, the attackers improve their attacks.”