
It’s not easy to find and recruit qualified IT workers these days, let alone meet them. Software developers and other tech professionals are not only in high demand globally; many of them are also used to working remotely. It’s not uncommon for companies to interview, hire, and put them to work without ever meeting them in person. All of that can happen over nothing more than phone calls or Zoom interviews, where the camera may or may not be turned on.
Add to that the growing availability of AI-powered tools designed to help cybercriminals and other malicious actors masquerade as legitimate candidates, and it’s getting increasingly tough for hiring managers to make sure that the people they’re looking to hire are actually real people. “Interviewing has always been hard, but AI has made it so much harder,” said Kyle Hankins, CTO and co-founder of the cybersecurity startup Bytewhisper Security, which has a completely remote workforce.
Many of the challenges related to hiring remote workers date back to the pandemic. A 30-minute or hour-long call just isn’t enough time to figure out whether an applicant can do the work, and it gives a relatively shallow view of who they are as a person, assuming they are a real person, Hankins says.
It became abundantly clear just how bad this problem had gotten a couple of years ago, when it was discovered that North Korean government operatives were posing as non-North Koreans and attempting to get hired into high-paying remote IT jobs to raise money for the North Korean regime.
Google Threat Intelligence Group, along with other researchers, has tracked the activity back to at least 2022. Typically, they say, the North Koreans would create fake workers with fabricated names, resumes, and even personalities in attempts to get them hired at major companies in a variety of industries.
They would also pay non-North Korean people, known as “facilitators,” to do things like launder money and cryptocurrency, receive and monitor company laptops at their homes, or stand in for the North Koreans during video interviews to make it look like someone else was applying.
Some cybersecurity companies and researchers have also reported instances of real-time deepfake videos being used as part of the fraud, where AI technology would transform the North Korean applying for the job into a completely different-looking person.
While the North Koreans’ initial goal was to raise money for their regime, the scope of the operations has expanded in recent months, says John Hultquist, chief analyst for Google Threat Intelligence.
North Korea is now targeting companies outside of the U.S., in places like Europe, and they’re looking to do more than just earn paychecks. They’re also using their privileged access to corporate systems to steal data and launch cyberattacks.
That has executives like Hankins scared. Getting tricked into hiring a North Korean wouldn’t be a good look for a cybersecurity company, but inadvertently giving one access to private company data would be disastrous.
Hankins, who is based in Denver, Colorado, says it’s for those reasons that he usually hires from within a network of people that he or others at Bytewhisper have worked with in the past.
“But if I needed to test a complete stranger right now, I would literally fly them into Denver just because of the potential consequences,” he said.
AI makes a bad problem worse
AI has proven to be a big boost for the fake job applicant scammers.
“They’re very, very comfortable with AI tools as part of their process,” Hultquist notes, adding that AI makes it easier for them to do everything from creating fraudulent identity documents, resumes, and personas to researching their targets.
Most importantly, AI lets them do all of that at a massive scale they weren’t previously capable of, he adds.
And while Hultquist says he has so far seen only anecdotal evidence of deepfake videos being used during video job interviews, others in the cybersecurity industry say they pose a growing danger to hiring companies as the technology becomes more sophisticated and believable.
Vijay Balasubramaniyan, CEO of Pindrop Security Solutions, which specializes in deepfake detection technology, says he’s seen these kinds of deepfakes firsthand, specifically in Pindrop’s own hiring process. The company stumbled upon its first deepfaked job candidate back in February, when someone Pindrop refers to as “Ivan X” applied for a remote engineering job at the company. Ivan was one of more than 800 people to apply for the position through LinkedIn and looked good on paper, so ‘he’ was invited to interview via Zoom.
But that’s when things got weird. Sometimes Ivan’s facial movements didn’t exactly match what he was saying, and there were lags between the video and audio during the interview, Pindrop’s researchers noted. Ivan also didn’t respond well when asked unexpected technical questions, pausing for extended periods as if he were processing information for playback.
All of Pindrop’s interviews are monitored by the company’s security assistant, a real-time deepfake detection bot that joins the Zoom meetings to verify authenticity. It flagged Ivan X’s interview for using a potential face swap, meaning that the person interviewing was using deepfake technology to change the appearance of their face to someone else’s. Further analysis revealed that Ivan X’s IP address was tied to North Korea.
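To make the network-level part of that analysis concrete, here is a minimal sketch of an IP-to-country check of the kind a hiring team might run against an interviewee’s connection. The prefix table and the sample address are illustrative assumptions (the 175.45.176.0/22 block is the range publicly registered to North Korea); this is not Pindrop’s method, and a real check would rely on a maintained geolocation database.

```python
import ipaddress

# Hypothetical prefix-to-country table. The 175.45.176.0/22 block is the range
# publicly registered to North Korea; the other entry is a documentation range
# used here only as a stand-in. A real check would use a maintained
# geolocation/allocation database rather than a hard-coded table.
COUNTRY_PREFIXES = {
    "175.45.176.0/22": "KP",
    "203.0.113.0/24": "US",
}

def country_for_ip(ip: str) -> str | None:
    """Return the country code whose prefix contains the address, if any."""
    addr = ipaddress.ip_address(ip)
    for prefix, country in COUNTRY_PREFIXES.items():
        if addr in ipaddress.ip_network(prefix):
            return country
    return None

# Example: flag an interview whose connection resolves to a sanctioned range.
if country_for_ip("175.45.176.10") == "KP":
    print("Interviewee's connection traces to a North Korean address range")
```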
The incident prompted Pindrop to take a closer look at its own hiring pipeline, Balasubramaniyan says. So, a couple of months later, the company posted a position for a mid-level software engineering role. Within the first five days, it received 541 applications, which is about what it expected.
This time, company officials reviewed each of those applications by hand and compared them to a set of criteria they had developed for spotting potentially fraudulent applications based on what they had learned from the Ivan X incident. Some of those red flags included resumes listing shell companies and LinkedIn profiles created recently with few connections.
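As an illustration of how a checklist like that could be applied consistently across hundreds of applications, the sketch below scores each application against a couple of those red flags. The field names, thresholds, and shell-company list are assumptions made up for the example, not Pindrop’s actual criteria.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical application record; the field names are assumptions for this sketch.
@dataclass
class Application:
    name: str
    employers: list[str]          # companies listed on the resume
    linkedin_created: date        # when the LinkedIn profile was created
    linkedin_connections: int

# Hypothetical set of employers that screeners have already flagged as shell companies.
KNOWN_SHELL_COMPANIES = {"Acme Global Consulting", "NorthStar IT Partners"}

def red_flags(app: Application, today: date) -> list[str]:
    """Return the red flags an application trips, mirroring the kinds of
    criteria described above: shell-company employers and very new LinkedIn
    profiles with few connections. Thresholds are illustrative guesses."""
    flags = []
    if any(employer in KNOWN_SHELL_COMPANIES for employer in app.employers):
        flags.append("resume lists a suspected shell company")
    profile_age_days = (today - app.linkedin_created).days
    if profile_age_days < 90 and app.linkedin_connections < 25:
        flags.append("LinkedIn profile is new and has few connections")
    return flags

if __name__ == "__main__":
    applicant = Application(
        name="Example Applicant",
        employers=["NorthStar IT Partners"],
        linkedin_created=date(2025, 5, 20),
        linkedin_connections=8,
    )
    print(red_flags(applicant, today=date(2025, 7, 1)))
    # -> ['resume lists a suspected shell company',
    #     'LinkedIn profile is new and has few connections']
```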
They were shocked to find that 101, or 19%, of the 541 applications for that job showed signs of potential fraud. The company also re-examined the pool of more than 800 applications for the job Ivan X applied for and found that about 20% of those were also potentially fraudulent.
Balasubramaniyan says he started talking with Pindrop customers who reported that they had inadvertently hired one or more remote IT workers who turned out to be North Korean. In one case, he says, a company told him that they had unintentionally hired the same North Korean for three different IT jobs. The worker even collected bonuses for helping the second and third fake workers get hired.
“It’s crazy,” Balasubramaniyan said, referring to the “rabbit hole” of countless investigations his team has gone down over the past several months. “The stories that we’re hearing from customers are just insane, but that’s the point. Right?”
Fighting back
That’s a lot for companies to deal with, especially if they’ve got hundreds of job candidates to vet. But Google’s Hultquist says many companies are now wise to the threat and are doing a better job of vetting their candidates.
In addition, U.S. law enforcement has started cracking down on the non-North Korean facilitators that allow the schemes to happen, announcing arrests and indictments. They’ve also shut down laptop farms and seized allegedly related financial accounts.
The best thing companies can do is make a point of getting a look at all of their candidates, Hultquist says. In many cases, the fake workers were hired because no one at the company had actually seen them during the hiring process.
While the simple way to do that is by video, it’s always better to meet candidates face to face. That also allows for greater scrutiny of identity documents, Hultquist says. While it’s easy to use AI to alter the face on a driver’s license when it’s being sent to HR as a screenshot, that’s not the case when people have to present it in person.
And verifying an IT worker’s identity needs to be a top priority. “You’re hiring somebody with access to your IT stack or developing your software,” Hultquist said. “That’s an inherently risky job, because you’re potentially giving somebody an opportunity to cause real damage to your organization.”
He noted that the North Koreans’ game plan has evolved to involve more than just working for paychecks to fund their regime. They’re also looking for other ways to monetize their access to company systems and could look to steal or expose sensitive company data for a profit.
Balasubramaniyan says that while it would be nice to fly people to Pindrop’s headquarters and interview them in person, that’s just not realistic when you’re talking about potentially hundreds of job candidates. In addition, he says his company contracts with workers in India. Verifying all of those workers’ identities would require Pindrop officials to travel there constantly.
That’s why he says he thinks the solution lies in technology, though as the head of a deepfake detection company, he admits he’s probably biased.
Balasubramaniyan noted that years ago when credit card fraud became a problem, companies responded by boosting encryption and security. And when spam started bogging down email, people created filters to keep it out.
“So you always are going to need technology,” he said. “Because in a lot of these cases, humans are really, really bad at detecting any of these things.”