Up to a third of enterprises will stop using deepfake detection methods in isolation by 2026, according to new research from Gartner. Such techniques are often used in onboarding processes that require ID verification using facial biometrics, where a photo or video of the user is compared against an official identity document. However, the increasing scale and sophistication of deepfake attacks designed to undermine liveness checks during such ID verification is forcing businesses to consider augmenting facial biometrics with other processes, such as behavioural analytics or device detection.
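To make that verification flow concrete, the sketch below outlines the basic onboarding check the research describes: a selfie is compared against the photo on the identity document, and a liveness score gates the result. Every function name, embedding and threshold here is a hypothetical illustration, not any vendor's actual API.

```python
import math
from dataclasses import dataclass

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two face embeddings (assumed to come from
    an upstream face-recognition model)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

@dataclass
class OnboardingResult:
    face_match: bool       # selfie matches the ID document photo
    liveness_passed: bool  # liveness model judged the selfie genuine

    @property
    def verified(self) -> bool:
        return self.face_match and self.liveness_passed

def verify_onboarding(selfie_emb: list[float], document_emb: list[float],
                      liveness_score: float,
                      match_threshold: float = 0.85,
                      liveness_threshold: float = 0.90) -> OnboardingResult:
    """Thresholds are hypothetical; real systems tune them to risk appetite."""
    similarity = cosine_similarity(selfie_emb, document_emb)
    return OnboardingResult(similarity >= match_threshold,
                            liveness_score >= liveness_threshold)
```

As the Gartner research suggests, a pipeline of this shape is only as trustworthy as its liveness check, which is precisely the component deepfake attacks target.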
Deepfake detection is currently a flourishing area of AI research, with multiple startups dedicated to finding new ways to blunt the adversarial attacks used by synthetic image creation software to spoof ID verification systems. However, few of the enterprises aware of the arms race between deepfake detectors and hackers appear to believe that the former are winning, argued Gartner analyst Akif Khan. “Current standards and testing processes to define and assess presentation attack detection [PAD] mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today,” said Khan.
Enterprises increasingly uneasy about the integrity of deepfake detection
Gartner’s predictions about enterprise unease over the capabilities of current deepfake detection applications were based on hundreds of interviews with ID solution vendors and their end users, Khan told Tech Monitor, with most of the latter found in the financial services sector. Many of these conversations with IT managers, HR professionals and, increasingly, CISOs revealed growing anxiety about the scale and sophistication of deepfake attacks generally, and about the reliability of current ID verification technology for remote onboarding.
“Other clients who were making buying decisions were then questioning [whether] they should be investing in this technology, at this time,” Khan told Tech Monitor. Concern among vendors about the growing sophistication of deepfake attacks also rose throughout 2023. Khan recalls being shown one example of attempted fraud involving six different images of individuals of varying ethnicities and genders, where the only clue that all were deepfakes was the placement of three or four hairs on the forehead of each.
Informal analysis from Gartner indicates that 15% of fraudulent identity presentations involve attempts to undermine deepfake detection. These are usually so-called “presentation attacks”, in which a static deepfake image on a screen is held up to the camera during the ID confirmation process. However, Gartner’s research indicates that “injection attacks”, in which a deepfake image is inserted directly into the ID verification application rather than shown to a physical camera, increased by up to 200% in the first nine months of 2023. Such attacks, says Khan, are “harder to carry out, but also potentially harder to detect, as well.”
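One inexpensive signal defenders can layer against injection attacks is checking whether the capture device reports itself as virtual-camera software, a common injection vector. The sketch below is a minimal illustration under that assumption: the device names are examples rather than an exhaustive or Gartner-endorsed list, and a match would typically trigger escalation, not automatic rejection.

```python
# Known virtual-camera products (illustrative, not exhaustive).
SUSPECT_DEVICE_MARKERS = ("obs virtual", "manycam", "snap camera", "virtual camera")

def camera_looks_virtual(device_name: str) -> bool:
    """Return True if the capture device's reported name matches a
    known virtual-camera product -- a weak but cheap signal, since a
    determined attacker can rename or spoof the device entirely."""
    name = device_name.lower()
    return any(marker in name for marker in SUSPECT_DEVICE_MARKERS)

# A session reporting "OBS Virtual Camera" as its capture source would
# be escalated for additional checks rather than rejected outright.
assert camera_looks_virtual("OBS Virtual Camera")
assert not camera_looks_virtual("FaceTime HD Camera")
```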
Deepfake detection unlikely to be used in isolation in future
Multiple forms of deepfake detection have been proposed in recent years. Most mainstream solutions fall into one of two categories: active liveness detection, in which the user is asked to respond to a physical prompt such as turning their head, and passive liveness detection, which looks for signs of life in a still image, such as the flow of blood in capillaries close to the surface of the skin. Both have their merits, says Khan, though preventing all deepfake attacks from succeeding will probably require pairing facial verification with other techniques like device profiling.
“In that instance, you might then detect an attack, but it may not be because you detected the deepfakes,” advises Khan. Rather, it may be “because you’ve detected something else around that which makes you aware of suspicious behaviour,” he says.
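A minimal sketch of that layered approach, assuming hypothetical signal names and weights, might blend a passive liveness score with device-profiling and behavioural-analytics risk, so that a deepfake convincing enough to fool the liveness model can still be caught by the context around it:

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    passive_liveness: float  # 0..1, higher means "looks alive" (upstream model)
    device_risk: float       # 0..1 from device profiling
    behaviour_risk: float    # 0..1 from behavioural analytics

def layered_decision(s: SessionSignals, reject_threshold: float = 0.5) -> str:
    """Blend independent signals; the weights are illustrative, not tuned."""
    risk = (0.40 * (1.0 - s.passive_liveness)
            + 0.35 * s.device_risk
            + 0.25 * s.behaviour_risk)
    if risk >= reject_threshold:
        return "reject"
    if risk >= reject_threshold / 2:
        return "step-up"  # e.g. demand an active liveness challenge
    return "accept"

# A deepfake that fools the liveness model (scoring 0.95) is still
# rejected when device and behaviour context look anomalous:
print(layered_decision(SessionSignals(0.95, 0.90, 0.80)))  # -> "reject"
print(layered_decision(SessionSignals(0.98, 0.10, 0.05)))  # -> "accept"
```

The point of such a design, per Khan's argument, is that no single detector decides alone: an attack can be stopped without the deepfake itself ever being identified.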