
The cyber pandemic: AI deepfakes and the future of security and identity verification

Injection attacks are now five times more common than presentation attacks, and when combined with AI-generated deepfakes they are nearly impossible to detect. Attackers use deepfake ID documents to fool KYC processes, or inject deepfake photos and videos to spoof facial biometric systems. A prime example is the recent attack that injected an AI deepfake video feed to defraud a Hong Kong company of $25 million. As generative AI spreads, deepfake attacks are rising with it: Onfido reported a 3,000% increase in deepfake attacks last year. The NSA, FBI, and CISA have jointly voiced their concern about the threat, warning that “The increasing availability and efficiency of synthetic media techniques available to less capable malicious cyber actors indicate these types of techniques will likely increase in frequency and sophistication.”

The key to stopping injection attacks is to prevent digitally altered images or documents from being introduced in the first place, and the only practical way to do this is to leverage advanced security technologies such as mobile cryptography. The cryptographic signatures provided by mobile devices, operating systems, and apps are practically impossible to spoof because they are backed by the stringent security practices of Apple and Google. Using mobile cryptography to establish the authenticity of the device, its operating system, and the app it is running is a crucial and decisive measure for stopping injection attacks in their tracks.
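To make the idea concrete, here is a minimal sketch in Python (using the widely available `cryptography` package) of how a backend might confirm that a biometric capture was signed by an app-bound device key before accepting it. The function name, the payload layout, and the server-issued nonce are illustrative assumptions rather than any vendor's API; a real deployment would obtain and validate the device key through a platform attestation service such as Apple's App Attest or Android's Play Integrity.

```python
# Minimal sketch: server-side check that a capture came from a trusted,
# attested device rather than an injected feed. All names and the payload
# layout are hypothetical; provisioning the device public key via a
# platform attestation service is assumed to have happened already.
import hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature


def verify_capture(device_pubkey_pem: bytes,
                   server_nonce: bytes,
                   image_bytes: bytes,
                   signature: bytes) -> bool:
    """Return True only if the device key signed this exact image and nonce."""
    # Binding the capture to a one-time, server-issued nonce means a
    # replayed or pre-generated deepfake cannot reuse an old signature.
    payload = server_nonce + hashlib.sha256(image_bytes).digest()

    public_key = serialization.load_pem_public_key(device_pubkey_pem)
    try:
        public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```

Rejecting anything that fails this kind of check before any facial matching runs is what keeps an injected deepfake from ever reaching the biometric pipeline.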

The presentation attack

In a presentation attack, fake data is presented to a sensor or document scanner with the intent to impersonate an end user and fool a system into granting access. Facial biometric presentation attacks take many forms, using deepfake ID documents, “face-swaps,” and even hyper-realistic masks to impersonate someone. IDV and KYC platforms use presentation attack detection (PAD) to verify the documents and selfies that are presented, but many PAD techniques can be beaten by injection attacks that leverage AI deepfakes.
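To illustrate why the two defenses must be layered, the hypothetical decision logic below treats a failed device-attestation check as an immediate rejection, since a liveness or PAD score computed over an injected feed proves nothing. The signal names and threshold are assumptions for the sketch, not values from any particular IDV product.

```python
from dataclasses import dataclass


@dataclass
class VerificationSignals:
    device_attested: bool   # did the mobile-cryptography check pass?
    pad_score: float        # PAD liveness score: 0.0 = spoof, 1.0 = live
    doc_authentic: bool     # did document authenticity checks pass?


def decide(signals: VerificationSignals, pad_threshold: float = 0.9) -> str:
    # If the capture channel itself is untrusted, PAD results are
    # meaningless: an injected deepfake can still score as "live".
    if not signals.device_attested:
        return "reject: untrusted capture channel (possible injection attack)"
    if signals.pad_score < pad_threshold:
        return "reject: presentation attack suspected"
    if not signals.doc_authentic:
        return "reject: document failed authenticity checks"
    return "accept"
```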

 
