AI deepfakes, or synthetic images, have disrupted the entire digital ecosystem, inflicting monetary and reputational losses on organisations. Malicious actors are using innovative tactics to generate deepfake content, against which old-school verification methods like biometric authentication systems aren’t enough.
For instance, Gartner’s survey predicted that 30% of enterprises will consider identity verification and authentication solutions unreliable as a prevention mechanism against AI deepfakes by 2026. Here is an overview of why relying on traditional methods against ever-growing deepfake vulnerabilities is risky.
Why are AI-generated deepfakes a matter of concern?
With organisations relying on presentation attack detection (PAD) for identity verification, exposure to deepfake attacks is growing. Voicing concern about these emerging attacks, Akif Khan, VP Analyst at Gartner, said, “In the past decade, several inflection points in fields of AI have occurred that allow for the creation of synthetic images.
“These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient. As a result, organisations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake,” Akif added.
What strategies do organisations need to implement to mitigate deepfake threats?
Adding to the concern, Khan said, “Current standards and testing processes to define and assess PAD mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today.” He emphasised the importance of a combination of strategies for preventing deepfake vulnerabilities, suggesting the following methods:
- Combine IAD and image inspection tools: As a first step to prevent AI-powered deepfakes beyond traditional face biometrics, chief information security officers (CISOs) and risk management leaders must choose vendors whose tooling covers both capabilities.
- Define a baseline: Organisations should begin by defining a minimum baseline of controls, working with vendors that have integrated robust technologies. For instance, they should opt for a solution that combines the capabilities of IAD (injection attack detection) and image inspection to monitor, classify and quantify emerging cyber-attacks.
- Integrate additional risk and recognition signals: After defining the strategy and setting the baseline, CISOs and risk management leaders should include additional risk and recognition signals, such as device identification and behavioural analytics. This increases the chances of detecting attacks on their identity verification processes (see the sketch after this list).
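To make the layering concrete, here is a minimal sketch of how signals from IAD, image inspection, device identification and behavioural analytics might feed one verification decision. All signal names, weights and thresholds below are hypothetical illustrations, not any vendor’s API; in practice they would be tuned against labelled attack data.

```python
# Minimal sketch: combining IAD, image-inspection and additional risk
# signals into a single identity-verification decision. Every name,
# weight and threshold here is a hypothetical placeholder.

from dataclasses import dataclass


@dataclass
class VerificationSignals:
    iad_score: float               # injection attack detection: 0 = benign, 1 = attack
    image_inspection_score: float  # deepfake likelihood from image inspection
    device_risk: float             # e.g. unrecognised device or emulator flags
    behaviour_risk: float          # e.g. anomalous typing or navigation patterns


def assess_verification(signals: VerificationSignals,
                        threshold: float = 0.5) -> str:
    """Return 'pass', 'step-up' or 'reject' from a weighted risk score."""
    risk = (0.4 * signals.iad_score
            + 0.3 * signals.image_inspection_score
            + 0.15 * signals.device_risk
            + 0.15 * signals.behaviour_risk)
    if risk >= threshold + 0.3:
        return "reject"    # strong evidence of an injection or deepfake
    if risk >= threshold:
        return "step-up"   # request an additional liveness check
    return "pass"


# Example: a session where both IAD and image inspection raise flags.
print(assess_verification(VerificationSignals(0.6, 0.9, 0.1, 0.3)))  # step-up
```

The point of the weighted combination is the one Gartner’s guidance implies: no single signal is trusted on its own, so a deepfake that evades one detector can still be caught, or at least escalated to a step-up check, by the others.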
Wrapping it up, AI-driven deepfake attacks will continue to increase as AI technology evolves. It is therefore the responsibility of security and risk management leaders to take the necessary steps to mitigate this risk. Incorporating robust technology that adds measures to prevent account takeover by authenticating genuine human presence is the first step.