Evaluating the Risks of Deepfakes in UAE Organisations

According to a survey by Kaspersky, 75% of the employees in the UAE believe that their company can lose money because of deepfakes.

As AI-powered technologies advance rapidly, organisations are becoming vulnerable to deepfake-related scams, losing sensitive data and monetary funds. Cybercriminals are leveraging the capabilities of generative AI imagery for a plethora of fraudulent activities.

For instance, malicious actors can create a fake video of a CEO requesting a wire transfer or authorising a payment. Such a video can enable cybercriminals to steal corporate funds while damaging the organisation's reputation.

To evaluate the growing cyber vulnerabilities related to deepfake images and videos, researchers at Kaspersky surveyed employees in the UAE to understand their concerns.

  • 75% of the employees in the UAE believe that their company can lose money because of deepfakes. 
  • 37% of respondents said that they can identify a deepfake image by differentiating it from a real one.

How easy is it to differentiate deepfakes from real images?

Emphasising the concerns related to deepfake-driven thefts, Dmitry Anikin, Senior Data Scientist at Kaspersky, said, “Even though many employees claimed that they could spot a deepfake, our research showed that only half of them could actually do it. It is quite common for users to overestimate their digital skills; for organisations, this means vulnerabilities in their human firewall and potential cyber risks – to infrastructure, funds, and products.”

What initiatives should organisations take to prevent deepfake-related thefts?

After highlighting the malicious incidents that took place due to deepfake images and videos, Dmitry added, “Continuous monitoring of the Dark web resources provides valuable insights into the deepfake industry, allowing researchers to track the latest trends and activities of threat actors in this space. This monitoring is a critical component of deepfake research which helps to improve our understanding of the evolving threat landscape. Kaspersky’s Digital Footprint Intelligence service includes such monitoring to help its customers stay ahead of the curve when it comes to deepfake-related threats.”

Listed below are a few defence mechanisms against deepfakes that Kaspersky recommends:

  • Check the effectiveness of the cybersecurity practices in your organisation, not only in the form of software but also in the form of developed IT skills.
  • Boost the corporate “human firewall” by ensuring employees understand what deepfakes are, how they work, and the challenges they can pose. Run ongoing awareness and education drives that teach employees how to spot a deepfake.
  • Use good-quality news sources; information illiteracy remains a crucial enabler for the proliferation of deepfakes.
  • Have good protocols like ‘trust but verify’ that encourage a sceptical attitude towards voicemail and videos, helping to avoid many of the most common traps.
  • Be aware of the key characteristics of deepfake videos and look out for them to avoid becoming a victim: jerky movement, shifts in lighting from one frame to the next, shifts in skin tone, strange blinking or no blinking at all, lips poorly synched with speech, digital artefacts in the image, and video intentionally encoded down in quality with poor lighting.
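One of the tell-tale signs listed above, abrupt lighting shifts between frames, can be illustrated with a minimal heuristic. The sketch below is purely illustrative and not part of any Kaspersky tooling: frames are modelled as small grids of grayscale pixel values, and a function name like `flag_lighting_shifts` and the threshold value are assumptions; a real pipeline would decode actual video with a library such as OpenCV.

```python
def mean_brightness(frame):
    """Average pixel intensity of one grayscale frame (0-255 values)."""
    total = sum(sum(row) for row in frame)
    pixels = sum(len(row) for row in frame)
    return total / pixels

def flag_lighting_shifts(frames, threshold=30.0):
    """Return indices of frames whose average brightness jumps sharply
    relative to the previous frame - one visual artefact on the
    checklist above. The threshold is an illustrative assumption."""
    suspicious = []
    prev = None
    for i, frame in enumerate(frames):
        b = mean_brightness(frame)
        if prev is not None and abs(b - prev) > threshold:
            suspicious.append(i)
        prev = b
    return suspicious

# Three tiny 2x2 frames: two with steady lighting, then a sudden jump.
frames = [
    [[100, 100], [100, 100]],
    [[105, 105], [105, 105]],
    [[200, 200], [200, 200]],
]
print(flag_lighting_shifts(frames))  # [2]
```

A heuristic like this would only ever be one weak signal among many; the other artefacts in the list (blink patterns, lip sync, compression artefacts) require far more sophisticated detection, which is why employee awareness remains the first line of defence.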

Summing up, with emerging deepfake vulnerabilities, it is essential for organisations to implement best practices to avoid monetary losses. Employee training, security software, and adopting the practice of ‘trust but verify’ are a few measures that can prevent such AI-imagery-related cyber thefts.