The Deep Deception 


The potential power of deepfakes has focused attention on political disinformation, but the malicious AI-based software, cheap and readily available on the dark web, could be bad news for businesses too

Lifelike renderings of politicians, along with thousands of similar deepfakes posted on the internet in the past two years, have alarmed many observers.

It is not just governments; businesses, too, appear gravely threatened by deepfakes. The technology can be weaponised for fraud: in 2019, criminals used artificial intelligence (AI)-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of $243,000. The CEO of a UK-based energy firm thought he was speaking on the phone with his boss, who asked him to send the funds to a Hungarian supplier; in reality, AI-based software had successfully mimicked the executive’s voice over the phone.

According to cybersecurity group Symantec, as many as three companies fell victim to fraudsters using deepfake audio technology in 2019.

While no data are available on whether cyberattacks using AI rose in 2020, hackers are likely to adopt the technology if it makes attacks more successful or profitable.

It is not far-fetched that a C-suite executive could be impersonated to spread fake news about their company, damaging its performance and share price. Deepfake videos of senior executives could also become the video equivalent of phishing and whaling attacks, designed to trick victims into divulging sensitive corporate and personal details or making direct money transfers to scammers.

Deepfake technology could also create highly realistic pornographic content with the face of an executive grafted onto the body of an actor, which could then be used for extortion, or to drive the executive’s company’s stock price down while the criminals profit from short sales. Conversely, fraudsters could release a video of a CEO announcing a false merger or inflated earnings to pump up the share price.

Deepfakes are getting harder to spot as the technology improves. Attention has focused on political disinformation, but as the algorithms become better, cheaper and readily available on the dark web, hackers are starting to adopt them in commercial settings. Businesses cannot afford to take deepfakes lightly.

Healthcare and financial institutions are the most likely targets of deepfake attacks aimed at stealing data. But experts say any organisation with a vast IT network, such as logistics companies or casinos, is especially vulnerable as well.

While the strategies to defend against deepfake-fuelled fraud and cybercrime are still evolving, deepfakes could cost businesses over a quarter of a billion dollars, according to the market research firm Forrester.

A deepfake is a forgery created by a neural network, a type of “deep” machine-learning model that analyses video footage until it can algorithmically transpose the “skin” of one human face onto the movements of another.
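The classic face-swap architecture behind many deepfakes pairs one shared encoder with a separate decoder per identity; the swap happens when face A’s encoding is decoded by face B’s decoder. Below is a minimal PyTorch sketch of that idea; the layer sizes and 64×64 input resolution are illustrative assumptions, not any specific tool’s architecture.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses any face into a common latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one specific person's face."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training reconstructs each identity through its own decoder;
# the swap decodes person A's expression and pose with person B's decoder.
face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real video frame
swapped = decoder_b(encoder(face_a))   # face B "wearing" A's movements
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```

Because the encoder is shared, it learns identity-independent features such as pose and expression, which is what lets one face convincingly drive another.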

In that case, the hackers used commercial voice-generating software to carry out the attack. Another tactic would be to stitch together audio samples to mimic a person’s voice, which would likely require many hours of recordings; security researchers demonstrated this technique at the Black Hat conference in 2018.

Applying machine-learning technology to spoof voices makes cybercrime easier. In response, the Centre on AI and Robotics at the United Nations Interregional Crime and Justice Research Institute is researching technologies to detect fake videos.

Facebook released its Deepfake Detection Challenge data set to help researchers develop ways to identify deepfakes.

The US Defense Advanced Research Projects Agency (DARPA)’s Media Forensics programme awarded nonprofit research group SRI International contracts for research into the best ways to detect deepfakes automatically. Researchers at the University at Albany also received DARPA funding to study deepfakes. This team found that analysing blinks could be one way to distinguish a deepfake from an unaltered video: public photographs, the raw material for most deepfakes, rarely show celebrities with their eyes closed, so synthesised faces tend to blink unnaturally little.

Most ongoing research aimed at combating the influence of deepfakes has focused on automated detection: using algorithms to discern whether a specific image, audio clip, or video has been substantially modified from an original.

Apart from unnatural blinking patterns, a range of papers have identified telltale signs of deepfakes, such as distortions in facial features, inconsistencies across frames within a video (especially in lighting), incongruities between a subject’s speech and mouth movements, and even the absence of biometric patterns specific to world leaders.
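One of the simplest published signals builds on the eye aspect ratio (EAR): given six landmarks around each eye, the ratio of vertical to horizontal eye openings collapses during a blink, and a video with implausibly few blinks is suspect. Here is a minimal NumPy sketch, assuming per-frame landmark coordinates have already been extracted by a face-landmark model; the 0.2 threshold and the sample series are illustrative.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered around the eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2):
    """Count events where the EAR drops below the threshold from above."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks

# Illustrative per-frame EAR values: one clear blink at the third frame.
ears = [0.31, 0.30, 0.12, 0.29, 0.30, 0.30, 0.30]
print(count_blinks(ears))  # 1
```

A real pipeline would feed `eye_aspect_ratio` with landmarks from a face-mesh model frame by frame, then compare the blink rate against the roughly 15 to 20 blinks per minute typical of humans.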

According to Symantec, which is researching ways to map the provenance of video and audio clips to trace how they travelled online, corporate videos, earnings calls and media appearances, as well as conference keynotes and presentations, would all be useful to fakers looking to build a model of someone’s voice.

Scams using AI are a new challenge for companies, and adapting business processes to account for deepfake threats is crucial. One safeguard is requiring two people to sign off on any money transfer request, as sketched below. Companies should also account for deepfakes in incident response plans and ensure that human resources, public relations, legal and other stakeholders know how to react if a deepfake video goes viral.
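In code, the two-person rule is simply a gate that refuses to release funds until two distinct approvers have signed off. A hypothetical Python sketch follows; the Transfer class and approver names are illustrative, not any specific payment system’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Transfer:
    amount: float
    beneficiary: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str):
        self.approvals.add(approver)

    def execute(self):
        # Two-person rule: a single voice on the phone is never enough.
        if len(self.approvals) < 2:
            raise PermissionError("Two distinct approvers required")
        print(f"Transferring ${self.amount:,.2f} to {self.beneficiary}")

t = Transfer(243_000, "Hungarian supplier")
t.approve("cfo")
t.approve("controller")  # second, independent sign-off
t.execute()
```

Had such a control been in place, the $243,000 voice-spoofing fraud described above would have required deceiving two people through independent channels rather than one.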

Blockchain, a distributed ledger that stores data online without requiring centralised servers, could be a potential fix. It is resilient against a large class of security threats to which centralised data stores are vulnerable.
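The property that matters for media provenance is the tamper-evident hash chain: each record commits to the fingerprint of a media file and to the hash of the previous record, so altering any clip, or any past record, breaks every later link. A minimal Python sketch of that idea follows; this is a toy chain for illustration, not a production blockchain or any vendor’s product.

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceChain:
    """Toy hash chain: each entry commits to a media fingerprint
    and to the previous entry's hash."""
    def __init__(self):
        self.blocks = []

    def register(self, media_bytes: bytes, metadata: dict):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {
            "media_fingerprint": sha256(media_bytes),
            "metadata": metadata,          # e.g. identity, timestamp, GPS
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.blocks.append(record)

    def verify(self, index: int, media_bytes: bytes) -> bool:
        """A re-encoded or doctored file no longer matches its fingerprint."""
        return self.blocks[index]["media_fingerprint"] == sha256(media_bytes)

chain = ProvenanceChain()
original = b"...raw video bytes..."
chain.register(original, {"source": "earnings call", "author": "IR team"})
print(chain.verify(0, original))                 # True
print(chain.verify(0, b"...tampered bytes..."))  # False
```

Registering authentic footage at the moment of capture gives viewers a reference to check a circulating clip against, which is the approach the authentication startups below take.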


Traditional cybersecurity tools designed to keep hackers off corporate networks can’t spot spoofed voices. Cybersecurity companies have recently developed products to detect so-called deepfake recordings. Madrid-based startup Oaro offers companies various tools to authenticate and verify digital identity, compliance, and media. It creates an immutable data trail that allows businesses to authenticate any photo or video. The startup’s mobile application further generates reliable photos and videos embedded with records of user identity, content, timestamp, and global positioning system coordinates.

Dutch startup Sensity provides a visual threat intelligence platform and application programming interface (API) to detect and counter deepfakes, by collecting detailed visual threat intelligence from hundreds of sources across the open and dark web. Its algorithms detect malicious visual media and reveal a comprehensive view of the risks associated with audio-visual content targeting companies, while the AI detectors recognise AI-based media manipulation and synthesis techniques.

Meanwhile, American company ZeroFOX, a provider of AI-powered digital risk protection, has developed video analysis features to detect deepfake videos. In addition to deepfake detection, ZeroFOX has released Deepstar, an open-source toolkit to help research teams and the greater cybersecurity community tackle the new threat posed by deepfakes and improve the accuracy and scale at which these detection capabilities must operate.

As of now, there is no silver bullet technology that reliably detects deepfakes. Detection tools are improving, but so are deepfakes themselves. Real solutions will blend technology, institutional changes, and broad public awareness.