The AI Cyber Attack


AI is known for its processing speed and flexibility, but hackers can now use it to target a broader base. Defensive AI is the solution.

Imagine the following scenario. A questionable email arrives in the inbox of a C-suite executive at a global multinational. Sooner or later the executive recognises it as a cyberattack and presumes it to be an ordinary spear-phishing attempt. But it is an AI-powered spear-phishing email under the command of a malicious AI toolkit, and normal security standards are not going to help.

The AI malware then seeps into the company network, develops a detailed understanding of the executive's everyday communication, and begins to mimic that language. In a large organisation, the chaos would be catastrophic. Although hypothetical, it is a plausible scenario today.

It's a universal truth that for every good there is a bad, so it shouldn't be surprising that Artificial Intelligence (AI) is also being leveraged by bad actors. An AI attack can complete the reconnaissance, prepare the malware, and deliver it to its victims while the human attackers sit back and watch their AI minions at work. Offensive AI is real, and hackers are a step ahead of cybersecurity officials.

A 2020 report by Proofpoint revealed that around 78 per cent of organisations claim to be less susceptible to phishing attempts. That is good news, but it is nowhere near enough against an AI-powered attack.

A Forrester report revealed that 79 per cent of organisations have seen cyber threats become faster in the last five years, and 86 per cent stated that the number of advanced threats has also increased. The research also revealed that 44 per cent of organisations take over three hours to discover a threat. Offensive AI has the capacity to push these numbers further in the wrong direction.

Understanding AI Intruders

In late 2019, an employee of a British energy company received a strange phone request from his boss asking him to wire over $240,000 to an unfamiliar account in Hungary. Although the employee grew suspicious of the request, he did not imagine the caller was a threat actor, let alone AI-driven voice-mimicking software.

While the cybercriminal behind this particular attack remains unknown, applications such as Lyrebird can produce convincing AI voice clones. This is just one of the offensive AI methods leveraged by cybercriminals. The easiest gateway for an offensive threat is the information exposed by a company's mail exchanger (MX) records, which reveal the mail server and the Secure Email Gateway (SEG) the company uses. Once inside the network, breaking through the password wall gets easier too.
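To see why MX records leak useful intelligence, consider this illustrative sketch: public MX hostnames often embed the SEG vendor's domain, so a simple pattern match can fingerprint a company's email defences. The vendor-to-domain mapping below is a small, assumed sample for illustration.

```python
# Illustrative sketch: inferring a company's Secure Email Gateway (SEG)
# from its published MX records. The pattern table is a small assumed
# sample, not an exhaustive or authoritative mapping.

SEG_PATTERNS = {
    "pphosted.com": "Proofpoint",
    "mimecast.com": "Mimecast",
    "mail.protection.outlook.com": "Microsoft Defender for Office 365",
    "barracudanetworks.com": "Barracuda",
}

def identify_seg(mx_hosts):
    """Return the SEG vendors suggested by a list of MX hostnames."""
    vendors = set()
    for host in mx_hosts:
        for pattern, vendor in SEG_PATTERNS.items():
            if host.lower().rstrip(".").endswith(pattern):
                vendors.add(vendor)
    return sorted(vendors)

# MX hostnames like these can be retrieved publicly, e.g. with
# `dig +short MX example.com`.
print(identify_seg(["mxa-00123.gslb.pphosted.com."]))
```

Knowing which SEG sits in front of a target's inbox lets an attacker pre-test a phishing payload against that exact product, which is why MX records are such an easy reconnaissance gateway.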

PassGAN, a password-guessing tool built on generative adversarial networks, was introduced in 2017. Versions of the tool trained on leaked passwords could crack over 25 per cent of LinkedIn passwords. With time and increased access to AI technology, the tool has been upgraded with reinforcement learning, allowing it to learn and adapt automatically during an active cyberattack and improve its password-cracking score.
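The core idea behind such tools, learning the statistics of real passwords so guesses can be ranked by likelihood instead of tried blindly, can be illustrated with a deliberately tiny model. PassGAN itself uses a GAN; the character-bigram sketch below, trained on a made-up "leaked" corpus, is only a stand-in for that idea.

```python
import math
from collections import defaultdict

# Toy sketch of model-guided password guessing: a character-bigram model
# trained on a tiny, made-up leaked-password corpus scores how "human-like"
# a candidate password is. PassGAN itself uses a GAN, not a Markov chain.

LEAK = ["password1", "qwerty123", "letmein", "sunshine", "password123"]

def train(corpus):
    counts = defaultdict(lambda: defaultdict(int))
    for pw in corpus:
        # "^" and "$" mark the start and end of each password
        for a, b in zip("^" + pw, pw + "$"):
            counts[a][b] += 1
    return counts

def log_score(model, candidate):
    """Higher (less negative) = more similar to the leaked corpus."""
    total = 0.0
    for a, b in zip("^" + candidate, candidate + "$"):
        row = model[a]
        # add-one smoothing keeps unseen character pairs finite
        total += math.log((row[b] + 1) / (sum(row.values()) + 100))
    return total

model = train(LEAK)
# A human-style guess outscores a random string, so it would be tried first.
print(log_score(model, "password9") > log_score(model, "zx#Qv!7p"))  # True
```

Ranking guesses this way is what lets model-driven crackers recover a large fraction of real passwords in far fewer attempts than brute force.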

With such dangerous tools at the disposal of bad actors, it is all the more necessary to move from regular passwords to multi-factor or biometric authentication. Researchers from the Stevens Institute of Technology are developing an improved version of their PassGAN project, which they hope will be a better-secured tool. Apart from traditional passwords, CAPTCHA is another defence that has been targeted by offensive AI.
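Multi-factor authentication blunts password cracking because a guessed password alone is no longer enough. The most common second factor is a time-based one-time password (TOTP, RFC 6238), which can be implemented with nothing but the standard library, as this minimal sketch shows.

```python
import hashlib
import hmac
import struct

# Minimal sketch of a time-based one-time password (TOTP, RFC 6238),
# the standard building block behind most multi-factor login prompts.

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    # Counter = number of 30-second intervals since the Unix epoch
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s
print(totp(b"12345678901234567890", 59, digits=8))  # "94287082"
```

Because the code changes every 30 seconds, even a PassGAN-recovered password is useless to an attacker who cannot also produce the current one-time code.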

With ML and facial recognition, hackers deploy social media bots and harvest photos of employees to build a relationship with the target. Once trust is established, it gets easier for them to hack their way in.

Meanwhile, AI malware can evade advanced security measures that are programmed to identify every threat ever recorded. It can spread through the network, blend unsuspiciously into normal business activity, and scan for vulnerabilities, obscuring its presence by constantly changing its file name and other features. Hackers can even rent 'ML ransomware for hire' for greater profit.

The black market on the dark web helps cybercriminals find the most convenient AI-driven tools to make their operations efficient and profitable. AI can learn an organisation's complex systems in such nitty-gritty detail that a traditional cybersecurity system is bound to be defeated. How does a company fight these odds when it is still struggling with regular cyber threats? The most overwhelming concern is identification: how do you find the AI bug when it resembles a normal bug to any cybersecurity system?


Fighting Fire with Fire

With the rise of offensive AI, the emergence of defensive AI was inevitable. 

For a defensive strategy, it is important to create a knowledge base from which the AI can learn what is authentic to the company and what constitutes abnormal activity. With such detailed insight, the AI can take intelligent autonomous action against malicious AI and neutralise the threat before it gets a chance to reach any further into the network.
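The "learn what's normal" idea above can be sketched very simply: build a statistical baseline from past activity and flag anything that deviates too far from it. The traffic figures and the three-sigma threshold below are illustrative assumptions; production systems model many signals at once.

```python
import statistics

# Hypothetical sketch of "learning what's normal": score new activity
# against a baseline of the organisation's past behaviour and flag
# large deviations for autonomous response.

def build_baseline(history):
    """history: e.g. daily outbound-traffic volumes in MB (made-up numbers)."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

normal_days = [102, 98, 110, 95, 105, 99, 101]   # a week of ordinary volumes
baseline = build_baseline(normal_days)
print(is_anomalous(104, baseline))   # ordinary day -> False
print(is_anomalous(900, baseline))   # exfiltration-sized spike -> True
```

The strength of this approach is that it needs no signature of any known attack: an AI intruder that mimics "a normal bug" still has to move data or touch systems, and those actions shift it away from the learned baseline.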

ML algorithms can also work effectively on defence. Highly sophisticated ML systems include a feedback mechanism that can be tuned to the underlying models of previous attacks; the models then begin to favour the more successful strategies and become more efficient and effective.

Additionally, defensive AI solutions can identify threats using ML and repel them. They can sift data, repair broken data threads, and require human permission only for major decisions.
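"Human permission only for major decisions" is essentially a triage policy. The sketch below, with assumed action names and an assumed risk threshold, shows one way to let an AI execute contained, reversible responses on its own while escalating high-impact ones to a person.

```python
# Hypothetical triage policy: the AI acts on its own for contained,
# reversible responses and asks a human only for high-impact decisions.
# Action names and the 0.9 risk threshold are illustrative assumptions.

AUTONOMOUS = {"log", "quarantine_file", "block_sender"}
NEEDS_APPROVAL = {"isolate_host", "revoke_credentials", "shut_down_segment"}

def respond(threat_score: float, action: str) -> str:
    # High-impact actions, or anything during a very high-risk event,
    # always go to a human first.
    if action in NEEDS_APPROVAL or threat_score >= 0.9:
        return f"escalate: human approval required for '{action}'"
    if action in AUTONOMOUS:
        return f"execute: '{action}' taken autonomously"
    return "monitor: no action"

print(respond(0.4, "quarantine_file"))  # executed without waiting
print(respond(0.95, "isolate_host"))    # escalated to a human
```

This split is what buys back time: routine containment happens at machine speed, while the rare disruptive decisions keep a human in the loop.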

British-American AI company Darktrace offers the Darktrace Immune System, an AI technology that can detect, investigate, and respond to sophisticated attacks immediately. The solution has the authority to neutralise a fast-moving threat without waiting for human permission, giving security teams more time to mitigate the risk.

Other available solutions include CrowdStrike's Falcon platform, which leverages cloud-scale AI and anomaly detection to secure endpoints against AI-driven ransomware and other risks. Another cybersecurity company, Gatefy, offers AI and ML that help companies identify and block AI-powered malware, phishing, and spam emails.

Speed is essential for defensive AI as well. When a crypto Trojan is detected, the infected device is quarantined for a couple of hours while the AI determines the threshold and the blocking response, giving humans enough time to strategise post-attack measures. Cybersecurity vendor SafeGuard Cyber utilises Threat Cortex, an AI-driven engine that detects risks across attack surfaces. During an active attack, the tool helps the organisation lock down unauthorised data and restore compromised data to its original state. It can also scour the dark web to learn about attackers and risk events.

Research indicates that every two seconds, Autonomous Response technology responds to a threat somewhere in the world. Leveraging AI, Autonomous Response can mount surgical defensive responses to offensive AI. Also, defining the basics of an AI-powered cybersecurity system, Germany's Federal Office for Information Security (BSI) has created AIC4, the world's first criteria catalogue for AI-based cloud services.

With over 88 per cent of cybersecurity executives anticipating that offensive AI will go mainstream soon, AI has to be marked as the first ally of every cybersecurity system. Through it all, however, trust in the credibility of information is declining.


AI Washing in Play?

While the term offensive AI is stirring up a storm worldwide, some experts ask industry leaders not to throw it around. They claim the technology used by hackers is strictly ML: hackers combine ML with adaptive algorithms and large datasets, and the more data the system is fed, the smarter it gets. Semantics aside, these offensive AI/ML threats are rising in power and require immediate attention.

The Cold AI War

According to the Malwarebytes paper When Artificial Intelligence Goes Awry, the technology is pulling the industry into an unsolicited age of cyberattacks 2.0. When the technology finds a better and more offensive way into company tools and other software solutions, the dark market could see heavy traffic from threat actors. A Darktrace report reveals that over 44 per cent of executives are looking at AI-powered security and 38 per cent are leaning towards autonomous response technology. Experts urge cybersecurity leaders and security analysts to use defensive AI responsibly to end the catastrophic era of offensive AI before it even begins.