Is AI Shifting The Human-In-The-Loop Model In Cybersecurity?

AI technology improves prediction accuracy to the point where an increasing number of activities can be fully automated. But experts say human defenders are the heart of security operations.

Did you know that more than half of large companies receive 10,000 security alerts a day from their threat-monitoring software tools? This rising volume of alerts creates problems for the security operations teams struggling to handle them.

With cyberattacks increasing, artificial intelligence (AI) is well suited to stem the tide. It can find patterns in vast amounts of data, and analyse and correlate distinct characteristics of that data to identify anomalies or potential breaches. This delivers faster and better security insights, more efficient and automated operations, and fewer instances of human error or oversight.

What’s more, advanced AI tools calculate a risk score with each detection and effectively prioritise and triage the threats discovered. In some cases, they can quickly drive automated actions to remediate security issues.
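To make that concrete, here is a minimal sketch of how such a risk score might be computed and used to triage alerts. The `Detection` fields, the multiplicative scoring rule, and the example alerts are all illustrative assumptions, not taken from any vendor's product.

```python
from dataclasses import dataclass

# Hypothetical detection record -- field names and weights are
# illustrative, not drawn from any specific security product.
@dataclass
class Detection:
    name: str
    severity: float           # 0-1: how damaging the technique is
    confidence: float         # 0-1: model certainty it is malicious
    asset_criticality: float  # 0-1: importance of the affected asset

def risk_score(d: Detection) -> float:
    """Combine the signals into one triage score (higher = handle first)."""
    return d.severity * d.confidence * d.asset_criticality

alerts = [
    Detection("Suspicious PowerShell on laptop", 0.7, 0.9, 0.4),
    Detection("Port scan from guest Wi-Fi", 0.3, 0.6, 0.2),
    Detection("Credential dump on domain controller", 0.9, 0.8, 1.0),
]

# Triage: surface the riskiest detections first.
for d in sorted(alerts, key=risk_score, reverse=True):
    print(f"{risk_score(d):.2f}  {d.name}")
```

Multiplying the factors means a confident detection on a low-value asset still ranks below a slightly less confident one on a domain controller, which is roughly the noise-cutting prioritisation the vendors quoted here describe.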

“More advanced AI that reduces noise is highly valued. Of thousands of events you may see in a period, less than 10-15 per cent may be actual real attacks or threats worth remediating. With advanced AI, you can easily address the imbalance sooner to cut out the noise with deeper learning and analyse only the surface that presents a risk or denotes an actual attack on the organisation. However, not all AI solutions have reached this level of effectiveness,” says Taj El-khayat, Managing Director for Growth Markets at Vectra AI.

AI and machine learning (ML) continuously process enormous amounts of data to identify unusual events — an essential function for cybersecurity teams hunting for malicious activity. Given the volume of activity in IT infrastructure, security teams are challenged to find, for example, one abnormal use of a privileged account among thousands or millions of legitimate ones. ML helps connect the dots and surfaces those key events for human teams to decide whether or not to act on, providing significant benefits in user and entity behaviour analytics (UEBA), security information and event management (SIEM), network traffic analysis, entitlements management, and configuration management.
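As an illustration of that privileged-account example, the sketch below trains scikit-learn's IsolationForest on synthetic session data and flags an outlying session. The feature set (hour of day, hosts touched, commands run) and the contamination rate are assumptions for illustration, not a production detection pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per privileged-account session:
# [hour_of_day, hosts_touched, commands_run]
rng = np.random.default_rng(0)
legit = np.column_stack([
    rng.normal(11, 2, 5000),   # sessions cluster around business hours
    rng.poisson(2, 5000),      # a couple of hosts per session
    rng.poisson(20, 5000),     # routine command volume
])

# One odd session: 3 a.m., 40 hosts touched, heavy command volume.
odd = np.array([[3.0, 40, 300]])

model = IsolationForest(contamination=0.001, random_state=0).fit(legit)
print(model.predict(odd))        # expected [-1]: flagged as anomalous
print(model.predict(legit[:3]))  # expected mostly 1: normal sessions pass
```

The model only surfaces the outlier; in the workflow the experts describe, a human analyst still reviews the flagged session and decides whether to act.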

“We see ML as a tool that aids our teams. The technology is not mature enough to decide on anything that could impact the organisation, which covers just about every response in cybersecurity. What it does an amazing job at is giving our teams more brain space to think about the next step in cybersecurity and to respond more quickly and effectively against attacks, as there’s less time spent firefighting the daily small blazes and incidents of concern,” says Brian Chappell, Chief Security Strategist, EMEA & APAC, BeyondTrust.

While AI is good at detection – pointing to issues, spotting new threats, rapidly sharing threat intelligence, and predicting breach risk – it has yet to reach the same maturity in decisioning, that is, working out how to resolve those issues. In other words, AI is not yet a reliable decisioning system to combat an onslaught of threats.

The critical question is: Can AI shift the human-in-the-loop model in cybersecurity?

The answer is an emphatic no. Algorithm-powered systems must partner with humans: machines are faster, but humans are more strategic, and thinking wins the day over speed.

“Systems can take care of an increasing number of attack categories. They also significantly improve response time for incidents, investigations, and day-to-day security operations. AI technology continually improves its prediction accuracy to the point where an increasing number of activities can be fully automated. Nevertheless, human defenders are the heart of security operations. Good tools empower them and help them focus their decision making and efforts, areas where humans excel,” says Yossi Naar, Co-founder and Chief Visionary Officer, Cybereason.

Experts say shifting the human-in-the-loop model – where the human makes the decision and the machine provides decision support and some level of automation – requires an understanding of AI and high-quality judgment about where and how to apply it in specific areas of cybersecurity. Furthermore, using AI demands deep consideration of what the resulting automation will mean for security teams, to ensure it remains effective.
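One minimal sketch of that division of labour is a routing gate: the machine acts alone only on high-confidence, reversible actions, and everything else is escalated to an analyst. The threshold and the action list below are assumptions to be tuned per organisation, not prescribed values.

```python
# Assumed values for illustration -- tune per organisation.
AUTO_THRESHOLD = 0.95
REVERSIBLE_ACTIONS = {"quarantine_file", "block_ip"}

def route(action: str, confidence: float) -> str:
    """Machine acts alone only on high-confidence, reversible cases;
    everything else goes to a human analyst for the final decision."""
    if confidence >= AUTO_THRESHOLD and action in REVERSIBLE_ACTIONS:
        return "auto-remediate"
    return "escalate-to-analyst"

print(route("block_ip", 0.98))                   # auto-remediate
print(route("isolate_domain_controller", 0.98))  # escalate-to-analyst
print(route("block_ip", 0.70))                   # escalate-to-analyst
```

Keeping the automated set small and reversible is what lets the machine absorb routine volume while humans keep the consequential decisions, which is the balance the experts quoted here advocate.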

A shift in the human-in-the-loop model has already occurred with the application of AI to spotting concerning patterns in the complex activity seen in data centre and hybrid environments, augmenting or replacing manual, legacy rule-based approaches, according to El-khayat. This has enabled automatic responses to a growing volume of attacks and heightened awareness of attacker methods.

“We must not think of AI as merely a means to remove humans. It should be viewed as the means to improve upon the human experience — enlightening security professionals, driving efficiencies, and arriving at outcomes with less human effort; all while being amenable to feedback that increases effectiveness over time,” he adds.

Utilising deep learning requires expertise and lots of data, and both are hard to come by, says Naar. And labelling data is difficult. “One of the biggest challenges is that it’s hard for AI to be ‘creative.’ With the threat landscape evolving and many creative criminal minds looking for new ways of executing cyberattacks, these sometimes fall far away from experience, and in these places, you need human defenders that can match wits with attackers,” adds Naar.

The problem with deep learning in any task lies in the training data, and experience shows that even after training, any event requiring action significant to the organisation needs a human to review it and make the final decision. “ML has repeatedly proven that it is fallible, which is usually due to the initial training data being too narrow or badly classified,” says Chappell. “When ML/deep learning fails, it’s often significant and in unpredictable ways.”

This will, undoubtedly, improve over time as data sets are cleaned. “But in cybersecurity, when it is involved in a constantly moving battle, that may leave completely automated responses beyond the capabilities of ML even with deep learning. We are all familiar with the phrase, ‘who watches the watcher,’ but ‘who teaches the teacher’ may become far more important as we move forward,” adds Chappell.

It isn’t a stretch to conclude that nothing in security is foolproof. For now, humans and AI must work in tandem, augmenting each other, as a major defence against cyberattackers.
