Gen AI Becomes the New Weapon of Choice for SOCs

From identifying previously unseen attack vectors and zero-day vulnerabilities in historical data to prepackaged cybersecurity plugins for incident response, Gen AI is proving to be a potent counter to sophisticated cyber threats.

WormGPT, a generative AI tool that recently surfaced on illicit online platforms, gives malicious actors a way to orchestrate highly sophisticated phishing and business email compromise attacks. Although security leaders are concerned about Gen AI’s negative impact, it’s not a losing battle. It’s about fighting fire with fire: a double-edged sword, Gen AI is also strengthening the cybersecurity front.

The same notion echoed at a Black Hat 2023 panel: security experts can evolve alongside Generative AI and stay ahead of threat actors. With its capacity to deliver powerful tools on demand and personalised training, Gen AI has the potential to revolutionise the way security experts equip themselves.

According to a recent survey by Deep Instinct, 69% of respondents have already integrated Generative AI tools into their organisations’ security frameworks. The financial sector is a frontrunner in adopting these tools, with an 80% utilisation rate. Recognising the potential, OpenAI has offered a million-dollar grant programme for cyber defence with Gen AI.

Generative AI offerings include the ability to fine-tune models, build applications through prompt engineering and integrate with prepackaged tools and plugins through APIs. These possibilities open a path for Security Operations Centres (SOCs) to fold generative cybersecurity AI into their workflows, as the sketch below illustrates.
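As a minimal sketch of the prompt-engineering route, the following Python snippet wires a SOC triage assistant to OpenAI’s chat completions API. The model name, system prompt and alert fields are illustrative assumptions, not any vendor’s actual integration.

```python
# Minimal sketch of a prompt-engineered SOC triage assistant.
# Assumes the official openai Python package (v1.x) with an API key in
# the OPENAI_API_KEY environment variable; the model name, system prompt
# and alert fields are illustrative, not a vendor's actual integration.
import json

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a SOC tier-1 triage assistant. Given a security alert as JSON, "
    "summarise it, rate its severity (low/medium/high/critical) and suggest "
    "the next investigative step, in three short bullet points."
)

def triage_alert(alert: dict) -> str:
    """Send one alert to the model and return its triage summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": json.dumps(alert)},
        ],
        temperature=0,  # deterministic output suits a triage workflow
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    alert = {
        "rule": "impossible_travel_login",
        "user": "j.doe",
        "src_ips": ["203.0.113.7", "198.51.100.24"],
        "window_minutes": 11,
    }
    print(triage_alert(alert))
```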

“SOC teams cannot keep up with the deluge of security alerts they must constantly review. Risk analysts need to speed up risk assessments and be more agile and adaptable through increased automation and prepopulation of risk data in context. Generative AI augments existing continuous threat exposure management (CTEM) programs by better aggregating, analysing and prioritising inputs. It also generates realistic scenarios for validation,” said Arun Chandrasekaran, Distinguished VP Analyst at Gartner.

The cyber-power use cases

Consider Palo Alto Networks. The company assesses approximately 750 million fresh telemetry items worldwide, and its AI models detect about 1.5 million novel attacks daily. Combining these insights with existing knowledge, the company thwarts 8.6 billion attacks across its customer network every day. A Gen AI system could use all this information to provide additional context and analysis that would otherwise take SOC analysts days.

Haider Pasha, Chief Security Officer for EMEA & LATAM at Palo Alto Networks, said, “Using an automated chatbot, we have started to see security operations analysts use Gen AI to ask simple questions around the state of security and see detailed responses from the system. These will eventually form a new arena of co-pilot projects across many use cases.”

The technology can build baseline models of normal user behaviour and system activity, then analyse network traffic, user behaviour and system logs to identify anomalies and vulnerabilities that may signal a cyber attack. Moreover, it can help cybersecurity companies identify previously unseen attack vectors and zero-day vulnerabilities through historical data. A simplified version of the baselining step is sketched below.
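Behavioural baselining itself predates Gen AI; as a deliberately simplified stand-in, the sketch below fits an IsolationForest to features extracted from “normal” activity and flags outliers. It assumes scikit-learn and NumPy, and the log features are invented for illustration.

```python
# Rough sketch of behavioural baselining: fit an anomaly detector on
# features extracted from "normal" activity logs, then flag outliers.
# Assumes scikit-learn and NumPy; the feature set is illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for features mined from logs:
# [logins per hour, MB sent out, distinct hosts contacted]
normal_activity = rng.normal(loc=[5, 20, 3], scale=[1, 5, 1], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)  # learn the baseline of normal behaviour

# New observations: one typical, one resembling data exfiltration
new_events = np.array([
    [5, 22, 3],     # looks like the baseline
    [40, 900, 60],  # login burst, heavy egress, host scanning
])
labels = model.predict(new_events)  # +1 = normal, -1 = anomaly
for event, label in zip(new_events, labels):
    print(event, "ANOMALY" if label == -1 else "ok")
```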

Besides analysing linguistic cues and email patterns for probable phishing attempts, training a Gen AI tool to recognise malicious patterns in code and behaviour helps detect evolving malware strains that are often missed. A toy version of the cue-scoring step follows.
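A trained model would learn these linguistic signals statistically rather than from a hand-written list; the hypothetical cue table below merely shows the shape of the scoring step.

```python
# Toy illustration of linguistic-cue scoring for phishing triage.
# The cue patterns, weights and threshold are invented for illustration;
# a real Gen AI system would learn such signals, not hard-code them.
import re

CUES = {
    r"\burgent(ly)?\b": 2,
    r"\bverify your (account|password)\b": 3,
    r"\bwire transfer\b": 3,
    r"\bgift card(s)?\b": 3,
    r"\bclick (here|the link)\b": 2,
    r"\bsuspend(ed)?\b": 2,
}

def phishing_score(email_body: str) -> int:
    """Sum the weights of suspicious phrases found in the email body."""
    text = email_body.lower()
    return sum(w for pattern, w in CUES.items() if re.search(pattern, text))

email = (
    "URGENT: your mailbox will be suspended today. "
    "Click here to verify your account immediately."
)
score = phishing_score(email)
print(f"score={score} ->", "escalate to analyst" if score >= 4 else "low risk")
```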

It also finds a place in incident response. In the event of an attack, the AI tool can analyse the impact and scope, giving cybersecurity teams the insights they need to develop countermeasures and recovery strategies. Companies have also started to explore Gen AI-driven training and simulations, drilling security analysts against realistic scenarios in a controlled environment to strengthen incident response capabilities.

Although Generative AI offers significant cybersecurity potential, its integration brings ethical complexities and hurdles. Because these algorithms derive insights from extensive datasets, ensuring the inclusivity, variety and absence of bias in the training data becomes imperative.

What’s out there already?

MixMode, a generative AI cybersecurity solution provider, recently introduced a new partner program to grant partners access to its real-time threat detection and response platform. Crossword Cybersecurity is also looking for partners to bring the benefits of Gen AI into its products.

Exabeam, a security operations platform, recently announced an expanded partnership with Google Cloud to develop Gen AI models for its cloud-native New-Scale SIEM products. The collaboration is expected to accelerate the design of AI-based security product enhancements.

Initially, Gen AI was developed to address a specific set of use cases, primarily revolving around robotic process automation (RPA). In the past ten months, numerous instances have emerged of Gen AI being deployed as chatbots and query agents, serving as tools for engaging with support staff.

However, Gen AI technology is only as valuable as the data it queries. “Quality data underpins effective AI, which not only sets Palo Alto Networks’ solutions apart but also delivers tangible customer advantages,” Pasha added.

No new technology comes without challenges. Gen AI systems could pose more problems in the long term, but that does not mean their massive benefits should be ignored. It is crucial to ensure that Gen AI systems consistently adapt to evolving attacker strategies and stay ahead of threats.

Top three challenges of Gen AI in cybersecurity

– Arun Chandrasekaran, Distinguished VP Analyst at Gartner

  • The cybersecurity industry is already plagued with false positives. Early examples of “hallucinations” and inaccurate responses will cause organisations to be cautious about adoption or limit the scope of their usage.
  • Best practices and tooling to implement responsible AI, privacy, trust, security and safety for Generative AI applications do not yet exist.
  • Privacy and intellectual property concerns could prevent the sharing and use of business- and threat-related data, reducing the accuracy of generative cybersecurity AI outputs.