Application Security Leaders Call AI Coding Tools ‘Risky’

The report reveals C-suite is 2x to 5x less likely than AppSec leaders to acknowledge AI-related security concerns.

GenAI has rapidly matured into an efficient coding assistant. Yet even as adoption becomes widespread, the software industry retains serious security concerns.

Snyk, the developer security firm, released AI readiness report findings, “Secure Adoption in The GenAI Era.” While many global enterprises have already adopted genAI code generation tools to speed application development, these report findings indicate that, in many cases, adoption best practices have been hurried or ignored in the quest to join the genAI race as soon as possible. The findings also show a clear perception gap related to associated security concerns as a result of genAI code creation, with C-Suite leaders displaying more eagerness and confidence than application security (AppSec) leaders and, in some cases, even developers.

Specifically, the report found:

  • Only 20% of organisations ran a proof of concept (POC) before introducing AI coding tools, even though 58% cited security as their biggest barrier to adoption;
  • Fewer than half (44%) of organisations provided their developers with AI-coding tool training; and,
  • CTOs and CISOs were 5x more likely than developers to believe AI coding tools pose no risk and 2x more likely than developers to believe they are “extremely ready” to adopt AI coding tools.

“The genAI Era has arrived, and there is no ‘putting the genie back in the bottle.’ We believe it’s now incumbent on the cybersecurity industry to recommend clear guidelines that will allow all of us to benefit from this increased productivity without the associated security sacrifices,” said Danny Allan, Chief Technology Officer, Snyk. “This latest research also clearly shows that scaling AI-coding tools must be a collaborative effort. CTOs should aim to work side by side and trust their DevSecOps team leaders so that together we can safely reap the full benefits of genAI over the long-term.”

Closer to the Code = Heightened Concerns

The security of AI-generated code was not a major concern for most organisations surveyed. Almost two-thirds (63.3%) of respondents rated it either “excellent” or “good,” while only 5.9% rated it “bad.” A deeper look at the numbers, however, reveals that those “closer to the code” do not share their colleagues’ confidence.

Nearly four in ten AppSec personnel (38.3%) said AI coding tools were “very risky.” AppSec respondents also took issue with their organisations’ security policies concerning AI coding tools: almost a third (30.1%) of AppSec team members said their organisation’s AI security policies were insufficient, compared to 11% of C-suite respondents and 19% of developers/engineers.

Nearly 1 in 5 C-suite respondents (19%) said AI coding tools weren’t “risky at all,” while only 4.1% of AppSec respondents agreed.

Best Practices Key As GenAI Adoption Continues to Soar

The data shows that top technology decision-makers — CISOs and CTOs — believe their organisations are already ready for AI-coding tools. In fact, 32% of C-Suite respondents described the rapid adoption of AI coding tools as critical — twice as many as AppSec respondents.

This means that, regardless of AppSec and developer concerns, further adoption of these tools is on the way. As they continue down this path, these organisations should urgently put in place the security measures that will allow them to scale adoption of these tools safely.

Immediate recommended actions include:

  1. Establish a formal POC process for the adoption of all new AI technology;
  2. Value and prioritise AppSec team feedback regarding genAI security concerns;
  3. Document and audit all instances of AI code generation tools;
  4. Invest in security technology that provides “AI guardrails” for the adoption of AI-assisted tools over the long term; and,
  5. Enhance and continue to augment company-wide AI training.