State of Security 2026: Application Security

proudly sponsored by
Sumo Logic
Saviynt
Virtual IT Group
Coreview
Brennan
Rubrik
Checkmarx

For decades, application security has provided the necessary friction that prevents coding errors from escalating into systemic risk. But the growing use of AI for accelerating software development is threatening to outpace the controls that organisations traditionally rely on.

The pressure on application security controls is a direct result of software development being one of the areas where AI's impact has been both most visible and most immediate. How resilient these controls prove to be will depend greatly on the ability of appsec specialists to adapt to this fast-changing world.

While the numbers vary between studies, the percentage of code created with AI assistance is growing rapidly. A 2025 Stack Overflow Developer Survey found 84 percent of respondents were using or planning to use AI tools in their development workflow, with 52 percent reporting it had a positive impact on their productivity. This was despite 46 percent not trusting the accuracy of AI tools, suggesting a gap between increased productivity and confidence in output quality.

This naturally creates a challenge for traditional appsec models, which were designed for environments where code was written by humans at a pace that allowed for staged review, testing, and remediation.

The growing role of AI in code generation, however, raises risks from hallucinated libraries, credential leakage, gaps in reproducibility and auditability, and the creation of insecure dependencies at scale.

While in recent years appsec has striven to strengthen practices by shifting security processes earlier into the development cycle (an approach often referred to as 'shift left'), the requirement today is for appsec to become a continuous, integrated capability that matches the pace of development.

A key component of this shift is tooling, and it is not surprising to see that AI is offering solutions to its own problems by accelerating changes in the application security market.

According to Grand View Research, the global AI application security market was valued at US$4.47 billion in 2024 and is estimated to grow rapidly at a compound annual growth rate of 21.6 percent to 2030.

This covers a broad range of tools, including AI-enhanced static application security testing (SAST), which analyses source code with increasing contextual awareness, and AI-enhanced dynamic application security testing (DAST), which tests running applications using adaptive, behaviour-driven approaches.

The rapidly increasing use of open-source code within applications is also leading to a rise in the use of AI-driven software composition analysis (SCA) to track dependencies and supply chain risk.

There is also a rapidly emerging requirement for tools that protect AI applications themselves, with Gartner finding that by 2028, at least 50 percent of organisations that operated public-facing AI-enabled applications would use AI application security capabilities to protect them.

Gartner's recommendation is to invest in solutions that support multiple methods of discovery and can distinguish between first-party and third-party AI applications. For testing, it recommends favouring tools that provide continuous, adaptive testing and flexible retesting, and that deliver a superior developer experience.

When it comes to the future of appsec, the challenge is less about whether organisations can secure their applications, and more about whether they can do so at the speed at which they are now being created.

Case study: Monash University

Application security teams are facing a widening gap between vulnerability detection and remediation, driven by the rapidly growing use of AI in both software development and vulnerability scanning, according to Monash University's application security lead Luke Bampton.

Bampton and his colleagues support more than 40 development teams across Monash, which in recent years has grown into a global higher education and research organisation with 98,000 students and more than 20,000 staff.

“With our vision to be global leaders in higher education comes a matching digital footprint, which includes half a million IP addresses,” Bampton said.

“When you combine that with the innovation and collaboration appetite that is unique to higher education, cyber threats simply become a way of life.”

The university’s developer community presents its own challenges, with skill levels ranging from experienced engineers to undergraduate students. Rather than enforcing a single set of tools, Bampton focused on ensuring consistent security outcomes.

“They want to do the right thing, and application security is the enabler that helps them do that properly,” he said.

That approach is under increasing pressure as developers accelerate their work using AI tools. While Bampton said AI had improved the speed of identifying vulnerabilities, organisations were struggling to translate that into faster remediation.

“We are dealing with pre-AI ways of working that weren’t designed to fix problems in production ‘yesterday’,” he said.

“It has never been faster to identify vulnerabilities, but there is still a lag in the time to remediate.”

Recent developments such as Anthropic’s Mythos had further highlighted this shift, demonstrating how generative AI could move upstream into application security. Bampton said the next step would be trusted AI-driven remediation.

“It will only be a matter of time before AI is able to assess, review, and secure code,” he said.

“But for now, we are living in a world where AI is a force multiplier for delivering digital products, while the security aspects are still catching up.”

Despite these changes, Bampton said the fundamentals of application security remained unchanged, with communication and relationships playing a central role.

“That starts with a conversation,” he said.

“You are not going to be able to help people if they don’t know who you are or don’t feel that you are approachable. It’s about taking what is traditionally a technical problem and turning it into a marketing and awareness challenge.”

Communication was also a critical aspect when educating developers about the growing risks from software supply chains, particularly through open-source dependencies.

“Supply chain attacks are through the roof, and developer credentials are under significant threat,” he said.

“In the age of AI, you still need vulnerability scanning and mechanisms to investigate third-party libraries.”

At the same time, Bampton warned against over-reliance on AI tools, noting the risk of developers losing critical security skills through “cognitive offloading”. Education, he said, remained a core priority.

Bampton said the university’s application security strategy was also evolving in response to the commissioning of its MAVERIC AI supercomputer. He said Monash had deployed a small fleet of dedicated AI development machines in addition to the supercomputer to further support researchers with local, controlled access to advanced compute resources.

“As an application security practitioner, a lot of my work is shifting towards AI and non-deterministic ways of working,” Bampton said.

“I am optimistic about our ability to guide developers on how to use this technology responsibly.”

“But the fundamentals still hold. At the end of the day we want secure code, functional code, and robust solutions that scale.”

