AI-generated code surges as governance lags

Michael Hill | 08/18/2025

Artificial intelligence (AI)-generated code is becoming mainstream, but AI governance is lagging behind. Organizations are generating up to 60 percent of their code with AI coding assistants, even though 20 percent still forbid them. That’s according to a new report from Checkmarx.

The security company surveyed more than 1,500 security leaders, AppSec managers and developers across North America, Europe and Asia Pacific to understand how organizations are adapting to a world where software is increasingly written by machines.

The findings paint a stark picture. Half of respondents already use AI coding assistants, and 34 percent admit that more than 60 percent of their code is AI-generated. However, only 18 percent have policies governing this use.

Separate research from Clutch found that over half (53 percent) of developers think AI large language models (LLMs) can already code better than most people.

Businesses knowingly ship vulnerable code

Business pressure is normalizing risky practices, the study found. In fact, 81 percent of organizations knowingly ship vulnerable code, and 98 percent have experienced a breach stemming from vulnerable code in the past year, a sharp rise from 91 percent in 2024.

Within the next 12 to 18 months, nearly a third (32 percent) of respondents expect application programming interface (API) breaches via shadow APIs or business logic attacks. Despite this, fewer than half of respondents report deploying mature application security tools such as dynamic application security testing (DAST) or infrastructure-as-code scanning.

“The velocity of AI-assisted development means security can no longer be a bolt-on practice. It has to be embedded from code to cloud,” said Eran Kinsbruner, VP of portfolio marketing at Checkmarx. “Our research shows that developers are already letting AI write much of their code, yet most organizations lack governance around these tools. Combine that with the fact that 81 percent knowingly ship vulnerable code and you have a perfect storm. It’s only a matter of time before a crisis is at hand.”

6 key elements of AI governance

The report outlines six strategic imperatives for closing the application security readiness gap:

  1. Move from awareness to action.
  2. Embed “code‑to‑cloud” security.
  3. Govern AI use in development.
  4. Operationalize security tools.
  5. Prepare for agentic AI in AppSec.
  6. Cultivate a culture of developer empowerment.

“To stay ahead, organizations must operationalize security tooling that is focused on prevention,” Kinsbruner added. “They need to establish policies for AI usage and invest in agentic AI that can automatically analyze and fix issues in real time. AI-generated code will continue to proliferate; secure software will be the competitive differentiator in the coming years.”

With AI now writing much of the code base, security leaders face heightened accountability. Boards and regulators rightly expect them to implement robust governance for AI-generated code and to ensure vulnerable software isn’t being pushed into production.
