Industry Commentary

AI Is Writing Code Faster Than You Can Ship It

Three major reports converge on the same finding: AI coding tools have supercharged code production, but testing, reviewing, and deploying that code haven't kept pace. The bottleneck has moved.


90% of developers now use AI coding tools. Teams are generating pull requests at volumes that would have been unthinkable two years ago. And yet, deployment instability is climbing. Incident rates are up. Engineers report spending more than a third of their time on manual tasks like chasing approvals and rerunning failed jobs.

Three reports released in the past few weeks — the 2025 DORA State of AI-assisted Software Development report, Harness's State of DevOps Modernization 2026, and SonarSource's 2026 State of Code Developer Survey — all point to the same conclusion: the bottleneck in software delivery has shifted from writing code to everything that happens after.

The numbers

The Harness report, drawing on responses from 700 engineers and technical managers across five countries, throws the picture into sharp relief. 45% of engineers who use AI coding tools multiple times a day deploy to production daily or faster. That sounds good until you see the other side: 69% of those same heavy users say they "always" or "frequently" experience deployment issues when AI-generated code is involved.

SonarSource's survey of 1,100 developers found that PR volumes have increased by 320% and deployment frequency by 280% — but incident rates have climbed 190%. Operations team headcount hasn't changed. The work of creation has been automated. The work of verification has not.

96% of developers surveyed said they do not fully trust the functional accuracy of AI-generated code.

What DORA found

The DORA report frames this differently but arrives at the same place. Their central finding is that AI acts as an amplifier of existing engineering conditions. Teams with loosely coupled architectures, strong automated testing, and fast feedback loops see real gains. Teams without those foundations see instability.

This is a reversal from last year's DORA findings, where AI adoption showed no clear link to delivery throughput. The difference in 2025 is that enough organisations have now restructured around AI-assisted workflows to produce measurable signal. But the signal cuts both ways — the gap between mature and immature engineering organisations is widening.

73% of engineering leaders in the Harness survey said hardly any of their teams have standardised service templates or golden paths. That means most organisations are running AI-accelerated development through ad hoc pipelines, with manual gates and inconsistent testing.

Where the debt accumulates

When you write code faster but review, test, and deploy at the old pace, you get a queue. That queue has a cost. Pull requests sit longer. Reviews become cursory because there are too many to do properly. Defects ship to production because testing coverage was designed for human-speed output, not AI-speed output.

SonarSource found that AI-generated code introduces significantly more privilege escalation paths and design flaws than human-written code — but it often passes review because it looks correct. The surface quality is high. The structural quality is not.

This is a familiar pattern in engineering. When you optimise one part of a system without optimising the parts it feeds into, you don't get faster throughput. You get a bigger pile of work-in-progress and more failure modes at the handoff points.
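The arithmetic behind that pile-up is easy to sketch. The numbers below are illustrative only (not drawn from any of the reports): when pull requests arrive faster than a team can properly review them, the backlog grows linearly and never drains on its own.

```python
def review_backlog(arrival_rate: float, review_rate: float, days: int) -> list[float]:
    """Track a review queue when PRs arrive at one rate and are reviewed at another.

    arrival_rate: PRs opened per day
    review_rate:  PRs the team can properly review per day
    """
    backlog = 0.0
    history = []
    for _ in range(days):
        # Each day the queue grows by the excess of arrivals over capacity.
        backlog = max(0.0, backlog + arrival_rate - review_rate)
        history.append(backlog)
    return history

# Illustrative: tripling PR volume against fixed review capacity.
steady = review_backlog(arrival_rate=10, review_rate=10, days=20)
flood = review_backlog(arrival_rate=30, review_rate=10, days=20)
print(steady[-1])  # 0.0   — throughput matches capacity, the queue stays empty
print(flood[-1])   # 400.0 — 20 excess PRs/day accumulate for 20 days
```

The uncomfortable part is the second scenario's only exits: cut arrivals, raise review capacity, or let reviews become cursory — which is exactly the failure mode the surveys describe.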

What actually helps

The DORA data is clear on what works: platform engineering. Organisations with mature internal platforms — standardised CI/CD, self-service infrastructure, automated security scanning — convert AI productivity gains into actual delivery improvements. 90% of organisations surveyed have adopted at least one internal platform, and the quality of that platform directly correlates with AI effectiveness.
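What a "standardised gate" means in practice can be sketched in a few lines. This is a hypothetical example — the names, fields, and thresholds are invented for illustration, not taken from any report or tool — of the kind of uniform merge check a platform applies automatically so that verification scales with PR volume instead of depending on manual review alone.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    tests_passed: bool    # did the automated test suite pass?
    coverage: float       # fraction of changed lines covered by tests
    scan_findings: int    # open findings from automated security scanning

def can_merge(pr: PullRequest, min_coverage: float = 0.8) -> bool:
    """A hypothetical platform-level merge gate: every PR clears the same
    automated bar before a human reviewer ever looks at it."""
    return pr.tests_passed and pr.coverage >= min_coverage and pr.scan_findings == 0

print(can_merge(PullRequest(tests_passed=True, coverage=0.9, scan_findings=0)))  # True
print(can_merge(PullRequest(tests_passed=True, coverage=0.9, scan_findings=2)))  # False
```

The point is not these particular checks — it's that they run on every PR at machine speed, which is what lets AI-scale output through without AI-scale incidents.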

The practical implication is straightforward. If your team is adopting AI coding tools but hasn't invested in automated testing, deployment pipelines, and review tooling at the same rate, you're building pressure, not velocity. The code is being written. The question is whether your systems can absorb it.

The bottleneck has moved. It's time the investment followed.

Want to discuss this?

We write about what we're actually working on. If this is relevant to something you're building, we'd love to hear about it.