27.03.2026
Reading time: 5 minutes

AI-assisted coding through three lenses: enterprise, academia, and the boardroom

We recently hosted a webinar on the value of AI in software engineering, and if you missed it, these are the takeaways worth your time.

What made the conversation worth having was who was in it. Alibek Datbayev manages engineering at Booking.com, one of the world’s largest travel platforms, where decisions about AI-assisted coding play out at genuine scale, across real teams, with real consequences. 

Professor Hakan Erdogmus teaches software engineering at Carnegie Mellon University and is watching up close how the next generation of developers is being shaped by these tools, for better and for worse.

Luc Brandts, CEO at Software Improvement Group (SIG), brought the view from the boardroom: what leaders need to see, measure, and govern as AI moves from experiment to part of the process.

The conversation covered a lot of ground. What held it together was a shared premise: AI is changing how software gets built — whether your teams are using a coding assistant, a copilot, or a fully agentic system that plans and executes with minimal human input. But the organizations actually getting value from it are not necessarily the ones moving fastest. They are the ones that know where AI is being used, what it is producing, and who is responsible for the outcome.

AI-assisted coding starts before the code: why requirements are the real bottleneck

Alibek pushed the conversation away from code generation and toward the work that comes before it. Most teams focus on the output: faster code, more features, shorter sprints. His argument was that the quality of that output depends almost entirely on what goes in.

His practical suggestion: use an AI agent in product manager mode. Give it instructions to interrogate your feature idea from every angle, push for precision, ask the questions a good PM would ask before a line of code gets written. Then use that output as the specification you hand to a coding agent. In his words, spend “99% of your time” defining the requirements and then see what the coding agent produces.
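As a rough illustration of what that "product manager mode" setup could look like in practice, here is a minimal sketch. The instruction wording and the helper function are hypothetical, not something shared in the webinar.

```python
# Hypothetical sketch of a "product manager mode" instruction block.
# The wording below is illustrative, not a prompt from the webinar.

def build_pm_prompt(feature_idea: str) -> str:
    """Wrap a raw feature idea in PM-style interrogation instructions."""
    return (
        "Act as a rigorous product manager. Do not write any code.\n"
        "Interrogate the feature idea below from every angle:\n"
        "- Who is the user, and what problem does this solve?\n"
        "- What are the acceptance criteria and edge cases?\n"
        "- What is explicitly out of scope?\n"
        "Push for precision, then produce a specification a coding "
        "agent could implement without guessing.\n\n"
        f"Feature idea: {feature_idea}"
    )

# The resulting text is what you would hand to the coding agent as its spec.
spec_request = build_pm_prompt("Let users export their trip history as a PDF")
print(spec_request)
```

The point is not the exact wording but the separation of roles: one pass that interrogates the idea, and a second pass that implements the resulting specification.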

Whether you are using GitHub Copilot, Cursor, or a fully agentic pipeline, the same principle applies. The quality of what comes out is only as good as the clarity of what went in. As code becomes easier to generate, getting the thinking right upfront matters more, not less.

What a Carnegie Mellon classroom revealed about AI-assisted coding risks

Professor Hakan brought the most direct warning of the session. His concern is not whether AI speeds people up; it does. It is whether teams still understand enough to use it well. He put it simply: “they have to know what they are doing to be able to use it effectively”.

He illustrated this with a teaching exercise. He gave students a non-trivial application to build with AI, working from a detailed specification he had written and tested himself. They finished in 20 minutes and were proud of it. When he asked whether they could have written the specification themselves, the answer was no. The AI did the building. The expert did the thinking that made the building possible.

He also shared data that sits awkwardly with the productivity narrative: since introducing AI tools, effort on a semester-long project has gone up by about 30%. 

In his words: “initially we know that in the first couple of iterations they’re going faster but then they can’t integrate anything and the third iteration is integration hell and AI is not helping them that much in trying to solve that — they run in basically circles at that point.”

Cognitive debt: the hidden cost of AI-assisted coding at scale

Hakan introduced a term from a recent blog post by Peggy Storey, a professor at the University of Victoria: cognitive debt, the gradual erosion of engineering thinking that happens when people hand more and more of that thinking to AI.

Alibek recognized the pattern from industry. His concern was about what engineers lose over time: “we will start losing the essence of the software engineering as craft holders — and what will happen in a few years of time and how we go about this.”

Professor Hakan offered a telling analogy: relying on GPS navigation every day quietly erodes your ability to find your way without it. That is a minor inconvenience on a Saturday. In a field where reasoning through a system, understanding its failure modes, and making sound trade-offs is the core of the work, that kind of drift is harder to dismiss.

AI-assisted coding and security: why rigor matters more

Luc made the security point plainly: developers are not automatically security specialists, and AI-assisted coding does not change that. What it does change is the speed and scale at which vulnerabilities can spread. 

“You need tooling in place and you need rigor in place. You can’t rely on the code being secure.”

Package hallucination is one concrete example: AI coding assistants referencing non-existent or compromised libraries that are easy to miss without deliberate tooling. But the broader pattern holds across the codebase.
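As a minimal illustration of the kind of deliberate tooling that catches this, here is a sketch that checks declared dependencies against an approved set. In practice you would query a package registry or a software composition analysis tool; the allowlist and the package names below are invented for the example.

```python
import re

# Hypothetical approved set; a real pipeline would pull this from a
# registry or an internal artifact repository, not hard-code it.
APPROVED = {"requests", "numpy", "flask"}

def flag_unknown_dependencies(requirements: list[str]) -> list[str]:
    """Return dependency names that are not on the approved list."""
    flagged = []
    for line in requirements:
        # Strip version specifiers like "requests==2.31.0" down to the name.
        name = re.split(r"[=<>!~\[; ]", line.strip(), maxsplit=1)[0].lower()
        if name and name not in APPROVED:
            flagged.append(name)
    return flagged

# "flasck" stands in for a plausible-looking hallucinated package name.
deps = ["requests==2.31.0", "flasck>=2.0", "numpy"]
print(flag_unknown_dependencies(deps))  # ['flasck']
```

A check this simple already turns a silent supply-chain risk into a build failure, which is the rigor Luc is pointing at: do not rely on the generated code being right.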

Research shows that more than 50% of AI-generated code contains security vulnerabilities, and SIG’s own experiments echo that finding: on average, AI-generated code showed double the security risk violations compared to code written by humans. 

Automation accelerates whatever is already happening. If security discipline is weak before AI-assisted coding is introduced, it will be weaker after.

Keeping humans in the loop and what happens when AI moves faster than your review process

Alibek’s answer to the governance question was perhaps the most practical of the session: “as long as we control the definition of what good looks like we are okay.”

Humans stay responsible for the judgments that code review depends on: what quality means, what the architecture should do, and what risks are acceptable.

Luc put the organizational version of that plainly: “You need to understand your process, fix the first problem first, then scale.”

For some organizations that first problem is code quality. For others it is that the specifications coming from the business are too vague for any tool, AI or otherwise, to act on usefully.

However, that principle gets harder to hold as agentic systems move faster. When an agent is planning, generating, testing, and modifying code in rapid loops, the window for human review gets smaller. Keeping meaningful oversight in place at that pace is a real design challenge, one we explore in more depth in a separate piece on how agentic AI works best with humans in the loop, and a topic we will return to in our upcoming podcast episode.

The governance gap is measurable: only 29% of organizations have formal oversight or processes to assess AI-generated code, leaving the majority without a systematic way to catch what their tools are actually producing.

The gap between AI ambition and the governance needed to make it safe is not closing on its own.

The value of AI-assisted coding is real — here is what determines whether you capture it

The organizations getting value from AI-assisted coding share a few things. They know where AI is being used. They have defined what good output looks like. And they have kept the people who can make those judgments actively involved — not just at the end of the process, but throughout it.

More output is not the same as more value. With AI-assisted coding, that gap can open faster than most teams expect. 

The good news is that closing it is not difficult: start with visibility, fix the right problem first, and resist the temptation to treat speed as proof.

If this article made you wish you had attended the webinar, good news. You can watch the full recording here: The value of AI in software engineering.

And if you are not sure whether AI-assisted coding is moving your codebase in the right direction, we can help you find out.

About the author

Werner Heijstek

Werner Heijstek is the Senior Director at Software Improvement Group and host of the SIGNAL podcast, a monthly show where we turn complex IT topics into business clarity.
