20.02.2026
Reading time: 5 minutes

AI and private equity: how AI is changing software due diligence


“One of the most important questions that investors these days have is: what am I buying?” — Erik Oltmans, Partner, EY-Parthenon.

That line comes from the SIGNAL podcast episode where I had a conversation with Erik Oltmans about AI code liabilities and PE investments, and it could easily be the headline for the entire episode.

However, while we primarily spoke about AI, the question itself is, of course, pretty timeless.

When you acquire a company, you’re not just buying a brand. You’re acquiring a complex software portfolio. That means you’re also taking on years of architectural decisions and the technical debt left by past engineering shortcuts, none of which typically shows up in the Confidential Information Memorandum (CIM) of your target company.

As we discussed in another article on software quality and valuation, a lack of clarity about the state of software can lead investors to overpay, inherit hidden costs and security issues, and miss opportunities to grow value after the acquisition.

And in the last few years, AI has very rapidly raised the stakes and is changing how investors need to look at software.

Why?

Code that used to take hours now appears in seconds. Entire features and components can be scaffolded from a prompt. The kind of effort that once signalled a strong technical moat can, at least for some functionality, be compressed — or even replicated — by a competitor with the right tools and domain knowledge.

As an investor, you’re now left with a different set of questions:

  • What am I buying if AI wrote (part of) this system?
  • How easily could someone else rebuild what I plan to buy?

In this SIGNAL episode, Erik Oltmans, who leads product and tech due diligence at EY-Parthenon, dives into how AI is reshaping software investment.

Together we unpacked what changes when “who wrote the code” is no longer obvious, why systems of record and governance matter, and what proof investors should look for in the AI era.

AI is everywhere, but especially in the conversation

“I think AI is quite dominant now in all the conversations that are there.” — Erik Oltmans, SIGNAL podcast (4:04).

AI shows up in almost every deal conversation Erik has today.

It’s not that every product he sees is secretly written by AI; far from it.

In our AI ambition and reality in Private Equity 2026 report, we see that AI-assisted coding is common, but adoption is uneven, and governance often lags. Only a minority of organisations can say with confidence where AI-generated code sits in their systems and how it is controlled.

So, when an investor asks, “What exactly am I buying?”, they’re really asking a layered question:

  • How much of this product was generated or accelerated by AI?
  • What risk does that introduce in terms of security, maintainability, and compliance?

Those are not yet standard questions in most IT due diligence (ITDD) engagements. They should be.

AI-generated code typically has more security findings

There was a point in the episode where we moved from theory to something very concrete: security.

I mentioned that when you generate a lot of code quickly, you run the risk of generating a lot of vulnerabilities as well. Erik agreed, and his explanation was disarmingly simple:

“Yes. And it can also be explained, right? I mean, the models are trained on current software bases, code bases that are out there, which contain bugs, which contain security flaws. So, therefore, it’s also obvious that new security flaws will be introduced in newly generated code.” — Erik Oltmans, SIGNAL podcast (9:01).

In the AI Boardroom Gap report, we ran an experiment to quantify this. We generated code using several popular large language models and compared them to human-written implementations. Functionally, the AI-generated systems often looked good. But from a security perspective, the picture was different: on average, the AI-generated projects showed roughly double the number of security risk violations compared to the human projects.

[Image: Werner and Erik talking in front of the Software Improvement Group office]

Why traditional technical moats will be easier to replicate with AI

Towards the end of the episode, our conversation shifted from risk to value: what still counts as a technical moat when AI can generate so much code so quickly?

Erik challenged that head-on. He pointed out that we’ve often treated “lots of code” as a moat in itself, and then said, almost casually, that this argument is now losing its power. The traditional metrics—lines of code, development years, sheer engineering scale—no longer tell you much about how defensible a product really is in an AI world. With generative tools, large parts of a system can be recreated much faster than before, at least where the functionality is generic and well understood.

That’s where replicability comes in. If a competitor with access to similar models and a capable team can rebuild most of what you see in a demo, then the fact that it once took “100 person-years” is less meaningful. The volume of code is no longer the main barrier.

“We have also seen that software companies were sort of regarded as very attractive because they had a very strong technical moat, being they had large amounts of software that would be sort of challenging to replicate. And this was sort of safe position, because you knew for sure that 100 person years of effort or maybe 1000 person years of effort would not easily be replicated. This has become irrelevant.” — Erik Oltmans, SIGNAL podcast (15:52).

What remains much harder to copy is the understanding behind the code. Erik summed it up by shifting the focus away from the software itself and towards the knowledge it encodes. He referenced a Financial Times interview with Thoma Bravo, one of the world’s largest, most prominent software-focused private equity firms, stressing exactly this point: it’s not just about the software, but about the domain knowledge.

In other words, it’s no longer about simply being able to build a complex system. It’s about how well the company understands a specific industry, how deeply they know the processes they automate, and how thoughtfully they’ve translated that into architecture and product decisions. AI can help you write code more quickly; it does not automatically give you that domain insight or a robust system design.

For investors, this means looking past the impressive-sounding numbers. A large codebase and a sizeable R&D team still matter, but they are weak indicators of moat on their own. The more useful questions are about what is actually hard to reproduce: the domain knowledge that’s built into the product, and how well it is captured in the software you’re buying.

We should update the investor questions for the AI era

We framed this SIGNAL episode around “AI code liability”, but the core question, “What am I buying?”, has always been the question in software deals.

What AI changes is the opacity and the replicability of what you’re buying. AI now shows up in almost every deal conversation, but not yet in a clear, governed way in most codebases.

It can generate large amounts of working code in seconds, yet that code behaves like a black box and tends to carry more security risk if nobody looks underneath. And the traditional signals of a strong technical moat (a big codebase, a long history, a large team) are getting weaker as generative tools make generic functionality easier to copy.

That doesn’t mean investors should avoid companies that use AI. It does mean the standard “Is the software solid?” question needs an update. After this conversation with Erik, there are a few questions I’d add to every IT due diligence scope:

  • Where and how do you use AI in your development process today, and how do you track that?
  • What guardrails do you have around AI-generated code, and what do they look like in practice?
  • How do you find and fix security and quality issues in AI-generated code today?
  • If a capable competitor had access to AI tools and models, how easily could they rebuild what you’ve built?

You can’t realistically read every line of code in a complex system, especially in a time-pressured deal. But you can insist on clear answers to these questions, and on evidence that someone has looked inside the black box.

If these issues sound familiar in the deals you’re working on, I’d encourage you to listen to the full conversation with Erik Oltmans. If you want an independent view on what AI has really done to the software portfolio you’re considering acquiring, my colleagues and I at Software Improvement Group are always happy to help.

About the author


Werner Heijstek

Werner Heijstek is the Senior Director at Software Improvement Group and host of the SIGNAL podcast, a monthly show where we turn complex IT topics into business clarity.

Experience Sigrid live

Request your demo of the Sigrid® | Software Assurance Platform:
