29.01.2025
Reading time: 6-7 minutes

Private equity challenges: The risks of investing in inflated or fake AI

Eddy Boorsma and Ivy Chen


Summary

As the “gold rush” for AI investments in private equity surges, the risks of AI washing and misrepresented capabilities have become a serious challenge. Overvaluation, failed ROI, cybersecurity vulnerabilities, and regulatory breaches are just a few pitfalls threatening these deals. 

To navigate these risks and identify genuine, high-value AI assets, private equity professionals must adopt rigorous AI-focused due diligence strategies.

Key steps include: 

  • Detecting AI washing to confirm advertised AI capabilities are authentic. 
  • Validating model quality to avoid overpaying for assets that effectively underperform. 
  • Assessing engineering maturity to avoid continuity risk.
  • Ensuring compliance and security to prevent legal and reputational damage. 

This article offers actionable insights to safeguard your AI investments, helping you maximize returns while avoiding costly surprises. 


Artificial Intelligence: A value-creation cure or private equity's poison pill? 

Since the launch of ChatGPT in 2022, artificial intelligence has drawn immense interest from all sectors—especially private equity. And for good reason: 
 
The potential for AI-powered acquisitions to skyrocket portfolio value seems limitless. According to one survey, 59% of PE executives believe AI will drive significant value in their PortCos, and 87% of firms currently leveraging AI already see, or expect to see, returns within 18 months. 
 
And the market valuations of the leaders in the space echo this sentiment, despite challengers such as the Chinese DeepSeek initiative. 

But here’s the catch: not all AI is created equal. 

And it is exactly this insight that has the potential to derail tech deals, destroy investment theses, and crush your ROI.  
 
The problem? Oftentimes, the AI package on offer doesn't just seem too good to be true; it is.  

The hidden threat of inflated or fake AI

Misrepresented AI capabilities—or outright “fake AI”—pose a growing risk to private equity deals.

These risks can: 

  • Lead to overvaluation and unmet expectations. 
  • Open the door to cybersecurity vulnerabilities or compliance breaches. 
  • Result in delayed ROI, or worse, failed investments. 

To avoid these pitfalls, you need to separate hype from substance and identify AI assets with true value potential. This requires a robust due diligence framework—or in plain language, an ‘AI Bullsh*t’ meter. 

The dual role of AI in private equity investments

Before we discuss the risks associated with fake AI, let’s look at exactly why artificial intelligence has begun to play such an important role in PE investments. 

Tech-led investments, mergers, IPOs, acquisitions, and LBOs have taken a hit in recent years. Following the M&A boom of the Covid-19 years, things haven't been looking great for the tech sector. That is, until recently.  

With emerging advancements in artificially intelligent software—from AI-powered solutions to deep-learning large language models—experts expect to see a continued comeback for tech M&As in 2025. 

According to the BBC, back in 2022, only 10% of tech start-ups mentioned using AI in their pitches, whereas in 2023 this rose to 25%. OpenOcean, a UK and Finland-based investment fund for new tech firms, expects that more than 33% of tech start-ups will have mentioned AI in their 2024 pitches. 

But this is where we need to start paying attention: how real is the pitch? 

Over-eager tech investors, scared of missing out on the AI gold rush, could be running blindly into wildly overvalued deals. In essence, whilst there is genuine value to be found in AI acquisitions, “separating [this] potential from [the] hype is challenging.”  

Even market leaders show a gap between valuation and performance. 

These inflated valuations can blind investors to the true value of an asset.  

Even more worrying, the excitement generated by the potential value is allowing opportunists, fraudsters, charlatans, and pirates to sell assets disguised as AI at hugely inflated prices—only for the investors to find out post-close that what they’ve acquired is a worthless dud.  

In March 2024, for example, the U.S. Securities and Exchange Commission (SEC) charged the investment advisers Delphia (USA) Inc. and Global Predictions Inc. with "making false and misleading statements about their purported use of artificial intelligence (AI)". Caught in the act, the firms settled, agreeing to pay a combined $400,000 in civil penalties. 

The future of AI investments could be worth upwards of $1 trillion by some estimates, but if investors don’t learn how to separate the wheat from the chaff—and as a result begin to generate meaningful value from their acquisitions—then the AI boom could be just another bubble waiting to burst. 

The 3 major risks of inflated or fake AI for private equity

So, what are the private equity challenges and risks associated with investing in fake or inflated AI?  

Let’s look at three of the key risks to be aware of: 1) AI washing, 2) overvaluation and diminished returns, and 3) ethical issues, compliance, and security vulnerabilities. 

1. AI washing

"AI washing" is the practice of branding solutions as "AI-powered" when, in reality, they deliver little or no AI-driven value. In other words: inflated or fake AI.  

There are different types of AI washing. Some companies say they use AI, but they actually use less powerful technology. Others exaggerate how effective their AI is compared to older methods or claim their AI solutions work when they don’t. And some businesses merely attach an AI chatbot to their existing non-AI software. 

Remember Amazon’s highly-criticized “Just Walk Out” service? 

Amazon's "Just Walk Out" is an AI-powered system that allows customers at Amazon Fresh and Amazon Go locations to pick up items and leave without checking out. The system uses sensors to identify what customers choose and automatically charges them.  

However, reports in April 2024 revealed that over 1,000 workers in India were manually checking nearly 75% of the transactions, raising concerns about how much the system actually relies on AI.  

Amazon denied these reports but admitted that people are involved: the company stated that workers help improve the Just Walk Out system by annotating AI data and reviewing real shopping activity, rather than running the entire process.  
 
No matter who’s right, at the very least this is a worrying development. 
 
In the US, the SEC has expressed clear concerns about businesses either misrepresenting or purposefully exaggerating their software's AI capabilities—something private equity legal teams need to be aware of, as Alejandro Mosquera warned in a recent webinar hosted by Major, Lindsey & Africa.  

2. Overvaluation and diminished returns

Put simply, an AI model is only as good as the engineering process behind it: the source code it's built with and the quality of the data it's trained on. And even then it's often not that simple: delivering an AI system that performs as expected in the real world is very challenging.  
 
Here at Software Improvement Group, we found that nearly 75% of AI systems face severe quality issues, as our colleague Rob van der Veer explained during our IT leadership event SCOPE 2024: 

"AI systems tend to have a very low maintainability score. When we look at them in more detail, we see very little documentation, very little testing, security issues, privacy issues. Why? Well, data scientists – AI engineers – are focused on creating working models; they haven't been trained or managed sufficiently to create working models that need to work in reality." – Rob van der Veer 

Let’s look at a few famous business examples where promising AI systems did not live up to their expectations.

IBM’s Watson

In February 2011, IBM made history when its DeepQA computer Watson won the 'Jeopardy!' TV quiz show.  

Four years later, IBM found a commercial use for Watson and introduced Watson Health. It aimed to revolutionize healthcare by providing insights for oncologists, assisting drug development, and linking patients with clinical trials.  
 
However, it did not meet expectations. While Watson was well-known for its ability to understand natural language, it struggled in practice: a lack of robust testing and validation failed to catch its difficulties with unstructured clinical data and complex diseases. Poor data integration and handling caused part of the problem. Add to these technical challenges a multitude of market and deployment issues, and you can see what led to its downfall.  
 
In January 2022, after spending billions of dollars, IBM announced that it would sell off the healthcare data and analytics assets housed under its Watson Health unit. 

Google Bard

In February 2023, trying to keep up with OpenAI's Microsoft-backed ChatGPT, Google announced 'Google Bard', a new and experimental conversational AI service that would be integrated into its search engine. 

"Use Bard to simplify complex topics, like explaining new discoveries from NASA's James Webb Space Telescope to a 9-year-old," their press release read. And then, even before it was properly launched, it failed.  

What happened? 
In a promotional demo, Bard claimed that the James Webb Space Telescope had taken the very first images of an exoplanet. This was not correct: according to NASA, the first image of an exoplanet was taken by the European Southern Observatory's Very Large Telescope in 2004. 

Following this misinformation, Alphabet shares dropped 7.7%, wiping $100 billion off the company's market value. And that was effectively it for Bard: in early 2024, Google announced that its AI chatbot was being renamed 'Gemini'.  

3. Ethical issues, compliance, and security vulnerabilities

Both the private equity business landscape and the realm of AI are coming under increasing scrutiny from regulatory bodies. AI laws like the EU AI Act, with a potential US equivalent on the horizon, impose strict requirements on AI transparency, quality, and use cases. For PE firms, this means due diligence teams must be extra vigilant when assessing an AI target's compliance with the law.  
 
Failure to conduct a compliance assessment could land you in a whole lot of hot water post-acquisition: financial penalties, costly remediation of the non-compliant model, and even legal action can follow the acquisition of bogus AI. 
 
Secondly, there are the ethical and security concerns associated with inflated AI or poorly built AI systems.  
 
On the ethical side, failure to detect faults in target AI assets can lead to major missteps post-acquisition. 
 
What can go wrong?  Let’s look at a few examples. 

MyCity

To provide New Yorkers with information about starting and running businesses in the city, as well as guidance on housing policy and workers' rights, New York City launched 'MyCity', a Microsoft-powered chatbot. 


However, it was soon reported that the chatbot gave entrepreneurs incorrect information that would lead to them breaking the law. The Markup discovered that 'MyCity' falsely stated that business owners could take a portion of their workers' tips, fire employees who report sexual harassment, and serve food that had been nibbled by rodents. It also incorrectly claimed that landlords could discriminate based on source of income. 

COMPAS

COMPAS, or Correctional Offender Management Profiling for Alternative Sanctions, is a case management algorithm created by Equivant. Used by U.S. courts, it assesses the likelihood of defendants re-offending and is employed in jurisdictions like New York, Wisconsin, California, and Broward County, Florida. 

The ethical problem? Its bias potential.  

In 2016, ProPublica published a study of COMPAS which found that black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk. In addition, the risk scores were "unreliable in forecasting violent crime: only 20% of the people predicted to commit violent crimes actually went on to do so." 

Equivant has stated that these claims have been extensively scrutinized and challenged. The company also clarified that COMPAS is a risk assessment tool designed to provide probabilistic estimates, not definitive classifications, and that it operates within a framework that aims to assist rather than dictate justice-related decisions. 

Either way, these examples paint a grim picture of what can go wrong. This is underscored by the fact that the EU AI Act now provides a basis for taking a system like COMPAS to court. 
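
To make ProPublica's finding concrete: the disparity it reported is a gap in false positive rates between groups, which is straightforward to measure once you have a model's decisions and the observed outcomes. Below is a minimal sketch in Python, using invented data and column names purely for illustration:

    import pandas as pd

    # Hypothetical scoring data: one row per defendant, with the model's
    # risk label and the observed outcome (1 = re-offended).
    df = pd.DataFrame({
        "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
        "high_risk":  [1,   1,   0,   1,   0,   1,   0,   0],
        "reoffended": [0,   1,   0,   0,   0,   1,   0,   0],
    })

    # False positive rate per group: the share of people who did NOT
    # re-offend but were still labeled high risk.
    non_reoffenders = df[df["reoffended"] == 0]
    fpr = non_reoffenders.groupby("group")["high_risk"].mean()
    print(fpr)  # a large gap between groups is the kind of disparity ProPublica reported

During due diligence, the same check can be run on a target's own validation data for any attribute the applicable regulations treat as protected.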

How to verify AI capabilities before the tech acquisition 

Hopefully, by now we’ve established that as more and more PE companies pursue AI acquisitions, the ability to expertly assess the quality and veracity of AI/GenAI targets will become fundamental to the success of the deal.  
 
Likewise, teams will have to incorporate the costs of these new software due diligence measures—such as source code evaluation and product testing—into their investment theses. 
 
The question is, how do you tell if AI is worth the investment?  
 
What if we told you that you can accurately evaluate your potential AI-powered acquisition? At Software Improvement Group, we've developed a comprehensive approach to do exactly that: we make sure you know everything you need to know about the AI systems you are considering acquiring. 

Artificial intelligence offers unprecedented opportunities for private equity, but it comes with significant risks. Rigorous AI validation during due diligence can help PE firms navigate the complexities of this booming market and unlock genuine value. 

But how?  

1. Assess the AI engineering quality

In essence, AI is software. Think of the source code as the skeleton and muscles of the AI model, while the data it is trained on serves as its central nervous system. The software in an AI system prepares and handles the data, orchestrates the model, and integrates it into a solution. 

This underscores the importance of rigorous software engineering practices such as controlling code quality, version management of models and data, supply chain management, automated testing, and documentation. These practices are critical to how reliable, scalable, and secure the system is, and to how well it can be integrated or adapted to new circumstances. In addition, sound technology choices make the difference between being state-of-the-art and getting overtaken. 
 
Poor engineering quality limits scalability, complicates integration, and makes it harder to align the AI asset with your portfolio strategy. Worse, systems lacking proper documentation or maintainability can increase dependency on the original developers, creating risks for continuity if the asset changes hands. 

During due diligence, you want to ensure the AI asset is built with maintainable code and robust engineering standards. If the system falls short, determine whether improvements are feasible and calculate the costs of remediation to include in your valuation. 
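
What does that look like in practice? One telling signal during a technical review is whether the team ships automated tests that pin model and data versions. Below is a minimal sketch of such a test; the module names, file paths, and threshold are hypothetical, not an actual SIG checklist:

    import json

    # Hypothetical project modules; a real assessment works with the target's own code.
    from my_model import load_model, load_holdout_set, accuracy

    MIN_ACCURACY = 0.85  # illustrative floor, agreed with the business

    def test_model_meets_accuracy_floor():
        # Pinned model and data versions make the result reproducible.
        model = load_model("models/v1.4.2.pkl")
        X, y = load_holdout_set("data/holdout_v3.parquet")
        assert accuracy(model, X, y) >= MIN_ACCURACY

    def test_model_metadata_is_documented():
        # Mature teams version models together with their training metadata.
        with open("models/v1.4.2.meta.json") as f:
            meta = json.load(f)
        assert {"training_data_version", "git_commit", "metrics"} <= meta.keys()

If tests like these are absent and models cannot be rebuilt from versioned code and data, that is exactly the continuity risk described above, and a remediation cost to price into the valuation.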

2. Evaluate the dataset

To go back to our analogy: what feeds the nervous system of your to-be-acquired AI solution?
  
How was the data acquired, annotated, and prepared? This is usually a significant undertaking due to the scarcity and specificity of required data. 

Has there been extensive experimentation to determine the most effective data sources, preprocessing techniques, algorithms, and training approaches? And who was involved: a team of data scientists who happened to strike gold, or a skilled, multidisciplinary team that meshes AI expertise with domain-specific knowledge? 

High-quality proprietary data can act as a competitive moat, enhancing asset value and differentiation. However, reliance on poorly maintained or external datasets can expose your investment to performance interruptions or reduced ROI. 

You want to ensure the dataset is well-documented, ethically sourced, and accompanied by long-term access agreements where applicable. This safeguards against interruptions in performance and adds value to the asset’s intellectual property. 
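
Much of this can be verified hands-on. As a minimal sketch, assuming the target can export its training data to a flat file (the file and column names here are hypothetical), a first-pass audit might look like:

    import pandas as pd

    df = pd.read_csv("training_data.csv")  # hypothetical export from the target's pipeline

    # Duplicates and missing values inflate the apparent volume of data.
    print("Rows:", len(df))
    print("Duplicate rows:", df.duplicated().sum())
    print("Missing values per column:")
    print(df.isna().sum())

    # Label balance: a heavily skewed label column can make headline
    # accuracy figures close to meaningless.
    print(df["label"].value_counts(normalize=True))

Simple as they are, checks like these routinely surface red flags that a polished demo never shows.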

3. Analyse compliance risks and model effectiveness

The ever-evolving AI regulatory landscape in the US, EU, and further afield, combined with stricter competition and antitrust laws targeting private equity, means an AI acquisition could lead PE firms into a regulatory minefield. A core component of an 'AI Bullsh*t' meter must, therefore, be an analysis of AI performance metrics, data biases, use cases, and transparency in the context of the intended use, potential misuse, and all relevant regulations.  

We can assess AI models for biases and inaccuracies to ensure compliance with ethical standards. Evaluations include fairness metrics, performance metrics, and actionable strategies to promote equitable, well-performing AI solutions. 

Non-compliant or biased models can expose your investment to significant financial and reputational risks, reducing the asset’s marketability and long-term value. 

You will want to conduct an in-depth compliance assessment during due diligence, focusing on metrics such as fairness, bias detection, and adherence to industry standards. Develop actionable strategies to address gaps before acquisition or integration. 
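
As an illustration of one such fairness metric, the sketch below computes the disparate impact ratio (the 'four-fifths rule' familiar from US employment-selection guidelines) for a hypothetical binary decision model; the data is invented for the example:

    import pandas as pd

    # Hypothetical model decisions (1 = approved) alongside a protected attribute.
    df = pd.DataFrame({
        "group":    ["A"] * 5 + ["B"] * 5,
        "approved": [1, 1, 1, 0, 1, 1, 0, 0, 1, 0],
    })

    # Selection rate per group, then the ratio of the lowest to the highest.
    rates = df.groupby("group")["approved"].mean()
    disparate_impact = rates.min() / rates.max()
    print(rates)
    print(f"Disparate impact ratio: {disparate_impact:.2f}")  # below 0.8 is a common red flag

Which metric is appropriate depends on the use case and the applicable regulation; the point is that these properties are measurable and belong in the due diligence report.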

Conclusion

Artificial intelligence could very well drive the future of tech investments worldwide, especially in the private equity market. The emerging technology is likely to draw upwards of $1 trillion in investments over the next several years, but experts are divided on whether the boom will last… or burst.  
 
One of the major reasons for this uncertainty is the very real risk of massive overvaluation in AI-focused M&A, combined with the challenges of realizing genuine value from AI assets.  
 
Adding AI validation to your IT due diligence strategy is not just important but invaluable to your future PE tech acquisitions. 
 
For further assistance, expert advice, and consultation on AI verification and validation, get in touch with Software Improvement Group today. 
