AI in financial services: The risks of poor software quality
Summary
Our finance signals 2025 report highlights the growing risks tied to poor software quality as financial services industry (FSI) institutions ramp up their adoption of AI and big data technologies. While the pace of innovation is accelerating, many systems aren't built to support AI reliably at scale.
Our data shows that 73% of AI and big data systems have quality issues, raising concerns about their long-term reliability and compliance. Unlike traditional software, AI requires new approaches to security, maintainability, and compliance, especially as models degrade over time and must be continuously validated and retrained.
Without strong foundations in place, poor AI maintainability can drive up costs and increase risk exposure, limiting the value financial institutions can extract from these technologies.
Read the full report to understand how FSI organizations can build trustworthy, future-ready AI systems.
Introduction
Over the last decade, financial institutions have shifted from manual, paper-based workflows to fully digital operations. As more services move online, they're generating massive amounts of financial data: everything from real-time transaction logs to behavioral insights and spending patterns.
To make use of all this data and improve service delivery, many are now investing heavily in artificial intelligence (AI). And the potential applications range far and wide. According to EY, AI is already reshaping financial services, from automating fraud detection to enabling hyper-personalized customer experiences.
What was once experimental is quickly becoming core to how modern FSI organizations operate.
AI vs. traditional software
While the potential of AI in financial services is huge, it also comes with a different set of risks, many of which traditional software teams aren’t used to managing.
Unlike conventional financial software, which follows clear, pre-defined rules, AI systems learn from data and make decisions autonomously. This makes them powerful, but also unpredictable if not carefully managed.
According to ISO/IEC 5338, co-developed by Software Improvement Group (SIG), AI systems have distinct characteristics: they learn from data, adapt over time, and can interpret or simulate human behaviors, such as speech or vision. Most importantly, they require ongoing retraining to stay accurate and relevant as conditions change.
What sets AI apart from traditional software is that it makes decisions based on patterns, not fixed rules. That means systems can drift over time, degrade in performance, or behave in ways developers didn't intend. This introduces a range of risks, from security vulnerabilities and bias to compliance issues and customer mistrust.
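To make the drift risk concrete, here is a minimal, illustrative sketch of one common monitoring approach: comparing the distribution of a model's recent output scores against a reference sample captured at training time. The function names and thresholds are assumptions for illustration, not taken from the report.

```python
# Illustrative sketch of score-drift monitoring (not from the report).
# A two-sample Kolmogorov-Smirnov test flags when recent production scores
# no longer match the distribution seen when the model was trained.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference_scores, recent_scores, p_threshold=0.05):
    """Return True when the two score distributions differ significantly."""
    statistic, p_value = ks_2samp(reference_scores, recent_scores)
    return p_value < p_threshold

# Synthetic example: production scores have shifted upward since training.
rng = np.random.default_rng(42)
reference = rng.normal(loc=0.30, scale=0.10, size=5_000)   # scores at training time
production = rng.normal(loc=0.45, scale=0.10, size=5_000)  # scores observed this week

if drift_detected(reference, production):
    print("Drift detected: schedule revalidation and possible retraining.")
```

A check like this is only one signal; in practice it would feed into broader validation and governance processes rather than trigger retraining on its own.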
To manage these risks, financial institutions need to approach AI with the same discipline as any other mission-critical system: with strong engineering practices, continuous validation, and thoughtful oversight.
The AI maintainability crisis
Our benchmark data reveals a growing concern in the way AI systems are built and maintained. Most AI and big data systems suffer from low maintainability and poor testability, making them harder to update, monitor, and trust over time.
In fact, 73% of AI and big data systems score below the benchmark average for maintainability, with an average rating of just 2.7 stars, significantly lower than what we see in traditional software systems.
This gap matters. Poor maintainability slows teams down, increases operational costs, limits adaptability, and makes it harder to ensure ongoing accuracy and compliance.

Why is this happening?
Our research highlights two major issues behind poor AI maintainability:
- Complex, bloated code: AI systems often include long, unfocused code blocks that handle multiple responsibilities. This makes them harder to modify, understand, and reuse.
- Lack of testability: AI and big data systems contain just 1.5% test code, compared to 43% in traditional systems. This makes it more difficult to catch errors and increases the risk when updating AI models.
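As a rough illustration of the testability gap, the sketch below shows how keeping feature engineering in small, pure functions makes an AI pipeline step straightforward to unit test. The function and test names are hypothetical examples, not code from any benchmarked system.

```python
# Illustrative sketch: a testable pipeline step (hypothetical example).
# Small, single-purpose functions can be covered by ordinary unit tests,
# catching errors before a model is retrained or redeployed.
import math

def monthly_spend_ratio(total_spend: float, months_active: int) -> float:
    """Average spend per active month; a typical engineered feature."""
    if months_active <= 0:
        raise ValueError("months_active must be positive")
    return total_spend / months_active

# Unit tests (runnable with pytest):
def test_ratio_is_computed():
    assert math.isclose(monthly_spend_ratio(1200.0, 12), 100.0)

def test_rejects_zero_active_months():
    try:
        monthly_spend_ratio(500.0, 0)
    except ValueError:
        pass
    else:
        assert False, "expected ValueError"
```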
Building reliable AI: Best practices
To keep AI stable, secure, and scalable, financial institutions need to move beyond one-off experiments and treat AI like any other enterprise-grade software system. That means applying proven software engineering principles from the start.
Key practices include:
- Applying software engineering best practices: AI development should follow modular, maintainable, and well-documented coding standards.
- Bridging AI and software engineering teams: Involving software engineers in AI projects supports long-term stability and better system integration.
- Strengthening AI governance: FSI organizations must establish clear policies for AI transparency, ethics, and regulatory compliance.
- Implementing continuous validation: AI models degrade over time. Ongoing retraining and rigorous testing are critical to keeping models reliable and effective.
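To show what continuous validation can look like in practice, here is a minimal sketch of a recurring check that re-scores a deployed model on freshly labelled data and flags when accuracy falls below an agreed threshold. The model, data, and threshold are stand-ins for illustration only.

```python
# Illustrative sketch of a continuous-validation check (assumed workflow).
# The deployed model is evaluated on newly labelled data at a fixed cadence;
# retraining is triggered when accuracy drops below the agreed minimum.
from dataclasses import dataclass

@dataclass
class ValidationResult:
    accuracy: float
    needs_retraining: bool

def validate(model, features, labels, min_accuracy=0.90) -> ValidationResult:
    predictions = [model.predict(row) for row in features]
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    return ValidationResult(accuracy, accuracy < min_accuracy)

# Stand-in model that approves transactions below a fixed amount.
class ThresholdModel:
    def predict(self, row):
        return row["amount"] < 1000

fresh_data = [{"amount": 500}, {"amount": 1500}, {"amount": 800}]
fresh_labels = [True, False, False]  # newly labelled outcomes

result = validate(ThresholdModel(), fresh_data, fresh_labels)
if result.needs_retraining:
    print(f"Accuracy {result.accuracy:.0%} below target: trigger retraining pipeline.")
```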
The path forward
AI is still software, but it doesn’t behave like the systems we’ve known before. It learns, evolves, and makes decisions on its own, which means it needs constant oversight and adaptation.
AI’s ability to process large volumes of data and interact through speech, vision, or even robotics makes it incredibly powerful. But that same complexity brings serious risks around security, reliability, and trust.
To ensure AI systems remain safe and effective, FSI organizations must embrace strong engineering practices, maintain clear documentation, and commit to ongoing validation and retraining.
Download our free AI readiness guide for a structured approach to integrating AI into your software landscape.
For more FSI insights, check out our latest report, finance signals 2025, packed with exclusive benchmark data and 25 years of expertise optimizing financial IT. It’s a must-read for CIOs, CTOs, and technology leaders aiming to make confident, strategic decisions in a rapidly evolving environment.