How to measure code quality
Summary
Code quality can be measured through maintainability (which we also often refer to as build quality). Maintainability is something you measure, not guess: use a small, consistent set of signals, benchmark them, and govern improvements across your portfolio.
Key takeaways
Maintainability is measurable and it moves with focused effort.
A tight set of metrics beats a sprawling KPI list.
Benchmarking converts raw numbers into decisions your leaders can act on.
Portfolio governance keeps improvements on track across teams and vendors.
What is maintainability and why measuring it matters
In a nutshell, maintainability is how effectively and efficiently a system can be modified to improve it, correct it, or adapt it to change. At Software Improvement Group we often refer to it as ‘build quality’.
Luckily, maintainability isn’t binary; it can be measured on a scale. And the benefits are clear: a high maintainability rating has proven to lower risk and cost, speed up delivery, make systems twice as likely to meet higher security compliance levels, and increase innovation capacity.
You would probably assume maintainability is a high priority for many organizations. However, reality paints a different picture.
Our latest State of Software report shows that 44% of systems fall below our recommended maintainability rating, which needlessly increases cost and risk and lowers productivity.
Maintainability in ISO/IEC 25010 (the global standard)
Before diving into what ISO/IEC 25010 covers and how it relates to maintainability, let’s quickly define what ISO/IEC stands for.
What is ISO/IEC?
ISO and IEC are international organizations that develop and publish standards to ensure the safety, quality, and efficiency of products, services, and systems. The International Organization for Standardization (ISO) sets standards for a wide range of industries, while the International Electrotechnical Commission (IEC) focuses specifically on electrical, electronic, and related technologies.
What are the software maintainability metrics in ISO/IEC 25010?
ISO/IEC 25010 is the global standard that defines maintainability through five sub-characteristics you can observe in real code:
- Modularity — Is the system divided into cohesive, low-coupling components?
- Reusability — Can existing code be reused safely in new contexts?
- Analyzability — How quickly can you understand the impact of a change?
- Modifiability — How much effort is required to implement a change?
- Testability — How readily can you verify a change?
These factors shape developer throughput, change risk, and downstream qualities like reliability and security.
How Software Improvement Group measures maintainability
At Software Improvement Group, we measure maintainability using a 1–5 star model grounded in source-code facts and validated across industries. The model uses a TÜViT-certified dataset representing the global market, so scores are comparable and audit-ready.
Software Improvement Group’s TÜViT-Certified Maintainability Model
Our SIG/TÜViT Evaluation Criteria for Trusted Product Maintainability provides a certified mapping from ISO 25010’s abstract sub-characteristics to 9 measurable system properties in source code: volume, unit size, duplication, unit complexity, module coupling, unit interfacing, component balance, component independence, and component entanglement.
This certification is validated by TÜViT.
What is TÜViT?
TÜViT is a renowned IT service provider and an independent testing institute for IT security and cybersecurity. Because our model is certified by this independent testing authority, we can ensure that our 1–5 star ratings are objective, repeatable, and audit-ready across any technology stack.
The 5 key software maintainability metrics to track
To prevent metric overload and help teams focus on what to track and improve, we’ve grouped these properties into 5 user-friendly software maintainability metrics.
Think of them as buckets that each draw on multiple formal properties, all grounded in the SIG/TÜViT Evaluation Criteria for Trusted Product Maintainability.
Let’s take a look at each of these software maintainability metrics.
1. Structure & modularity
- What to track: component boundaries, coupling between modules, cohesion within modules.
- Why it matters: clean boundaries make analysis and change safer; tangled code multiplies side effects.
- How to measure: static analysis of architecture layers and dependency graphs; look for stable layering and limited cross-component calls (see the sketch below).
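As a rough illustration (not SIG's tooling), the sketch below approximates module coupling for a Python codebase by counting first-party import dependencies and reporting fan-in/fan-out per module. The `src` layout is an assumption; real architecture analysis tracks far more relation types than imports.

```python
# A rough sketch (not SIG's tooling): approximate module coupling by
# counting first-party import dependencies between Python modules.
# The "src" root is an assumption about the project layout.
import ast
from collections import defaultdict
from pathlib import Path

SRC_ROOT = Path("src")  # hypothetical source root

def module_name(path: Path) -> str:
    # src/pkg/module.py -> "pkg.module"
    return ".".join(path.relative_to(SRC_ROOT).with_suffix("").parts)

def imported_modules(path: Path) -> set[str]:
    tree = ast.parse(path.read_text(encoding="utf-8"))
    found: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module)
    return found

def coupling_report() -> dict[str, dict[str, int]]:
    modules = {module_name(p): p for p in SRC_ROOT.rglob("*.py")}
    fan_in: dict[str, int] = defaultdict(int)
    fan_out: dict[str, int] = defaultdict(int)
    for name, path in modules.items():
        for dep in imported_modules(path):
            if dep in modules and dep != name:  # first-party dependencies only
                fan_out[name] += 1
                fan_in[dep] += 1
    return {m: {"fan_in": fan_in[m], "fan_out": fan_out[m]} for m in modules}

if __name__ == "__main__":
    # Modules with very high fan-in or fan-out are candidates for decoupling.
    for mod, stats in sorted(coupling_report().items(), key=lambda kv: -kv[1]["fan_in"]):
        print(f"{mod:40s} fan-in={stats['fan_in']:3d} fan-out={stats['fan_out']:3d}")
```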
2. Duplication (DRY)
- What to track: exact and near-duplicate code across the codebase.
- Why it matters: duplication accelerates bugs and slows change because fixes must be applied in many places.
- How to measure: clone detection; trend the percentage of duplicated lines and the number of duplicate blocks in critical components (see the sketch below).
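A minimal sketch of exact-clone detection, assuming a Python codebase under `src`: it hashes sliding windows of normalized lines and reports blocks that appear more than once. Production clone detectors also catch near-duplicates (renamed or slightly edited clones), which this toy version misses.

```python
# A rough sketch (not SIG's tooling): find exact duplicate blocks by hashing
# sliding windows of normalized source lines. The 6-line window and the
# "src" root are arbitrary assumptions.
import hashlib
from collections import defaultdict
from pathlib import Path

WINDOW = 6              # minimum block size, in lines, to count as a clone
SRC_ROOT = Path("src")  # hypothetical source root

def normalized_lines(path: Path) -> list[str]:
    # Strip indentation and blank lines so layout differences don't hide clones.
    return [ln.strip() for ln in path.read_text(encoding="utf-8").splitlines() if ln.strip()]

def find_clones() -> dict[str, list[tuple[str, int]]]:
    seen: dict[str, list[tuple[str, int]]] = defaultdict(list)
    for path in SRC_ROOT.rglob("*.py"):
        lines = normalized_lines(path)
        for i in range(len(lines) - WINDOW + 1):
            digest = hashlib.sha1("\n".join(lines[i:i + WINDOW]).encode()).hexdigest()
            seen[digest].append((str(path), i))
    # Keep only blocks that occur in more than one place.
    return {h: locations for h, locations in seen.items() if len(locations) > 1}

if __name__ == "__main__":
    clones = find_clones()
    print(f"{len(clones)} duplicated {WINDOW}-line blocks found")
    for locations in list(clones.values())[:10]:
        print("  clone occurrences:", locations)
```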
3. Complexity & readability
- What to track: cyclomatic complexity, nesting depth, parameter counts, function length, naming clarity.
- Why it matters: high complexity reduces analyzability and increases regression risk.
- How to measure: per-method and per-module complexity statistics; flag “hot” files that change often and are complex (see the sketch below).
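A rough sketch of per-function complexity, assuming Python sources under a hypothetical `src` root: it counts branching nodes in each function's AST as an approximation of cyclomatic complexity. The node list and the threshold of 10 are simplifications, not a certified counting rule.

```python
# A rough sketch (not a certified counting rule): approximate cyclomatic
# complexity per function by counting branching nodes in the AST.
import ast
from pathlib import Path

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp, ast.comprehension)
THRESHOLD = 10  # commonly used (but arguable) "too complex" cut-off

def function_complexities(path: Path) -> list[tuple[str, int]]:
    tree = ast.parse(path.read_text(encoding="utf-8"))
    results = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Start at 1 (single path) and add one per decision point in the body.
            score = 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            results.append((node.name, score))
    return results

if __name__ == "__main__":
    for path in Path("src").rglob("*.py"):  # hypothetical source root
        for name, score in function_complexities(path):
            if score > THRESHOLD:
                print(f"{path}:{name} complexity ~ {score}")
```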
4. Test coverage & change validation
- What to track: unit/integration test coverage, mutation testing scores, flakiness, build health.
- Why it matters: testable systems validate changes quickly and safely; low coverage slows delivery.
- How to measure: coverage reports per critical component; track failure causes and mean time to fix test issues (see the sketch below).
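A minimal sketch, assuming a Cobertura-style coverage.xml (such as the output of coverage.py's `coverage xml` or pytest-cov): it rolls line coverage up per package and flags packages below a target. The 80% target is an arbitrary example, not a universal threshold.

```python
# A rough sketch, assuming a Cobertura-style coverage.xml (e.g. produced by
# coverage.py's "coverage xml" or pytest-cov): roll line coverage up per
# package and flag packages below a target. The 80% target is an example.
import xml.etree.ElementTree as ET

TARGET = 0.80  # hypothetical minimum line coverage for critical components

def package_coverage(report_path: str = "coverage.xml") -> dict[str, float]:
    root = ET.parse(report_path).getroot()
    return {
        package.get("name", "?"): float(package.get("line-rate", "0"))
        for package in root.iter("package")
    }

if __name__ == "__main__":
    for name, rate in sorted(package_coverage().items(), key=lambda kv: kv[1]):
        marker = "LOW" if rate < TARGET else "ok "
        print(f"{marker} {name:40s} {rate:.0%}")
```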
5. Change stability (“hotspots”)
- What to track: files/modules with high change frequency and defect density (past 3–6 months).
- Why it matters: hotspots concentrate risk; they’re prime targets for refactoring.
- How to measure: combine VCS history (commits, churn) with incident/bug data to locate and prioritize hotspots (see the sketch below).
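A minimal sketch of hotspot mining, assuming the script runs inside the repository: it ranks files by commit frequency over the last six months using `git log`. Joining the result with defect or incident data, the second half of the measurement, is intentionally left out here.

```python
# A rough sketch: rank change hotspots by commit frequency over the past
# six months using git history. Run inside the repository; joining the
# result with defect/incident data is intentionally left out here.
import subprocess
from collections import Counter

def change_frequency(since: str = "6 months ago") -> Counter:
    # --name-only lists the files touched by each commit; the empty
    # --pretty format suppresses the commit headers themselves.
    output = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line.strip() for line in output.splitlines() if line.strip())

if __name__ == "__main__":
    for path, changes in change_frequency().most_common(15):
        print(f"{changes:4d}  {path}")
```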
From metrics to insight: benchmarking that actually guides action
Still, raw metrics can be noisy. To make them useful, benchmark them against a large, comparable dataset and roll them up into evidence-based ratings your leadership team understands (a toy illustration of this roll-up follows the list below).
That’s where Sigrid®, our software portfolio governance platform, helps. It benchmarks your code against the world’s largest software metric database of 400+ billion lines of code across 30,000+ systems and 300+ technologies.
With portfolio-level views you can:
- Set target stars per system and track progress over time.
- Compare teams and vendors on the same scale.
- Tie improvements to business outcomes (reduced maintenance effort, faster change, lower incident risk).
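To make the benchmarking idea concrete, here is a toy illustration (not SIG's certified model): a single raw metric is converted into a star rating by comparing it against thresholds that would, in practice, be derived from a large benchmark dataset.

```python
# A toy illustration of benchmarking (not SIG's certified model): map a raw
# metric to a star rating using thresholds that would, in practice, come
# from a large benchmark dataset. Thresholds and systems below are made up.
def star_rating(duplication_pct: float,
                thresholds: tuple[float, float, float, float] = (3.0, 5.0, 10.0, 20.0)) -> int:
    """Lower duplication is better; thresholds mark the 5/4/3/2-star boundaries."""
    for stars, limit in zip((5, 4, 3, 2), thresholds):
        if duplication_pct <= limit:
            return stars
    return 1

if __name__ == "__main__":
    for system, dup in {"billing": 2.1, "web-shop": 7.4, "legacy-core": 26.0}.items():
        print(f"{system:12s} duplication={dup:5.1f}%  rating={star_rating(dup)} stars")
```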
Explore Sigrid®, our software portfolio governance platform, to benchmark maintainability across your entire portfolio
Frequently asked questions
What is software maintainability?
It’s the degree to which software can be modified effectively and efficiently to improve it, correct it, or adapt it to change. It’s measured on a scale, not a yes/no property.
How do you measure code maintainability?
Track structural metrics (modularity, duplication, complexity), testability, and change stability. Benchmark them against an external dataset, then govern improvements over time.
What is the difference between maintainability and build quality?
None in practice: build quality is our plain-English term for maintainability. It rolls up the sub-characteristics into an actionable rating your teams can use.
How does maintainability relate to security?
Stronger build quality makes it easier to apply and maintain security controls and reduces change risk. In our data, systems with above-average build quality are twice as likely to meet higher security compliance levels.