In part 1 of this series, we explained a three-phase approach to successful, cost-effective legacy system modernization:
Phase 1: Prioritize your systems for modernization
Phase 2: Determine modernization scenarios based on measurements of key architecture risks
Phase 3: Monitor build progress continuously to stay the course
In this article, we’ll delve deeper into phase two and elaborate on why and how six critical architecture risks should be measured and used to develop the modernization project plan.
Successful modernization is incremental and based on insight into architecture risks
“Legacy” is actually much more than just outdated technology. Rather, it’s any pre-existing software solution that’s become too fragile to change, usually due to poor architecture or the loss of team knowledge. That means it can no longer evolve to support the ever-changing business goals for which it was originally created.
Based on our extensive experience in guiding large-scale legacy transformation programs, we’ve identified six key architecture quality risks that must be measured to determine the smartest scenario for system modernization.
Each risk reflects the degree to which a system can be easily changed and evolved. Can changes to various parts be made in isolation, without fear of breaking something elsewhere? Can components be continually updated and adapted as the system grows?
The answers depend on how tightly the system’s components are coupled, and together these six architecture risks provide a clear picture:
- Structure: The grouping of functional and technical areas in a code base
Unlike modern systems with microservices or service-oriented architectures, legacy systems typically lack clear boundaries between functional and technical areas. The extreme example is a huge monolith with various problem areas in one big box. This makes it difficult to navigate issues, distribute maintenance effort, and extend functionality.
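As a rough illustration of how structural risk can be quantified, the sketch below (with invented component names and line counts) computes how much of the code base sits in the single largest component; values near 100% suggest a monolith without clear boundaries:

```python
# Hypothetical line counts per top-level component of a code base;
# in practice these would come from scanning the repository's directories.
component_loc = {
    "core": 480_000,   # one huge "box" holding most of the functionality
    "reports": 15_000,
    "admin": 5_000,
}

def largest_component_share(loc_by_component):
    """Fraction of all code held by the single largest component.
    Values near 1.0 suggest a monolith without clear boundaries."""
    total = sum(loc_by_component.values())
    return max(loc_by_component.values()) / total

print(f"{largest_component_share(component_loc):.0%}")  # 96%
```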
- Communication: The dependencies between functional or technical areas in a code base
A common issue in legacy systems is “spaghetti code,” where dependencies are so intertwined that the functional logic mapped to the code can no longer be analyzed and understood. As a result, no one dares to touch any piece of code, as any change could lead to issues in production.
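Dependency tangles like this can be detected mechanically. A minimal sketch (the module names and sources are invented for illustration) uses Python’s standard `ast` module to extract import relationships and flag modules that import each other:

```python
import ast

# Hypothetical example: the source of three project modules, given inline
# as strings. In a real analysis these would be read from the code base.
MODULES = {
    "billing": "import orders\nimport customers\n",
    "orders": "import billing\n",
    "customers": "",
}

def dependency_graph(modules):
    """Map each module to the set of project modules it imports."""
    graph = {}
    for name, source in modules.items():
        imports = set()
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Import):
                imports.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imports.add(node.module)
        graph[name] = imports & set(modules)  # project-internal deps only
    return graph

def cyclic_pairs(graph):
    """Pairs of modules that import each other -- a simple tangle signal."""
    return {tuple(sorted((a, b)))
            for a, deps in graph.items()
            for b in deps if a in graph.get(b, set())}

print(cyclic_pairs(dependency_graph(MODULES)))  # billing and orders form a cycle
```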
- Data Access: The way in which data is read or written to/from a database or other repository
There’s usually just one big database behind multiple functional areas of a legacy system. When many parts of a system depend on a single database, modifications to the data structure are likely to cascade across all the areas that rely on it, making the database schema too rigid for change.
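One way to surface this risk mechanically is to count how many modules touch each table. A minimal sketch, using hypothetical module names and SQL statements:

```python
import re

# Hypothetical SQL statements found in each module; in a real analysis
# these would be extracted from queries embedded in the code base.
QUERIES = {
    "billing": ["SELECT amount FROM invoices", "UPDATE customers SET balance = 0"],
    "shipping": ["SELECT address FROM customers"],
    "support": ["SELECT email FROM customers"],
}

# Crude pattern for table names following common SQL keywords.
TABLE_RE = re.compile(r"\b(?:FROM|UPDATE|INTO|JOIN)\s+(\w+)", re.IGNORECASE)

def table_fan_in(queries_by_module):
    """Map each table to the set of modules that read or write it."""
    usage = {}
    for module, queries in queries_by_module.items():
        for query in queries:
            for table in TABLE_RE.findall(query):
                usage.setdefault(table, set()).add(module)
    return usage

# 'customers' is shared by all three modules, so a schema change to it
# ripples through billing, shipping, and support at once.
print(sorted(table_fan_in(QUERIES)["customers"]))  # ['billing', 'shipping', 'support']
```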
- Technology Stack: The various technologies used in a code base
Technology is usually the first thing selected for a system, so legacy systems almost inevitably contain older or obsolete technologies, which are often more difficult for developers to work with. Obsolete technologies are a typical impediment to development productivity, because far less knowledge and tooling is available for them in the market than for modern technologies. Even worse, old technologies eventually lose support altogether, which can pose serious risks to business continuity.
- Evolution: The changes made to a code base over time
In legacy systems, it’s difficult to make a change in one area without affecting another, which makes it hard for multiple teams to work independently. This co-evolving behavior should be avoided: not being able to isolate change is one of the biggest obstacles to development productivity.
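Co-evolution is measurable directly from version-control history: files that repeatedly appear in the same commits are coupled in practice, whatever the intended design. A minimal sketch, using an invented commit history rather than real `git log` output:

```python
from collections import Counter
from itertools import combinations

# Hypothetical commit history: each commit is the set of files it touched.
# In practice this would be extracted from `git log --name-only` or similar.
commits = [
    {"orders.py", "billing.py"},
    {"orders.py", "billing.py", "ui.py"},
    {"orders.py", "billing.py"},
    {"ui.py"},
]

def co_change_counts(history):
    """Count how often each pair of files changes in the same commit."""
    pairs = Counter()
    for files in history:
        for pair in combinations(sorted(files), 2):
            pairs[pair] += 1
    return pairs

# orders.py and billing.py co-evolve in 3 of 4 commits -- a sign that
# a change to one cannot be made in isolation from the other.
print(co_change_counts(commits)[("billing.py", "orders.py")])  # 3
```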
- Knowledge: The degree to which knowledge of a code base is distributed among team members
In working with our clients, we see many large legacy systems that depend heavily on just one or two key developers. This poses a huge risk to business continuity: if those key developers leave or retire, sufficient knowledge of the systems will cease to exist.
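Knowledge concentration can likewise be estimated from authorship data. The sketch below uses invented, `git blame`-style line counts to compute the share of each file written by its single main author; values near 1.0 flag a dependency on one person:

```python
# Hypothetical authorship data: lines attributed to each developer per file,
# as could be derived from `git blame`. File and author names are invented.
authorship = {
    "scheduler.c": {"alice": 950, "bob": 50},
    "parser.c": {"alice": 20, "bob": 30, "carol": 25},
}

def knowledge_concentration(lines_by_author):
    """Share of a file's lines written by its single main author.
    Values near 1.0 mean the file depends on one person's knowledge."""
    total = sum(lines_by_author.values())
    return max(lines_by_author.values()) / total

for path, authors in authorship.items():
    print(path, f"{knowledge_concentration(authors):.2f}")
```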
A new model to automate risk measurements
The dependencies underlying these risks must be identified and quantified. Such measurements, however, are highly labor-intensive and too tall an order for most organizations to carry out by hand.
For that reason, we’ve automated the measurements of all six of these aspects as part of a new architecture quality model. This model is based on the existing SIG methodology for standardized, repeatable measurements of source code using the ISO framework for software product quality.
The model’s results inform modernization scenarios – such as rebuilding, refactoring, or re-platforming – as well as decisions about which code to replace and what to (potentially) keep. Even more importantly, the results can be used to produce detailed recommendations per scenario about which system modules to address first. Each scenario must be supported by cost estimates and a comparison of maintenance effort in the first year with the effort thereafter.
The demands of the new digital transformation economy are calling on organizations to stop relying on inflexible software and modernize their core technologies. But the successful road to transformation is incremental and paved with facts about the system’s architecture quality.
Only when these six most critical architecture risks are made clear upfront and in detail, linked to both the code and component levels, can the modernization program be executed with granular focus, the highest productivity, and the lowest cost. Our team is ready and available to create your fact-based, risk-managed roadmap.
Read part 1 in this series: Legacy Modernization: The art of choosing the right scenario
About the Authors
Lijin Zheng is a Consultant at SIG specializing in legacy modernization.
Michel van Dorp is Vice President of Strategic Partnerships at SIG, working with global consulting and technology firms to execute on efficient, cost-effective legacy modernization programs.