Clear metrics. Better decisions. Stronger defense.
Know how secure your software portfolio really is. Track your security posture against industry benchmarks and across different criticality levels, and assess how effectively your teams respond to findings.
Deeper insight. Smarter focus. Lower costs.
See where your time is going and monitor whether it’s paying off.
The dashboard highlights systems that consume excessive capacity, signals when technical debt is slowing you down, and helps prioritize refactoring efforts where they matter most.
Clear summaries. Less noise. Confident decisions.
Analyze what matters, without needing a software engineering degree.
Reporting on security, productivity, strategic progress, and IT spend is easier than ever. The Management Dashboard translates fact-based technical findings into clear business KPIs automatically, so everyone is on the same page and can move forward together.
“With the help of Software Improvement Group, and their platform Sigrid, we can invest more effectively in code quality improvement and development.”
“Software Improvement Group helped us keep control of technical debt and identify where we needed to focus.”
“Tooling like Sigrid provides transparency, allowing us to manage our software proactively and maintain high standards. This is crucial for securely sharing personal data in our digital processes and staying ahead of potential security risks.”
"Software Improvement Group helps us access and interpret the data so that we can improve things better and more quickly."
“It’s more visible what you’re talking about — more direct, not a fussy discussion. We can see what’s going wrong, what’s going good, and what needs more attention.”
Relevant resources
Executives can expect insights on security posture, productivity metrics, IT spend, technical debt, development activities, and progress towards defined objectives. These insights are translated into business KPIs that help in making informed, strategic decisions.
'Shift up' refers to elevating the focus from code-level details to a broader, enterprise-wide perspective. It emphasizes the importance of linking technical execution to strategic business outcomes, ensuring IT decisions are made in alignment with overall business goals.
The security tab in the Management Dashboard helps organizations evaluate and manage their security processes effectively. It answers three key questions:
SIG uses a Static Application Security Testing (SAST) model that ranks software systems from 1 to 5 stars. We evaluate system properties through a thorough analysis of the source code, infrastructure, and other artifacts. The scores for these system characteristics are then mapped to the OWASP Top 10, which identifies the ten most critical web application security risks.
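As a rough illustration of the general idea (the property names, example scores, and the mapping below are hypothetical assumptions for this sketch, not SIG's actual SAST model), rolling per-property scores up into OWASP Top 10 categories could look like this:

```python
# Hypothetical illustration: rolling per-property security scores up into
# OWASP Top 10 (2021) categories. The property names, example scores, and the
# mapping itself are assumptions for this sketch, not SIG's actual model.

# Example per-property scores on the 0.5-5.5 scale (higher is better).
property_scores = {
    "input_validation": 4.2,
    "authentication_handling": 3.1,
    "dependency_hygiene": 2.4,
    "secrets_management": 4.8,
}

# Illustrative mapping of system properties onto OWASP Top 10 categories.
owasp_mapping = {
    "input_validation": "A03:2021 Injection",
    "authentication_handling": "A07:2021 Identification and Authentication Failures",
    "dependency_hygiene": "A06:2021 Vulnerable and Outdated Components",
    "secrets_management": "A02:2021 Cryptographic Failures",
}

# List the weakest areas first, since that is where security risk concentrates.
for prop, score in sorted(property_scores.items(), key=lambda kv: kv[1]):
    print(f"{owasp_mapping[prop]}: {score:.1f} ({prop})")
```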
For more information, see our documentation.
Our 5-star rating reflects compliance with security best practices.
It's important to note that even a 5-star rating doesn't guarantee complete security, but it does indicate that security considerations have been factored into the design and implementation.
To ensure teams stay on top of new security findings, Sigrid integrates with communication tools like Slack. This allows teams to receive automatic notifications of new findings in real-time, fostering a culture of security awareness and enabling prompt action to address issues as they arise.
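As a minimal sketch of how such a notification could be wired up (the webhook URL and finding fields below are placeholders, and this is the generic Slack incoming-webhook pattern rather than Sigrid's built-in integration), a new finding could be forwarded to a channel like this:

```python
# Minimal sketch: posting a new finding to a Slack channel via an incoming
# webhook. The webhook URL and finding fields are placeholders; this shows the
# generic pattern, not Sigrid's built-in Slack integration.
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify_slack(finding: dict) -> None:
    """Post a short, human-readable summary of a finding to Slack."""
    message = {
        "text": (
            f":rotating_light: New {finding['severity']} finding in "
            f"{finding['system']}: {finding['title']}"
        )
    }
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # Slack responds with "ok" on success

notify_slack({
    "severity": "high",
    "system": "payments-service",
    "title": "Outdated TLS library in use",
})
```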
For more information, see our GitLab page.
The quality tab in the Management Dashboard helps organizations assess and improve software quality and maintainability. It answers three key questions:
Evaluating code quality normally requires context, and every piece of software lives in a very different one. At SIG, we use technology-agnostic source code analysis to assess quality and compare it against a benchmark. A benchmark matters because it provides an impartial standard to measure performance against: it is grounded in “the current state of the software development market,” so you can measure your source code against that of others.
To compare across programming technologies, the metrics are abstractions that exist in every language, such as the volume of code and the complexity of decision paths. System size can therefore be normalized to “person-months” or “person-years,” the amount of developer effort the system represents over a given time frame. These figures, too, are derived from benchmarks.
Sigrid compares the analysis findings for your system against a benchmark of over 30,000 industry systems. The benchmark set is selected and rebalanced annually to stay aligned with current software development trends. “Balanced” here means a representative distribution of the “system population,” covering everything from legacy technologies to contemporary JavaScript frameworks. In terms of technology, it is weighted toward the most commonly used programming languages, as this best reflects the existing landscape. The benchmark metrics approximate a normal distribution, which confirms that the set is a fair representation and enables statistical analysis of “the population” of software systems.
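As a small illustration of this normalization (the productivity figures below are made-up placeholders, not SIG's benchmark data), converting code volume into person-months could look like this:

```python
# Illustrative sketch: normalizing system size to "person-months" using
# benchmark productivity figures. The numbers below are made-up placeholders,
# not SIG's actual benchmark data.

# Example system: lines of code per technology.
system_volume = {"java": 250_000, "typescript": 80_000}

# Assumed benchmark productivity: lines of code per person-month, per technology.
benchmark_productivity = {"java": 3_500, "typescript": 4_000}

person_months = sum(
    loc / benchmark_productivity[tech] for tech, loc in system_volume.items()
)
print(f"Estimated build effort: {person_months:.0f} person-months "
      f"(~{person_months / 12:.1f} person-years)")
```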
For more information, see our documentation.
Compared against this benchmark, the code quality score is expressed as a star rating from 1 to 5 stars, following a 5%-30%-30%-30%-5% distribution. Technically, the underlying scores range from 0.5 to 5.5 stars; this convention avoids a “0” rating, since zero has no meaning on a code quality scale. The central 30% falls between 2.5 and 3.5, and every score in that range counts as 3 stars, the market average.
While 50% of systems will by definition score below the average (3.0), only 35% fall below the 3-star bracket (below 2.5), and another 35% score above it (above 3.5). To avoid suggesting extreme precision, it helps to read these star ratings as ranges; a score of 3.4 stars, for example, means “within the expected range of the market average, leaning toward the higher side.” Note that ratings are rounded down, with a maximum precision of 2 decimal places, so a score of 1.49 stars is rounded down to 1 star.
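As a small sketch of the bracketing and rounding convention described above (an illustration, not SIG's implementation), mapping a continuous 0.5-5.5 score onto whole stars could look like this:

```python
import math

def star_rating(score: float) -> int:
    """Map a continuous 0.5-5.5 score onto a whole-star rating.

    A sketch of the convention described above, not SIG's implementation:
    scores are bracketed so that, for example, anything from 2.5 up to 3.5
    counts as 3 stars (the market average), and 1.49 stays at 1 star.
    """
    bracket = math.floor(score + 0.5)   # e.g. 2.5 <= score < 3.5 -> 3 stars
    return max(1, min(5, bracket))      # clamp to the 1-5 star scale

assert star_rating(1.49) == 1   # rounded down, as in the example above
assert star_rating(3.4) == 3    # "market average, leaning toward the higher side"
assert star_rating(5.5) == 5    # top of the technical 0.5-5.5 range
```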
For more information, see our documentation.
Our findings indicate that low maintainability in code (for instance, 2-star systems) can cause a 40% reduction in capacity for routine maintenance. Conversely, high maintainability (4-star systems) can increase capacity for innovation and improvement by up to 30%.
Objectives in Sigrid® are targets you can set and then compare against system status and quality trends. They are non-functional requirements that indicate where you want your systems to be in terms of various quality characteristics. Examples include desired maintainability, new code quality, a minimum test code ratio, or a maximum number of medium-risk vulnerabilities in libraries.
For more information, see our Portfolio objectives page.
To set objectives in Sigrid®, you can select the Objectives tab from the menu bar. You can define portfolio objectives by clicking the “Add Portfolio Objective” button, which will guide you through configuring the objective using dropdown menus for capability, type, and value. You can then apply the objective to a group of systems based on shared metadata such as technology category, business criticality, lifecycle phase, or deployment type.
For more information, see our Portfolio objectives page.