The Software Quality Governance Framework

Software Quality Governance is the automated identification, management, and control of every Quality Risk across the entire end-to-end delivery pipeline, for every single software change.

In today’s enterprises, DevOps is changing the way software is delivered. Software underpins digital transformation projects, and these projects can involve dozens of builds a day, countless tools supporting the delivery pipelines, multiple personas, and extremely complex integrated systems.

The three primary challenges are complexity, velocity, and visibility.

Without a holistic solution to these three issues, organizations cannot protect the integrity of production and ensure that quality risks are mitigated as early as possible. Software Quality Governance is what organizations need to make truly data-driven decisions, driving toward zero defects in production.

The Software Quality Maturity Model

SeaLights developed a maturity model to help enterprises benchmark their software quality environment, and understand where they are on their journey towards Software Quality Governance.

An organization can start implementing Software Quality Governance at any stage. However, as organizations proceed to stages 3, 4, and 5, Software Quality Governance becomes a necessity driven by the need to make data-driven decisions to meet the complexity and speed of change.

Stage 3

In today’s market, this is the state of the art of quality management. Quality engineers exist and work with product teams, DevOps, QA leadership, and operations to improve and measure quality practices.

Testing practices: A high level of test automation, with granular metrics showing actual test quality, that is, how effective the existing tests are at verifying quality risks. Based on these metrics, quality engineers can derive intelligence about the real quality of software projects across the application portfolio.

State of Software Quality Governance: Organizations at this stage need Software Quality Governance in order to interpret the vast amount of data generated from complex applications and large automated test suites. However, the data is not easily available and is not standardized across the organization. Leadership can define more granular quality policies such as “X% of software functionality must be covered by tests”. QA management can focus testers on activity that will improve software quality in production, and actually measure this improvement.

Stage 4

Organizations at this stage can achieve “closed-loop” visibility across the SDLC, tying specific decisions made at the planning stage, or specific code introduced at the development stage, to quality issues in production. They rely on advanced quality intelligence tooling to quantify risk across the software portfolio and make data-driven decisions.

Testing practices: Testing is standardized across the organization, with clear metrics showing to what extent each software project is tested, and what are its key quality risks. At all stages of the SDLC, teams can make data-driven decisions based on software quality parameters. Most importantly, development and testing efforts are focused on the features that impact the organization’s users, based on live production metrics.

State of Software Quality Governance: With rich data on quality and advanced quality engineering practices, organizations at this stage of maturity can enforce policies like “all releases must meet X level of quality risk before being pushed to production”. Automated quality gates ensure that software components are only promoted further in the SDLC if they meet quality criteria.

Stage 5

We consider this the “holy grail” of enterprise quality management. Organizations at this advanced stage can deploy automatically to production multiple times per day based on quality-driven criteria. Instead of a Change Advisory Board staffed by a human committee, they adopt an “auto-CAB” mechanism, which can approve even major changes touching multiple software projects, based on cross-organizational quality data. This approach, Data-Driven Change Approval (DDCA), allows risk-scoring algorithms to process the vast amount of available Quality Risk data and automatically approve change promotion when the Quality Risk score complies with “acceptable risk” policies.
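As a sketch of how such a risk-scoring gate might work (the class, field names, and thresholds below are illustrative assumptions, not a SeaLights API), a scoring function can aggregate per-change Quality Risk signals and auto-approve promotion only when the score is within the acceptable-risk policy:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class QualityRisk:
    """A single measurable quality signal attached to a change (illustrative)."""
    name: str
    severity: float    # impact if the risk manifests, 0.0..1.0
    likelihood: float  # estimated probability it manifests, 0.0..1.0

def risk_score(risks: List[QualityRisk]) -> float:
    """Aggregate individual risks into one change-level score.

    A simple expected-impact sum; production scoring algorithms would also
    weigh change history and cross-organizational data.
    """
    return sum(r.severity * r.likelihood for r in risks)

def auto_cab_decision(risks: List[QualityRisk], acceptable_risk: float = 0.5) -> str:
    """Approve promotion automatically when the score meets the
    'acceptable risk' policy; otherwise fall back to manual review."""
    return "approve" if risk_score(risks) <= acceptable_risk else "manual-review"
```

Under these assumptions, a small refactor with low severity and likelihood is auto-approved, while a poorly tested change on a critical path is routed to a human reviewer.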

Testing practices: Testing is fully automated and auto-correcting, with minimal need for manual maintenance of test suites. Testing resources focus their time on predicting future software defects based on Quality Risks, production data and change history, and taking pre-emptive action to prevent the next production fault.

State of Software Quality Governance: The entire organization views quality metrics on one dashboard, and quality decisions can be enforced across the software portfolio at the click of a button. Software Quality Governance tooling makes autonomous data-driven decisions, based on a wealth of risk data and historical performance, to allow automated promotion of acceptable changes or to tighten governance gates and require manual intervention where a change’s risk is computed as too high.

Software Quality as an Integral Part of Digital Transformation

In today’s enterprises, digital transformation initiatives are increasingly complex and fast-paced. Software underpins digital transformation projects and these projects can span hundreds of applications, each potentially using different technology stacks, some built as legacy monoliths, and some composed of dozens or even hundreds of microservices.

Increasingly, digital transformation initiatives are managed using complex delivery frameworks with high development velocity, and an uncompromising focus on delivering value to customers and business users.

Software quality rarely takes a front seat in enterprise development projects, but it is at the back of everyone’s mind. Nobody can afford to ignore quality, and it is clear that software quality can make or break any digital transformation project.

However, with high complexity and ever-faster development cycles, consistently high quality is difficult to achieve. Software quality is becoming a business risk, as well as an opportunity for forward-looking enterprises.

A Framework for Software Quality Governance

Software Quality Governance can have tremendous benefits for digital transformation initiatives. But where can organizations start? Below we define a structured process for implementing Software Quality Governance, and a detailed plan for implementing governance principles into an existing development process.

The Software Quality Governance Process

The Software Quality Governance process focuses on the concept of Quality Risks.

A Quality Risk is a measurable aspect of a software component indicating that the component has a higher probability of experiencing a defect that will impact users.
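In practice, a Quality Risk can be expressed as a rule over measurable component metrics. A minimal sketch, with hypothetical metric names and thresholds:

```python
# Hypothetical rules mapping measurable metrics to named Quality Risks;
# real thresholds would be calibrated from the organization's defect history.
RISK_RULES = {
    "low_change_coverage":  lambda m: m["changed_code_coverage"] < 0.4,
    "high_complexity":      lambda m: m["cyclomatic_complexity"] > 15,
    "defect_prone_history": lambda m: m["defects_last_90_days"] > 3,
}

def detect_quality_risks(metrics: dict) -> list:
    """Return the named Quality Risks present for a component,
    given its measured metrics (a sketch, not a specific product's API)."""
    return [name for name, rule in RISK_RULES.items() if rule(metrics)]
```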

Four Tiers of Software Quality Governance

The Software Quality Governance process operates at four organizational tiers. We’ll explore each tier, from top level management down to individual development and quality teams.

The organization’s leadership must be aware of the importance of quality, and must define metric-driven policies in the following areas:

  • Change management: defining what types of changes require what levels of quality effort and control.
  • Regulatory compliance: defining which regulations or standards affect different parts of the software portfolio, and which quality criteria are required to mitigate regulatory risk.
  • Acceptable risk: defining which parts of the software portfolio can withstand different levels of risk. For example, mission critical systems may have a strict uptime SLA, while other systems may have more tolerance for production defects.
  • Scope: defining the exact scope of each Software Governance Policy in terms of the exact software projects and components affected.
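Policies in these four areas lend themselves to a policy-as-code representation. A minimal sketch, where every record name, field, and threshold is an assumption for illustration:

```python
# Illustrative policy records covering acceptable risk, compliance, and scope.
POLICIES = [
    {
        "name": "mission-critical-risk",
        "scope": ["payments-service", "billing-api"],  # exact components covered
        "max_risk_score": 0.3,        # strict uptime SLA, low risk tolerance
        "min_test_coverage": 0.8,
        "required_standards": ["PCI-DSS"],
    },
    {
        "name": "internal-tools-risk",
        "scope": ["admin-dashboard"],
        "max_risk_score": 0.7,        # more tolerance for production defects
        "min_test_coverage": 0.5,
        "required_standards": [],
    },
]

def policies_for(component: str) -> list:
    """Resolve which governance policies apply to a software component."""
    return [p for p in POLICIES if component in p["scope"]]
```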
Quality gates are the operational mechanism that allows management to enforce its quality policies across the organization. A quality gate recognizes a pull request or deployment request that can impact a production system, identifies which policies are relevant to the software component, and enforces a go/no-go decision based on standardized quality metrics and computational risk scoring.
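Such a gate can be sketched as a function that matches a change's component against the applicable policy records and returns a go/no-go decision (the structure and field names are assumptions for this sketch, not a specific product's API):

```python
# Illustrative policy records; in practice these come from the governance tier.
POLICIES = [
    {"name": "mission-critical", "scope": ["payments-service"],
     "max_risk_score": 0.3, "min_test_coverage": 0.8},
]

def quality_gate(component: str, metrics: dict, policies=POLICIES) -> str:
    """Go/no-go decision for a pull or deployment request (a sketch).

    metrics is assumed to carry standardized quality measurements,
    e.g. {"risk_score": 0.4, "test_coverage": 0.7}.
    """
    for policy in (p for p in policies if component in p["scope"]):
        if metrics["risk_score"] > policy["max_risk_score"]:
            return "no-go"   # computed risk exceeds the policy ceiling
        if metrics["test_coverage"] < policy["min_test_coverage"]:
            return "no-go"   # insufficient verified coverage
    return "go"              # all applicable policies satisfied
```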

The preceding two tiers rely on rich data about the software change, change complexity, testing activity and underlying quality of all software components being developed across the enterprise, including live production insights. This data can be collected from production systems and across the SDLC and analyzed using AI and big data techniques. The end goal is to connect testing and other risk mitigation activity and quality metrics to business requirements as well as cross-organizational policies.

At the level of individual development and QA teams, granular quality metrics must be available to enable data-driven quality decisions. From the outset of a development project, teams can orient themselves to the quality goals set by the organization:

  • Dev teams should have access to change history, code complexity, and code quality metrics and focus on continuously improving these.
  • Testing and DevOps teams should focus on true test coverage across all testing levels (not merely unit testing code coverage), test quality, and the frequency of test cycles.
  • Operations and SRE teams should focus on known technical debt in production systems, unused code, and a history of defects and errors that can be used to predict future quality risks.
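The "true test coverage" idea above can be illustrated as the union of code exercised at every test level, rather than unit-test coverage alone. A sketch, assuming each level reports the set of methods it executed:

```python
def true_coverage(covered_by_level: dict, all_methods: list) -> float:
    """Fraction of methods exercised by any test level (a sketch).

    covered_by_level maps a test level to the set of methods it executed,
    e.g. {"unit": {...}, "api": {...}, "integration": {...}};
    all_methods is assumed non-empty.
    """
    covered = set().union(*covered_by_level.values())
    return len(covered & set(all_methods)) / len(all_methods)
```

For example, unit tests covering {a, b} and API tests covering {b, c} yield 75% true coverage over four methods, even though no single level exceeds 50%.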

Instrumenting the Enterprise for Software Quality Governance with SeaLights