In today’s market, this is the state of the art of quality management. Quality engineers exist and are working with product teams, DevOps, QA leadership, and operations to improve and measure quality practices.
Testing practices: A high level of test automation, with granular metrics that show actual test quality: how effective the existing tests are at verifying quality risks. Based on these metrics, quality engineers can derive intelligence about the real quality of software projects across the application portfolio.
State of Software Quality Intelligence: Organizations at this stage need Software Quality Intelligence in order to interpret the vast amount of data generated by complex applications and large automated test suites; however, that data is not yet easily accessible or standardized across the organization. Leadership can define more granular quality policies, such as "X% of software functionality must be covered by tests". QA management can focus testers on activity that will improve software quality in production, and actually measure that improvement.
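A policy like "X% of software functionality must be covered by tests" can be reduced to a simple automated check. The sketch below is illustrative only: the function name, the 80% default threshold, and the idea of counting "covered features" are assumptions, not the output format of any particular coverage tool.

```python
# Hypothetical sketch of enforcing a "X% of functionality must be covered"
# policy. Threshold and inputs are illustrative assumptions.

def coverage_meets_policy(covered_features: int, total_features: int,
                          required_pct: float = 80.0) -> bool:
    """Return True if feature coverage satisfies the policy threshold."""
    if total_features == 0:
        return False  # no measurable functionality: fail closed
    actual_pct = 100.0 * covered_features / total_features
    return actual_pct >= required_pct

# Example: 42 of 50 features covered -> 84%, which passes an 80% policy.
print(coverage_meets_policy(42, 50))  # True
```

In practice such a check would run in CI, fed by whatever coverage or feature-mapping data the organization has standardized on.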
Organizations at this stage can achieve “closed-loop” visibility across the SDLC, tying specific decisions made at the planning stage or specific code introduced at the development stage to quality issues in production. They rely on advanced quality intelligence tooling to quantify risk across the software portfolio and make data-driven decisions.
Testing practices: Testing is standardized across the organization, with clear metrics showing to what extent each software project is tested and what its key quality risks are. At all stages of the SDLC, teams can make data-driven decisions based on software quality parameters. Most importantly, development and testing efforts are focused on the features that impact the organization’s users, based on live production metrics.
State of Software Quality Intelligence: With rich data on quality and advanced quality engineering practices, organizations at this stage of maturity can enforce policies such as "all releases must stay within a defined quality-risk threshold before being pushed to production". Automated quality gates ensure that software components are promoted further in the SDLC only if they meet the quality criteria.
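An automated quality gate of this kind is essentially a predicate evaluated over a component's quality metrics at promotion time. This is a minimal sketch under assumed criteria; the metric names, thresholds, and the `QualityReport` structure are hypothetical, not any specific product's API.

```python
# Minimal sketch of an automated quality gate. The three criteria and their
# thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class QualityReport:
    test_pass_rate: float    # fraction of tests passing, 0.0 - 1.0
    coverage_pct: float      # functional coverage, 0 - 100
    open_critical_bugs: int  # unresolved critical defects

def gate_allows_promotion(report: QualityReport) -> bool:
    """Promote a component only if every quality criterion is met."""
    return (report.test_pass_rate >= 0.98
            and report.coverage_pct >= 80.0
            and report.open_critical_bugs == 0)

# A component with a 99% pass rate, 85% coverage, and no critical bugs passes.
print(gate_allows_promotion(QualityReport(0.99, 85.0, 0)))  # True
```

The gate fails closed: any single unmet criterion blocks promotion, which matches the "only promoted if they meet quality criteria" policy described above.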
We consider this the “holy grail” of enterprise quality management. Organizations at this advanced stage can deploy to production automatically, multiple times per day, based on quality-driven criteria. Instead of a Change Advisory Board staffed by a human committee, they adopt an “auto-CAB” mechanism, which can approve even major changes touching multiple software projects, based on cross-organizational quality data. This approach to Data-Driven Change Approval (DDCA) allows risk scoring algorithms to process the vast amount of available Quality Risk data and automatically approve change promotion when the Quality Risk score is compliant with “acceptable risk” policies.
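The core of such a DDCA mechanism can be sketched as a scoring function plus a policy comparison. Assuming, purely for illustration, that each affected project exposes a normalized risk score between 0.0 (safe) and 1.0 (risky), an auto-CAB decision might look like this; the weighting scheme and the 0.3 policy threshold are invented for the example.

```python
# Hypothetical "auto-CAB" decision sketch for Data-Driven Change Approval.
# Per-project risk scores, weights, and the policy threshold are assumptions.

def change_risk_score(project_risks, weights=None):
    """Weighted average of per-project risk scores (0.0 = safe, 1.0 = risky)."""
    weights = weights or {name: 1.0 for name in project_risks}
    total_weight = sum(weights[name] for name in project_risks)
    return sum(project_risks[name] * weights[name]
               for name in project_risks) / total_weight

def auto_cab_approves(project_risks, acceptable_risk=0.3):
    """Approve promotion automatically when the combined score is within policy."""
    return change_risk_score(project_risks) <= acceptable_risk

# A change touching three projects, one of them risky, is escalated:
print(auto_cab_approves({"billing": 0.1, "auth": 0.2, "search": 0.7}))
# False (combined score of about 0.33 exceeds the 0.3 policy)
```

A real risk-scoring algorithm would of course weigh far richer signals, such as change history and production telemetry, but the decision shape stays the same: compute a score, compare it against the acceptable-risk policy.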
Testing practices: Testing is fully automated and auto-correcting, with minimal need for manual maintenance of test suites. Testing resources focus their time on predicting future software defects based on Quality Risks, production data and change history, and taking pre-emptive action to prevent the next production fault.
State of Software Quality Intelligence: The entire organization views quality metrics on one dashboard, and quality decisions can be enforced across the software portfolio at the click of a button. Software Quality Intelligence tooling makes autonomous, data-driven decisions based on a wealth of risk data and historical performance: it automatically promotes acceptable changes, or tightens governance gates to require manual intervention where a change’s computed risk is too high.