The SeaLights test metrics guide
for better and faster CI/CD
Software Quality Metrics: Selecting the Right Metrics for Any Project
What is Software Quality?
Software quality measures whether or not software fulfills its functional and nonfunctional requirements. In Agile frameworks, requirements are called user stories.
- Functional requirements—what the software needs to do, which may include user-facing functionality, back-end calculations, external integrations and database manipulation.
- Nonfunctional requirements—how the system is meant to work, which may include privacy, security, disaster recovery, and usability.
Just because a user can perform a task using the software – for example, log in or view some required information – that doesn’t mean the software is of high quality. We have all used software that fulfilled its basic function, but we were still not satisfied with it.
If a user is able to carry out the function they expected, the functional requirements are met. If the user is not happy with “how” these requirements are met, there is an issue with the nonfunctional requirements.
Aspects of Software Quality
Software quality can be defined along four dimensions:
- Reliability—the likelihood of failure and the stability of the software.
- Performance and efficiency—how an application consumes resources, and how this relates to customer satisfaction, scalability and response times.
- Security—how effectively an application guards information against software breaches. Security can be affected by software quality, code quality, and known vulnerabilities in software components included in the application, particularly open source.
- Maintainability and code quality—how easily you can change, adapt, and reuse software code. You can ensure software is maintainable by using good coding practices and complying with appropriate software architecture patterns.
What are Software Quality Metrics?
Metrics for software quality are concerned with the impact of project and process parameters on the quality of the end product. What happened in the project, what process was used and what were its metrics, and how did all this contribute to the quality experienced by end users?
We identify four types of quality metrics: agile process metrics, used in agile development environments; production metrics, which measure how much effort is needed to produce software and how it runs in production; security response metrics; and, most importantly, a direct measure of customer satisfaction.
Top Software Quality Metrics
1. Agile Process Metrics
Agile process metrics pay particular attention to how agile teams plan, work, and draw conclusions. These metrics offer a high-level understanding of the quality of software development processes.
- Lead time—measures the time a team takes to turn customer requests into working software. When a team minimizes lead time, the development process becomes more effective.
- Cycle time—measures the time it takes to modify the software and actually deploy specific changes to production.
- Agile velocity—measures the number of software units a team finishes in a sprint or iteration. Velocity is used to estimate how long it will take to complete a given number of story points and to plan future work.
- Open/close rates—measured by tracking production issues that arise during a given time period, and how quickly they are resolved.
- Code quality—there are some quantitative measures of code quality, but they are not considered to be accurate. The best way to measure code quality is to conduct a qualitative peer review of the code. Code quality is tricky to measure, but has been shown to have a dramatic effect on software quality.
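The timing metrics above are straightforward to compute once you have per-story timestamps. The sketch below, with entirely hypothetical data and field names, shows one way to derive cycle time and velocity for a sprint:

```python
from datetime import datetime

# Hypothetical sprint data: each completed story records when work began,
# when the change was deployed, and its estimate in story points.
completed_stories = [
    {"started": datetime(2024, 3, 1), "deployed": datetime(2024, 3, 4), "points": 3},
    {"started": datetime(2024, 3, 2), "deployed": datetime(2024, 3, 7), "points": 5},
    {"started": datetime(2024, 3, 5), "deployed": datetime(2024, 3, 8), "points": 2},
]

def average_cycle_time_days(stories):
    """Mean time, in days, from starting work on a change to deploying it."""
    total_days = sum((s["deployed"] - s["started"]).days for s in stories)
    return total_days / len(stories)

def velocity(stories):
    """Story points completed in the sprint."""
    return sum(s["points"] for s in stories)

print(average_cycle_time_days(completed_stories))  # (3 + 5 + 3) / 3 days
print(velocity(completed_stories))                 # 10 points
```

Lead time is computed the same way, but starting from the timestamp of the customer request rather than the start of development.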
2. Production Metrics
These types of metrics measure how much work is done and determine the efficiency of software development teams.
- Active days—measures the days on which a software developer actually contributes code to the software development project, excluding time spent on administration and planning.
- Assignment scope—the volume of code that a programmer can sustain and support in 1 year.
- Efficiency—calculates the amount of net new code a software developer creates, taking into account churn.
- Code churn—refers to the number of lines of code that were added, changed or deleted in a given time period. Code churn was traditionally considered to be negative, but in newer methodologies like Test Driven Development (TDD) high churn can be positive.
- Mean time between failures (MTBF) and mean time to recover/repair (MTTR)—measure how the software functions in a production environment.
- Application crash rate (ACR)—this is calculated by dividing the number of times an application fails (F) by the number of times it is used (U). ACR = F/U
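Several of these production metrics reduce to simple arithmetic over data you can pull from version control and monitoring. The sketch below uses made-up numbers and illustrative variable names, and applies one common definition of efficiency (net new code divided by total churn):

```python
# Hypothetical per-period stats; in practice these would come from
# version-control history and application monitoring.
lines_added, lines_changed, lines_deleted = 1200, 300, 500
times_used, failures = 40_000, 12

# Code churn: total lines of code added, changed, or deleted in the period.
churn = lines_added + lines_changed + lines_deleted

# Efficiency (one common definition): net new code relative to total churn.
net_new = lines_added - lines_deleted
efficiency = net_new / churn

# Application crash rate: ACR = F / U.
acr = failures / times_used

print(churn)       # 2000
print(efficiency)  # 0.35
print(acr)         # 0.0003
```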
3. Security Response Metrics
Security metrics are also a measure of software quality. Monitor these metrics over time to see how security, operations and development teams are responding to security issues, per application supported. Applications that have weaker security metrics may have underlying quality issues.
- Endpoint incidents—how many devices have been infected by a virus or other security threat in a specific period of time.
- Mean time to repair (MTTR)—the time between identification of a security incident and remediation.
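MTTR for security incidents can be computed directly from incident timestamps. A minimal sketch, assuming hypothetical identification and remediation times:

```python
from datetime import datetime

# Hypothetical incidents: (identified, remediated) timestamp pairs.
incidents = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 17, 0)),   # 8 hours
    (datetime(2024, 5, 3, 10, 0), datetime(2024, 5, 4, 10, 0)),  # 24 hours
]

def mttr_hours(incidents):
    """Mean time to repair: average hours from identification to remediation."""
    total_seconds = sum((fixed - found).total_seconds() for found, fixed in incidents)
    return total_seconds / len(incidents) / 3600

print(mttr_hours(incidents))  # (8 + 24) / 2 = 16.0 hours
```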
4. Customer Satisfaction
Customer surveys are used to determine the overall quality of the product. There are numerous ways to measure customer satisfaction; one widely employed metric is the Net Promoter Score (NPS). Scores range from -100 (indicating no customers will refer you to others) to +100 (all customers are likely to refer you to others).
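NPS is derived from the standard 0–10 survey question "How likely are you to recommend this product?": respondents scoring 9–10 are promoters, 0–6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch with made-up responses:

```python
# Hypothetical survey responses on the standard 0-10 NPS scale.
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]

def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100..+100 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(net_promoter_score(responses))  # 5 promoters, 3 detractors -> 20.0
```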
Software Quality Intelligence: Towards a Holistic Measurement of Software Quality
We presented numerous measures of software quality, but none of them provide a holistic picture of software quality. Development teams need clear, actionable information about which parts of their products suffer from quality issues, and what they should do to improve.
A new category of tools called Software Quality Intelligence provides just that. For example, SeaLights is a quality intelligence platform that provides a visualization of test gaps. It analyzes usage of product features in production, code changes, and tests executed on specific functionality. Using this data, it can determine which parts of your product are at quality risk—areas that are frequently used in production, have undergone recent changes, and are not sufficiently covered by tests.
These types of insights can help you identify with precision exactly which parts of your product suffer from quality issues, and prioritize work in your sprints to close quality gaps.
Request a live demo to see how SeaLights can help you visualize and improve software quality.