End-to-end visibility into every aspect of your testing.
If there’s one thing the market demands of both software development and QA managers, it’s velocity. But with the myriad code changes that come with shorter and shorter sprints comes a rising technical debt—that trade-off between developing rapidly and enforcing quality. That’s where software testing comes in: to ensure that new and changed code doesn’t negatively impact the reliability and production integrity of the applications it touches.
Define and enforce your own quality standard.
With Release Quality Analysis Dashboard, you can define quality gates with specific conditions such as coverage, modified code coverage, number of quality risks, and whether any test stages failed. No more digging through test results to determine if your last build was actually good enough to promote to production. By setting up your own quality standards, you determine what constitutes success and failure, allowing you to block potential bugs from making it into production and enforce the level of quality your users expect.
Based on your quality gates and all the data SeaLights collects on your code, test coverage, and production usage, the Dashboard’s pass/fail status provides a true go/no-go decision for every build.
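To illustrate how such a gate works in principle, here is a minimal sketch in Python. The condition names and thresholds are assumptions chosen for the example; they do not reflect SeaLights’ actual configuration format or API.

```python
from dataclasses import dataclass

@dataclass
class BuildMetrics:
    """Illustrative metrics a quality gate might evaluate for one build."""
    overall_coverage: float        # percent of code exercised by all test stages
    modified_code_coverage: float  # percent of changed code exercised by tests
    quality_risks: int             # changed-but-untested areas flagged for this build
    failed_stages: int             # test stages (unit, integration, e2e...) that failed

@dataclass
class QualityGate:
    """A hypothetical gate: every condition must hold for the build to pass."""
    min_overall_coverage: float = 70.0
    min_modified_code_coverage: float = 85.0
    max_quality_risks: int = 0
    allow_failed_stages: bool = False

    def evaluate(self, m: BuildMetrics) -> bool:
        return (
            m.overall_coverage >= self.min_overall_coverage
            and m.modified_code_coverage >= self.min_modified_code_coverage
            and m.quality_risks <= self.max_quality_risks
            and (self.allow_failed_stages or m.failed_stages == 0)
        )

# Example: a build with decent overall coverage but poorly tested changed code fails the gate.
gate = QualityGate()
build = BuildMetrics(overall_coverage=78.4, modified_code_coverage=62.0,
                     quality_risks=3, failed_stages=0)
print("PASS" if gate.evaluate(build) else "FAIL")  # -> FAIL
```

The pass/fail shown on the Dashboard plays the role of that final decision, so promotion to production can be blocked automatically whenever any condition is not met.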
Reduce costly production incidents.
Software production issues are expensive. Every uncaught bug that makes it into production—especially one in frequently executed code—is likely to generate a support incident. Not only does it cost money for helpdesk support; it also forces developers to stop working on the current sprint to investigate and fix the problem, slowing your team’s velocity. And production incidents that affect functionality and the customer experience can cost you even more than maintenance: they can damage your organization’s reputation with its users.
Release Quality Analysis Dashboard and its quality gates reduce the number and cost of production issues due to changed code by flagging each build as pass or fail.
Compares your current build to the last production-quality one.
How does the quality risk in your current build compare with that of the last build you promoted to production? For each build, Release Quality Analysis Dashboard displays a reference build—your last deployment to production—and the number of quality risks due to code changes it has identified since that reference build.
The reference build provides a checkpoint against which the current quality risk analysis is measured.
Drills down on quality risk for any type of test.
As with all SeaLights Quality Intelligence modules, Release Quality Analysis Dashboard works with virtually any type of test, from unit tests to component, regression, UI, integration, and end-to-end tests. It will even analyze the effectiveness of manual tests.
The Dashboard allows you to drill down on any build for additional details by test stage/type. Simply click into the build to see the results for each test stage, including the test duration, total number of tests in that stage, number of failed and skipped tests, and the number of quality risks identified for the stage. For each stage, it also displays the percentage of overall code coverage and the current status of the quality gate.
Finally, the Dashboard calculates an aggregate coverage and quality risk analysis for the build, which accounts for all test types and their overlaps, to give an end-to-end picture of the entire pipeline.
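To show why overlaps matter when aggregating, here is a small illustrative sketch. The method names and stage contents are invented, and the approach shown (taking the union of what the stages exercise) is one natural way to account for overlap, not a description of SeaLights’ internal calculation.

```python
# Aggregate coverage across test stages is not a simple sum of per-stage percentages,
# because stages overlap. One way to account for overlap is to take the union of the
# code units (methods here) that each stage exercises.
unit_stage        = {"login.validate", "cart.add", "cart.total"}
integration_stage = {"cart.add", "checkout.pay"}
e2e_stage         = {"login.validate", "checkout.pay", "checkout.confirm"}

all_methods = {"login.validate", "cart.add", "cart.total",
               "checkout.pay", "checkout.confirm", "profile.update"}

covered = unit_stage | integration_stage | e2e_stage   # union removes double counting
aggregate_coverage = 100.0 * len(covered) / len(all_methods)
print(f"Aggregate coverage: {aggregate_coverage:.1f}%")  # 83.3%, not the sum of stage figures
```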
Analyzes your effective code coverage.
Most organizations significantly overestimate their test coverage, believing that their test suites cover 80 percent or more of their code base. Despite thousands of tests, most are surprised to learn their actual coverage is far lower than they imagine. But how would they really know?
For each build, Release Quality Analysis Dashboard performs a code coverage analysis. The Dashboard displays the percentage of code covered by tests and calculates a weighted average for overall code coverage. This gives you visibility into how effective your entire testing pipeline is at identifying failures that would otherwise lead to production issues—and lets you know where to concentrate your efforts.
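As a rough illustration of what a weighted overall coverage figure can mean, the sketch below weights each component’s coverage by its size; the component names, sizes, and percentages are purely hypothetical.

```python
# Hypothetical sketch of a weighted overall coverage figure: each component's coverage
# is weighted by its size (lines of code), so a large, poorly covered module pulls the
# overall number down more than a small one would.
components = [
    {"name": "frontend", "loc": 42_000, "coverage": 81.0},
    {"name": "backend",  "loc": 95_000, "coverage": 54.0},
    {"name": "agents",   "loc": 12_000, "coverage": 73.0},
]

total_loc = sum(c["loc"] for c in components)
weighted_coverage = sum(c["coverage"] * c["loc"] for c in components) / total_loc
print(f"Weighted overall coverage: {weighted_coverage:.1f}%")  # ~63.1%
```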
Validates your test results and quality enforcement.
Providers of software for high-risk or mission-critical applications are often required to prove that their software has passed stringent quality gates before deployment, to avoid potentially dangerous defects. Examples include software that runs on medical devices, provides 24/7 security monitoring, or controls aircraft in flight. These companies’ customers need to know that the software complies with the highest quality standards.
Release Quality Analysis Dashboard provides a full history of all builds, their quality risk ratings, test coverage, and the results of the quality gates set for each, as well as the date when each was (or was not) promoted to production.
Assists in transformations from manual to automated testing.
Many software organizations desperately want (and need) to automate more of their testing, but the truth is that they don’t know where to begin. They don’t really know which code is covered by manual tests today, nor (if they do) which of their existing manual tests should be automated first.
Release Quality Analysis Dashboard analyzes all your tests and code, then allows you to download a report showing all the code areas covered only by manual testing, right down to the method and when it was last changed. That way, software teams can implement and verify automated tests to replace their manual ones and phase those out over time.
Provides filters so each team sees only what it needs.
Even in small to medium-sized software organizations, different teams have different areas of responsibility. To be effective, each team needs to see only those quality risks within its own scope. Release Quality Analysis Dashboard lets you define team filters—such as agents, frontend, backend, integration, and so on—so each team can view and focus on only what is relevant to it. It allows you to define which applications and modules are of interest to which teams. (Even monolithic applications can be broken down into functional areas or components of interest to different teams.)
The Dashboard also allows you to filter the display by priority, code layers, functional areas, or any other tag you might want to define. That allows teams to focus on the quality risks in their area of scope, right down to the files involved.
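The sketch below is a hypothetical illustration of this kind of scoped view: quality risks carry tags, and a filter keeps only what is relevant to one team. The field names and values are invented for the example and are not SeaLights’ actual data model.

```python
# Hypothetical illustration of team-scoped filtering: each quality risk carries tags
# (team, code layer, priority), and a filter keeps only the risks in a team's own scope.
risks = [
    {"file": "ui/cart.tsx",        "team": "frontend", "layer": "ui",    "priority": "high"},
    {"file": "api/checkout.py",    "team": "backend",  "layer": "api",   "priority": "high"},
    {"file": "api/profile.py",     "team": "backend",  "layer": "api",   "priority": "low"},
    {"file": "agent/collector.go", "team": "agents",   "layer": "infra", "priority": "medium"},
]

def team_view(risks, team, min_priority=None):
    """Return only the quality risks relevant to one team, optionally filtered by priority."""
    order = {"low": 0, "medium": 1, "high": 2}
    scoped = [r for r in risks if r["team"] == team]
    if min_priority is not None:
        scoped = [r for r in scoped if order[r["priority"]] >= order[min_priority]]
    return scoped

print(team_view(risks, "backend", min_priority="high"))
# -> only api/checkout.py: the backend team sees just the high-priority risks in its scope
```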
Allows manual test definition right from the Dashboard.
As beneficial as automated testing can be, sometimes adding a quick, manual test is sufficient. Release Quality Analysis Dashboard provides a Chrome browser extension that allows users to create and name a manual test from within the browser, verify that test, and see its results appear in the test stage details. Users can also use the extension to create a variety of API tests from within the Dashboard.