Manual testing is considered a necessary evil by many QA managers, especially in the wake of test automation. We all want to automate, release better code faster, and beat our competition to market. But manual testing is anything but fast, and organizations are right to phase it out and automate whatever can be automated. That said, some tests cannot and should not be automated.
So if manual testing is here to stay, in one form or another, the question you need to be asking isn’t “How do I get rid of manual testing?” but “How do I turn manual testing into a value-added testing activity?”
In this post, I’ll walk you through manual testing, give you a peek under the hood of manual testing code coverage, and show you its advantages.
I Scream, You Scream, We All Hate Manual Testing
According to the latest World Quality Report (World Quality Report 2017-18 | Ninth Edition – Challenges in Test Automation, page 47), which interviewed thousands of influential managers, 48% of respondents, when asked “What are the technical challenges you experience in developing applications?”, cited too much reliance on manual testing. The main reason manual testing is viewed negatively by managers is speed. A manual test is considered a slow method of testing that can be run only once the tested application is stable, and even then only on a single permutation out of the desired test environment matrix. As a result, managers feel they are wasting expensive resources while progressing at a snail’s pace. One of the root causes is the inability to fully understand what manual tests actually contribute to application quality: What exactly were testers able to cover? Were different flows exercised, or were there footprint overlaps? In other words, managers cannot see how efficient and effective their teams’ manual testing activities are.
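That “footprint” question can be made concrete. Here’s a minimal sketch, using only the Python standard library, of how the lines a manual session executes could be recorded; real products (SeaLights included) use proper instrumentation, and the `checkout` function is just a made-up stand-in for application code a tester might exercise.

```python
import sys

# Minimal sketch: record which (file, line) pairs execute while a
# "manual session" runs. Illustrative only, not a real coverage tool.

footprint = set()

def trace_lines(frame, event, arg):
    if event == "line":
        footprint.add((frame.f_code.co_filename, frame.f_lineno))
    return trace_lines

def checkout(total, discount):
    # Toy application code a tester might exercise during a session
    if discount:
        total *= 0.9
    return round(total, 2)

sys.settrace(trace_lines)
checkout(100, discount=False)  # one manual "flow" through the app
sys.settrace(None)

print(len(footprint) > 0)  # → True: the session left a measurable footprint
```

Comparing two such footprints (one per tester session) immediately shows overlap and lines neither session reached.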
Time to Dump Manual Testing?
The TL;DR here is that manual testing cannot simply be dumped, but using it in exactly the same way as before and expecting different results is insane. (Fun fact: this para-quote isn’t actually by Einstein.) The premise of the “Manual Testing is Dead” argument is that the only reason we ever tested manually was that we couldn’t test automatically. With the shift to Docker, microservices, numerous form factors, continuous integration/delivery, and automated testing, the QA world changed dramatically, and many mistakenly jumped on this bandwagon. While it’s true that automated testing has eliminated, and will continue to eliminate, much of the repetitive, labor-intensive spectrum of manual testing, it quite simply cannot eliminate it entirely. To see why, you only need to think critically about when, where, and why you conduct manual testing.
There are two main reasons why you should still use manual testing:
- Automation will not achieve satisfactory results: This can happen for many reasons, such as technology challenges, lack of professionalism, or poor tool selection, or simply because, in the end, a human tester understands the end user better, has better cognitive abilities, and can make decisions based on risk assessment. So in some test cases, a human touch is still needed.
- Low return on investment: Any case, such as a fringe use case or a complex validation scenario, where automation is simply too expensive to be worth the effort compared to the potential benefit to the end user and the risk reduction gained through manual testing.
I recently read an interesting blog post by Steve Watson – part of ‘Musings of a Test Manager’ – that highlights why manual tests are still relevant by pointing out the disadvantages of automated tests: “An automated test is only as good as the person who wrote it, and only as up to date as when it was last maintained. An automated test cannot make allowances for something that has changed. It cannot stop part-way through and think ‘I wonder what happens if I click this button rather than following the process flow’. It cannot look at the number of steps and highlight that the application sucks from a usability perspective. It cannot point out that the color scheme is unreadable, or that the company logo is the wrong color/shape/size etc. It can only run the steps it has been coded to do and validate against what it has been told to check for. So a test may pass, as the application displays what was expected, but what if additional text is present that shouldn’t be there? The test would pass, and unless anyone manually tested that screen, it would go undetected.”
Lean, Mean, Manual Testing Code Coverage
Manual testing’s portion of the testing pie has definitely decreased over the years, but as I pointed out before, that in no way means it is less important. Now that increasing emphasis is placed on automated testing, it is critical that managers be able to see and understand how to use manual testing to enhance their existing automated test suite, and how to maximize value from these tests.
Our challenge at SeaLights was to build a manual testing code coverage tool that completes our End2End real-time software quality analytics platform and will:
- Not require a dedicated manual testing tool, but extract code coverage from any tool you already have, or from freestyle testing.
- Report not only the exact coverage your manual tests achieved, but also the impact and contribution of these tests to code coverage at the application code level.
- Be a non-intrusive, easy-to-use tool that reports its coverage analysis back both to the user/tester and to our Quality Dashboard, used by R&D and QA managers.
And that’s exactly what we did. Since the beginning of 2018, the SeaLights dashboard shows the aggregated coverage of all manual tests run per build, as well as separate information on each test, delivered both to the manager and to the tester.
With manual testing code coverage, managers can objectively measure the contribution of manual tests to quality efforts. You now have the data you need to improve effectiveness and efficiency by directing your testers precisely to high-risk code areas, uncovered code, and overlapping tests. With manual testing code coverage, manual testing goes from a necessary evil to a competitive edge.
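Once each manual test’s footprint is represented as a set of covered lines, the questions above (total build coverage, uncovered code, overlapping tests) reduce to simple set arithmetic. The sketch below uses made-up placeholder data, not real SeaLights output, to show the idea.

```python
# Hypothetical per-test footprints: sets of (file, line) covered by each
# manual test in a build. Placeholder data for illustration only.
test_footprints = {
    "login_flow":    {("app.py", 10), ("app.py", 11), ("auth.py", 5)},
    "checkout_flow": {("app.py", 10), ("cart.py", 3), ("cart.py", 4)},
}

# All executable lines in the build (again, a toy stand-in).
all_lines = {("app.py", 10), ("app.py", 11), ("app.py", 12),
             ("auth.py", 5), ("cart.py", 3), ("cart.py", 4)}

covered = set().union(*test_footprints.values())
uncovered = all_lines - covered                        # where to direct testers next
overlap = set.intersection(*test_footprints.values())  # redundant footprint

print(f"build coverage: {len(covered)}/{len(all_lines)}")  # → build coverage: 5/6
print(f"uncovered: {sorted(uncovered)}")
print(f"overlapping: {sorted(overlap)}")
```

Aggregated per build, this is exactly the kind of data that lets a manager see whether two testers exercised different flows or duplicated each other’s work.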