The Need for Velocity with Eran Sher



Bar Kofman, Digital Marketing Manager | November 13, 2018

Accurate software analytics are essential in today’s fast-paced software development and testing world. How can you make sound decisions without a full picture of your testing and quality gaps?

Joe Colantonio interviewed Eran Sher, CEO and co-founder of SeaLights, on episode 224 of Test Talks, where they discussed software quality analytics and why they are vital for increasing a team’s velocity and efficiency. Eran provided actionable tips on tracking and optimizing testing efforts to increase velocity without sacrificing quality.

Joe Colantonio: Welcome back to Test Talks. Today, I’d like to talk about Software Quality Intelligence. You were on the show about a year ago in episode 147. What have you been up to, and have there been any major changes to SeaLights since then?

Eran Sher: It has been a very interesting year. We spoke with hundreds of leaders in software engineering, quality assurance, and DevOps. The main message from the market is clear: there is a need for velocity. Software development is changing, and most of the changes are around development velocity and testing velocity. This imposes many new challenges on software development teams and on quality assurance teams.

Joe Colantonio: Along those lines, were there any features you’ve added to the product where you thought, “Wow, there’s a real need for this,” or anything you needed to tweak to fulfill the requests you’re getting around velocity?

Eran Sher: We added two new features to our product in response to new challenges in the market. One of the main concerns is a lack of efficiency: too many unnecessary tests are being developed. Developing, executing, and maintaining those unnecessary tests is a massive waste of a team’s time, money, and resources. The reason for this is that there is no clear data or report that can show you where your gaps are. In other words, there are no data analytics tools that can analyze all of your builds and all of your tests–Unit, Integration, Component, API, Manual, Automated, etc.–and then provide a full picture with the necessary data: where your testing gaps are, where your quality gaps are, where you need to develop tests, and where you don’t. This capability is called Test Gap Analytics.

Joe Colantonio: Before we go into Test Gap Analytics, I should ask: what is Software Quality Intelligence? That’s a term I’ve been seeing on your website. Is that a different piece of the software, or a higher level that also includes Test Gap Analytics?

Eran Sher: It’s a higher level that also includes Test Gap Analytics. Software Quality Intelligence is a capability that we developed at SeaLights. The reality is that software development is becoming so fast and so widely distributed that new challenges are constantly arising. The first challenge is that it is very hard to determine your software quality. The second is that it is very hard to determine which tests you need to execute, which is why people inefficiently execute all their tests all the time. This creates a bottleneck, because you keep adding new tests, especially in a continuous integration/continuous delivery environment. The third is that it’s very hard to determine which tests you need to develop and which tests you don’t.

The concept of Quality Intelligence is the ability to use data analytics–both real-time and offline–to answer those questions. It’s a platform that automatically generates and collects different data sources to provide you with a real-time view of your testing gaps–where you need to develop tests and where you don’t–and analyzes which tests you need to execute according to the code changes in each build. Through data analytics, the platform helps you determine the quality of your release by comparing it to previous releases and looking at the metrics over time.

Basically, a Quality Intelligence Platform provides software teams with software quality analytics and insights. There are two major planning processes in organizations today. The first one is Sprint planning. About 99% of the people we spoke with said that Sprint planning is mainly focused on defining the user stories for the coming Sprint. Therefore, the planning of the quality activities or the test development activities is pretty straightforward: the automation engineers or the developers who need to develop the functional tests and the integration tests–not just the unit tests–are told to develop tests according to those user stories. The “definition of done” is that those new tests for those user stories pass successfully.

However, we at SeaLights found that there is no way to actually verify that those tests were developed properly, that they cover the code changes made as part of the Sprint, or whether they are really testing those changes at all. Software teams don’t have this type of visibility.

The second planning process is a vital one that no one is actually overseeing: handling the quality technical debt. Everyone knows that they have testing gaps, but they don’t know exactly where those gaps are. Once we introduce Test Gap Analytics, the scrum master, the product owner, the dev team leader, and the test architect can all be part of the Sprint planning. They can use the Test Gap Analytics report to understand the high-risk gaps, and then decide whether they’re going to develop specific tests in that specific Sprint to start closing the gap.

Joe Colantonio: I have a lot of Sprint teams–8 to 10 of them–and many times, even when they’re working on the same features, they don’t know how another team’s changes are going to affect the part they’re working on. Is this something that would help people who have requirements, and tests for those requirements, but may not know what they don’t know? For example, a lot of times when you’re writing code, you don’t realize that another piece was touched that you didn’t plan for, even though you think you’re actually testing the requirements that you had ahead of time.

Eran Sher: It does. When you generate the Test Gap Analytics, you get two data points: you can see the last Sprint’s end result, and you can see all the code changes that happened in that Sprint and what was not tested. That is the retrospective you’re talking about; now the data is visible, and you can use what you’ve found to plan the next Sprint.

Joe Colantonio: What are some other challenges you’ve seen when people try to implement Software Quality Intelligence?

Eran Sher: The first challenge, as I mentioned, is understanding the gaps. The second challenge is actually a change in mindset: we’re starting to see that development team leaders and software engineering team leaders are being tasked with quality assessment. Software engineering team leaders don’t really know what that is, because they don’t come from a software quality assurance background; they come from a software development background, and now they need to manage quality automation engineers. It’s extremely difficult for them to validate the code, the test results, or the tests themselves–that is, whether they’re actually testing the code that is being changed. Therefore, this is another major change in how Sprint planning and the retrospective are actually done. If the data is now accessible and visible, software teams can understand how to better plan their testing activities, as well as how to validate that they were done properly.

Joe Colantonio: So this is useful when a team is assessing how good their tests are, or their overall software test quality?

Eran Sher: Yes, because now everything is moving very, very fast. We have lots of code changes done by different people, across different microservices. Velocity creates new challenges, and you need the data in order to better plan your tests. Think about how tests are planned today: planning starts from a specification or requirement. You start writing or developing the tests according to the functional spec, but after the test is done, a week later, the code of the tested application can have changed dozens of times. Is your test still valid? You don’t know. You need data to help you validate it.

Joe Colantonio: How does this help testing velocity? Does it tell you that you’re wasting time running these tests, or that this test has no value, so that you’re able to have a targeted set of tests that really are testing the changes being checked in by your developers?

Eran Sher: Yes, that’s one capability, which we call Test Gap Analytics. It helps you analyze code changes in a specific release, and then match or select which tests are suitable or have a high probability of finding bugs according to the code change. Imagine a tool that can analyze all your tests and let you know what’s not tested. For example, say you ran hundreds of thousands of unit tests, automated tests, manual tests, component tests, and so on. After all those test executions, what’s still not tested across all the tests? If software teams can get this data and correlate it with all the code changes that happened in the past month, or in the past two weeks, they will get a view of all the code changes and see which code is not covered by any of the test cases. By using this data, teams will be able to develop tests according to the most recent and relevant code changes.
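To make the core of that correlation concrete, here is a minimal, hypothetical sketch in Python (not SeaLights’ implementation): the gap is simply the set of recently changed lines that no test execution touched. The input shapes and file names are assumptions for illustration.

```python
from typing import Dict, Set

def test_gap(changed_lines: Dict[str, Set[int]],
             covered_lines: Dict[str, Set[int]]) -> Dict[str, Set[int]]:
    """Return, per file, the recently changed lines that no test executed."""
    gap = {}
    for path, changed in changed_lines.items():
        uncovered = changed - covered_lines.get(path, set())
        if uncovered:
            gap[path] = uncovered
    return gap

# Hypothetical inputs: lines touched in the last sprint (e.g. from a diff)
# and lines hit across unit, API, manual, and end-to-end test runs.
changed = {"billing/invoice.py": {10, 11, 42}, "api/orders.py": {7}}
covered = {"billing/invoice.py": {10, 11}}

print(test_gap(changed, covered))  # {'billing/invoice.py': {42}, 'api/orders.py': {7}}
```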

Additionally, R&D and QA managers gain two main things from this data. The first is that they can now see if and where there are untested code changes, and they will know right away that those increase the chances of a quality issue. Therefore, if a team develops tests in those areas, they are immediately going to increase their quality, or decrease their quality risks. The second is that R&D and QA managers now have a way to direct all of their high-level tests: regression tests, integration tests, end to end tests, and functional tests. If they can direct those tests to areas that their developers are changing frequently, then they are building a very robust regression safety net, because those integration tests are now focused on the areas their developers change most often. And once they have integration tests for those areas, they are allowing their developers to change the code and deploy very rapidly to production, because their integration tests and end to end tests now cover that–and that’s what enables velocity.

Joe Colantonio: Awesome. A lot of times, even when you have good dashboards, it’s hard to get the team to pay attention to them. So, with this type of solution, how does all this information bubble up? Do you have to do a lot of analysis, or are you able to look at a certain tab to know whether you need more integration tests for a certain component?

Eran Sher: SeaLights developed a dashboard that shows all the builds, components, and microservices, so managers can look at the last build and see exactly how it was tested and what its status is. For Sprint planning, SeaLights developed a simple report that shows all the gaps that exist. The report takes around 10 to 15 minutes to review. Software managers can review the report during Sprint planning with their team, so everyone can understand their gaps, and then, based on that, they can decide which tests they need to develop for the next Sprint. Through Test Gap Analytics, within a few Sprints, managers will be able to close those important and risky gaps, enabling their team to move faster and more efficiently.

Today, people look at quality and try to tie it to a coverage level. For example, they say “my unit test coverage needs to be at 60%” or “I want to see my coverage increase from 40% to 80%,” and they treat that as an indication of quality. Generally speaking, it is an indication, but we all know that coverage is not the most important metric. The way SeaLights approaches this is that you can be at 30% code coverage but still have high quality. How? Because you might have that 30% code coverage on the most important, high-risk areas. With that approach, SeaLights customers are changing their mindset from looking only at coverage to understanding how to keep their testing technical debt and quality technical debt at a low level of risk. The key is not to close the gap entirely, but to maintain it at low risk; that way software teams can be more efficient.

Joe Colantonio: Awesome. That’s a great point, because a lot of times coverage will still exist even if the code is dead or unused in the product, and it still shows as being covered. So, does this let you know that you may have a lot of code coverage, but it happens to be in dead code or in features that users don’t even use?

Eran Sher: Yes, the Test Gap Analytics report shows the current test gap areas and correlates them with the recent code changes. This way you can see if you have tests in code areas that are not changing at all, or if you don’t have tests in code areas that have recently changed. The report can also show you if your critical files, core calculations, or APIs have no coverage at all, if you have gaps in your end to end tests, or gaps in your component tests. You can slice and dice the data for each test stage.

Joe Colantonio: Cool. So, I think we may have covered this in episode 147, but for the folks that haven’t heard…you mentioned end to end testing a few times, and most people are familiar with code coverage for unit tests. But could you tell us again how you actually get coverage for UI functional tests? Is that more of a model that someone needs to come up with beforehand? Or is it actually smart enough to know that you have these Selenium tests mapped to this piece of the application?

Eran Sher: It’s not functional spec coverage. It’s not coverage in the way that QA teams or testers are used to. Everyone is moving and transitioning to automation; QA engineers today are programmers, and what we provide them is test code coverage. So, when you’re running end to end tests, UI tests, and functional tests–most of which can start in the UI and then go to the backend and across microservices–we know how to capture the actual code that was executed. Similar to how developers today know their unit test coverage, we developed software that shows every QA and automation engineer the coverage of all their tests. Once it’s deployed and testing is running in a complex environment–across microservices, distributed servers, and other services–you can get the actual code coverage, for both automated and manual tests.
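The underlying principle can be illustrated with an off-the-shelf tool. This is a rough, generic sketch using Python’s coverage.py (not the SeaLights agent): if the application code runs under a coverage recorder, then any test that drives it–even a UI or manual test–produces line-level code coverage. The backend function and the “test” below are stand-ins for a real service and a real UI suite.

```python
import coverage

cov = coverage.Coverage()   # generic coverage recorder; tracing begins at start()
cov.start()

# Stand-in for backend code that an end-to-end, UI, or manual test would exercise.
def apply_discount(total, code):
    if code == "VIP":
        return total * 0.8
    return total

# Stand-in for a functional/UI test driving that code path from the outside.
assert apply_discount(100, "VIP") == 80

cov.stop()
cov.save()
cov.report(show_missing=True)  # shows which lines the "test" hit and which it missed
```

In a real deployment, the recorder would be attached to each backend service while Selenium or manual tests run against the UI, and the per-service coverage would be combined afterward.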

Joe Colantonio: Cool. So, I know most people listening are probably very familiar with their CI systems and the statistics they report about unit test coverage, pass/fail rates, and all that type of information, but where I see a gap is getting that same information for manual tests or functional tests–especially functional Selenium tests. So, does your technology also work with functional tests using technologies like Selenium?

Eran Sher: Yes, the technology that we developed allows developers, QA engineers, and automation engineers to see their actual code coverage for any test, not just unit tests. Think about having the ability to see your code coverage for any test that is running–functional tests, component tests, performance tests, and even manual tests–no matter how complex the deployment or the configuration. You may have a complex deployment that runs a test through a variety of microservices and distributed servers, but with our technology, you will know your actual code coverage across all your components.

Joe Colantonio: Awesome. The machines are instrumented and collecting data–does the platform do anything else besides coverage? Performance metrics, or any other functionality that’s included that people may not be aware of?

Eran Sher: Today, if you want to know what your tests are covering, you need to run all your tests on a single build, and then you will understand what you’re covering or not covering.

SeaLights collects lots of data, including test statuses, test duration, and code changes. We collect this data over time to show smart correlations and trend analyses. We also correlate the test data and the coverage data with information about the code changes.

Our technology allows you to collect this data over multiple builds. For example, you run a unit test on your first build. You may then run a manual test on the second build. You run a unit test on the third build, but the build fails, so only half of the tests have been executed. And then you run a functional test on the fourth build. We know how to read all of this data and, over a period of time, we understand exactly what you’re testing and what you’re not testing, without making any special changes to your processes.
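A minimal, hypothetical sketch of that accumulation idea follows (the data model is an assumption, not the product’s actual mechanics): coverage gathered on different builds is merged over time, while coverage recorded for lines that later changed is discarded, since those lines need to be re-tested.

```python
from typing import Dict, List, Set, Tuple

CoverageMap = Dict[str, Set[int]]  # file path -> line numbers

def accumulate(builds: List[Tuple[CoverageMap, CoverageMap]]) -> CoverageMap:
    """builds: ordered (covered_lines, changed_lines) pairs, one per build."""
    known: CoverageMap = {}
    for covered, changed in builds:
        # A line changed in this build loses any coverage recorded by earlier builds.
        for path, lines in changed.items():
            known[path] = known.get(path, set()) - lines
        # Lines hit by this build's tests (unit, manual, functional...) are added.
        for path, lines in covered.items():
            known[path] = known.get(path, set()) | lines
    return known

builds = [
    ({"svc.py": {1, 2, 3}}, {}),         # build 1: unit tests cover lines 1-3
    ({"svc.py": {5}}, {"svc.py": {2}}),  # build 2: line 2 changed, a manual test hits line 5
]
print(accumulate(builds))                # {'svc.py': {1, 3, 5}}
```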

Joe Colantonio: Awesome. So, as I was researching, I noticed the phrase the “test less” approach. It sounds like you’re testing less, but you’re actually testing more–the more critical things. Is that what the “test less” approach is all about?

Eran Sher: Exactly. This approach means that you don’t need to develop too many tests; you need to develop the right amount of tests. And if you develop the right amount of tests, it means you’re developing fewer tests, but you are actually testing more. When you develop fewer tests, you’re developing much faster, you have fewer tests to execute–so you’re improving the execution time and the testing cycle time–and you have fewer tests to maintain. The one condition is that you’re writing fewer tests, but they have a higher impact.

Joe Colantonio: I think that’s awesome, because a lot of times I’ve also seen teams whose test cases keep growing and growing and growing, and they never prune them. So, it sounds like using this technology you could say, okay, maybe two years ago these tests were valid, but now they’re not adding any value, so we can just get rid of them.

Like I said, you were on last year and since then you have spoken to a lot of different users and developers. Is there one core thing you learned during that year that made you change the direction, or is it just that you kept hearing the same thing over and over again about these test quality gaps and that’s what made you start focusing more on that aspect?

Eran Sher: We kept hearing about the quality gaps. Originally, we were focused on real-time release quality analytics, and then we understood that there was a big need for data analytics that would help development teams and quality assurance teams with their planning activities. This is the big addition: data analytics that help teams with Sprint planning, quality technical debt planning, integration and end to end test development planning, and so on.

Joe Colantonio: What do you think the new role will be for software quality managers as we go forward with AI/Machine Learning technology, where we’re getting all these quality analytics and we need to make decisions based on them?

Eran Sher: It’s a good question. Actually, we’ve seen in a lot of organizations that the QA manager is transitioning into a new role. Automation engineers are now being managed by software development managers, and the QA manager is responsible for standards. We think that by using data analytics, the role of the QA manager can be improved. When QA managers use data analytics, they can direct their teams by telling them where the gaps are and how to be more efficient. They can figure out how to develop tests that create a regression safety net that allows developers to move faster.

So, using data analytics, a QA manager can make more of an impact in the organization and provide a quality infrastructure that will help development teams and automation teams to be more efficient with test development. They can help the teams develop the right tests for the actual important gaps, and use data analytics to assess the quality of the release. There is a new level of responsibility and a higher potential impact that QA managers can now make in their organizations.

Joe Colantonio: Yes, I really see it empowering QA managers and also helping them deliver faster. Where I work, we have a process called VNV, where our QA people put the build into a blessed environment and we run the tests. Some of the tests may fail, and then we need the developer teams to step out of their Sprint and start debugging why it’s failing, and a lot of times that test has nothing to do with that particular release. In this case you’re only running what matters, so when a test does fail, you know it’s high priority. Then the QA manager has real power to say, “we need the developers’ time because this is an actual issue.”

Eran Sher: Yes, absolutely. AI and Machine Learning test automation software, which is mainly focused on generating new tests, is great, but if you combine it with the software analytics data that we provide, then that test generation can be directed to only the right areas, as opposed to wasting time on too many tests that you don’t need. So you gain from both worlds.

Joe Colantonio: Eran, before we go, is there one piece of actionable advice you can give someone to help improve their Software Quality Intelligence? And can you let us know the best way to find out more about the solution?

Eran Sher: My best advice, not just for software quality, is to use data. It’s much easier to make decisions when you have the data–and you will make better decisions. For more information, you can also visit our website, www.sealights.io. There are a lot of resources there: white papers and webinars on quality in general, software development, test metrics, software quality metrics, and more.


SeaLights, a Quality Intelligence Platform used by leading software teams, provides a test gap analysis report that supplies important missing data. It is the best way to plan an effective sprint that simultaneously focuses on the development of new features while still addressing quality issues.

The test gap analysis report helps teams focus their testing efforts by highlighting high-risk code areas, including:

  1. Code that has recently changed
  2. Code that is not tested by any part of the product’s test suite (unit tests, integration tests, UI automation tests, etc)
  3. The most important code areas

By leveraging this report, teams can focus on developing integration tests only in the most important parts of the product and set smarter build-breaking criteria, significantly boosting sprint productivity.
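As one hedged illustration of what “smarter build-breaking criteria” could look like (the thresholds, paths, and gate logic here are assumptions, not a prescribed SeaLights configuration): instead of failing on a flat coverage percentage, a CI gate could fail only when recently changed code in designated high-risk areas is left untested.

```python
import sys

HIGH_RISK_PREFIXES = ("billing/", "auth/")   # hypothetical critical areas of the codebase
MAX_UNTESTED_CHANGED_LINES = 0               # no untested changes tolerated in those areas

def check_gate(test_gap):
    """test_gap: dict of file path -> set of changed-but-untested line numbers."""
    risky = {path: lines for path, lines in test_gap.items()
             if path.startswith(HIGH_RISK_PREFIXES) and lines}
    untested = sum(len(lines) for lines in risky.values())
    if untested > MAX_UNTESTED_CHANGED_LINES:
        print(f"Build gate failed: {untested} untested changed lines in high-risk files: {sorted(risky)}")
        return 1
    print("Build gate passed.")
    return 0

# Example: one untested change in billing code breaks the build; UI styling changes do not.
sys.exit(check_gate({"billing/invoice.py": {42}, "ui/theme.py": {7, 8}}))
```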

To learn more, read our white paper about the missing metric for software development teams.
