Test Metrics.

The SeaLights test metrics guide
for better and faster CI/CD

Python Code Coverage

Python is one of the most popular programming languages in use today. Thanks to its general-purpose nature, Python is applied to a wide range of software development use cases, from simple scripting to web servers to frameworks like Django.

However, as with other programming languages, testing remains one of the most important aspects of software development in Python.

In this post, we’ll look at a few tools for writing test cases and measuring test coverage for Python code.


Unittest

The Python standard library ships with a module named unittest. Inspired by JUnit and other unit testing frameworks from major programming languages, unittest is a testing framework that lets you automate tests, share setup and shutdown code between tests, and more.

One of the important features of unittest is the test fixture, i.e. the preparation needed to run one or more tests, together with any associated cleanup actions. With a test fixture, activities like creating temporary or proxy databases and directories, or starting a server process, can be taken care of in a single location.
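As a minimal sketch of a test fixture (the class and test names here are illustrative, using a temporary directory as the shared resource):

```python
import os
import tempfile
import unittest

class TestWithFixture(unittest.TestCase):
    def setUp(self):
        # Runs before every test method: create a temporary directory
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        # Runs after every test method, even if the test failed
        os.rmdir(self.workdir)

    def test_workdir_exists(self):
        self.assertTrue(os.path.isdir(self.workdir))

# Run the test case programmatically so the fixture is exercised
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWithFixture)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because setUp and tearDown wrap every test method, each test gets a fresh directory and never depends on leftovers from a previous test.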

Let us take a few sample test cases and see how they are implemented using unittest:

import unittest

class TestStringMethods(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()
Create a test case by subclassing unittest.TestCase, then define individual tests as methods. Note that test method names should start with the word test; this naming convention informs the test runner which methods represent tests.

A test runner is the component that orchestrates the execution of tests and provides the outcome to the user. Its implementation varies: it may use a graphical interface, a textual interface, or return a special value to indicate the results of executing the tests.


Pytest

Pytest is a third-party Python testing framework. It aims to let you write your tests efficiently by removing the overhead of creating tests. Pytest supports Python 2.7, 3.4, 3.5 and 3.6, as well as Jython and PyPy-2.3, on Unix/POSIX and Windows.

Let’s take a look at how to get started with Pytest. First, download the latest version and install it via pip:

pip install -U pytest

You can check the installed version with:

pytest --version

Now, let’s take a sample function and related test case.

# content of test_sample.py
def func(x):
    return x + 1

def test_answer():
    assert func(3) == 5

The function func takes an input x and returns x + 1. In the test case, we assert that func returns 5 for the input 3. This test case is expected to fail. To run the test, simply type pytest in the directory that contains the file test_sample.py.

$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_sample.py F                                                     [100%]

================================= FAILURES =================================
_______________________________ test_answer ________________________________

def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)

test_sample.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
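Note how pytest’s output shows the introspected value (4) alongside the expected one. For comparison, here is the test once the expected value matches what func actually returns, so it passes:

```python
def func(x):
    return x + 1

def test_answer():
    # func(3) returns 4, so assert against 4
    assert func(3) == 4
```

Running pytest again over this version would report the test as passing.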


Coverage.py

Coverage.py is one of the most popular code coverage tools for Python. It uses the code analysis tools and tracing hooks provided in the Python standard library to measure coverage. It runs on major versions of CPython, PyPy, Jython and IronPython, and you can use it with both unittest and Pytest.
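To give a rough idea of the tracing hooks Coverage.py builds on (this is an illustrative sketch of the underlying mechanism, not Coverage.py’s actual implementation), the standard library’s sys.settrace lets you record which lines of a function execute:

```python
import sys

executed_lines = set()

def tracer(frame, event, arg):
    # Record the line number of every "line" event in traced frames
    if event == "line":
        executed_lines.add(frame.f_lineno)
    return tracer

def target(x):
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(tracer)   # install the tracing hook
target(5)              # only the "positive" branch runs
sys.settrace(None)     # remove the hook

# Two lines of target() executed; the "non-positive" line did not
print(len(executed_lines))  # 2
```

A coverage tool compares the recorded line numbers against the set of executable lines found by static analysis to compute the missing lines and the coverage percentage.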

You can either download the latest version of Coverage and install it manually, or you can install it with pip as:

$ pip install coverage

Now run your program with coverage:

$ coverage run my_program.py arg1 arg2

Next, to get the coverage data, execute:

$ coverage report -m

Here is a sample coverage data output:

$ coverage report -m
Name                      Stmts   Miss  Cover   Missing
my_program.py                20      4    80%   33-35, 39
my_other_module.py           56      6    89%   17-23
TOTAL                        76     10    87%

To generate the report as an HTML file:

$ coverage html

The HTML report generated by Coverage.py lists, for each module, its statements, missing and excluded lines, branches, partial branches and overall coverage.


Pytest-cov

Pytest-cov is a Pytest plugin for generating coverage reports. In addition to the functionality of the coverage command, it supports centralized and distributed testing. It also supports coverage of subprocesses.

Once you have written test cases as required with Pytest, you can use Pytest-cov to run all the tests and report the coverage.

To get started, install Pytest-cov as:

$ pip install pytest-cov
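As a concrete (hypothetical) example of what Pytest-cov measures, suppose a small module and its Pytest-style test; the file and function names below are illustrative:

```python
# calculator.py -- a hypothetical module under test
def add(a, b):
    return a + b

def sub(a, b):
    # Not exercised by the test below, so a coverage
    # report would flag these lines as missing
    return a - b

# test_calculator.py -- its Pytest-style test
def test_add():
    assert add(2, 3) == 5
```

Running pytest with coverage enabled for this module would show the lines of sub as uncovered, since only add is exercised by the test.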

Centralized testing

Centralized testing reports the combined coverage of the main process and all of its subprocesses. You can run centralized testing with:

$ py.test --cov= tests/

Distributed testing

For distributed testing support, install Pytest-xdist:

$ pip install pytest-xdist

Distributed testing can be done in two modes, with dist set to either load or each. When set to load, the combined coverage of all slaves is reported during distributed testing. The slaves may be spread out over any number of hosts, and each slave may be located anywhere on the file system. Each slave will also have its subprocesses measured.

For distributed testing in each mode, pytest-cov also reports the combined coverage of all slaves. Since every slave runs all the tests, this allows generating a combined coverage report for multiple environments.


In this post, we learned about various Python unit testing and code coverage frameworks. While these frameworks make testing easier, you still need to write the tests themselves. Here are a few best practices to consider while writing unit tests.

  • Use long, descriptive names. This often obviates the need for docstrings in test methods.
  • Tests should be isolated: don’t interact with a real database or network. Use a separate test database that gets torn down, or use mock objects.
  • Focus on one tiny bit of functionality.
  • Tests should be fast, but a slow test is better than no test.
  • It often makes sense to have one test case class for a single class or model.
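The isolation advice above can be followed with the standard library’s unittest.mock; the get_status function and URL below are hypothetical stand-ins for real network code:

```python
import unittest
from unittest import mock

def get_status(session):
    # Hypothetical code under test: in a real project this would
    # perform actual HTTP I/O through the session object
    return session.get("https://example.com/health").status_code

class TestGetStatus(unittest.TestCase):
    def test_get_status_without_network(self):
        # A mock stands in for the real session, so no network is touched
        fake_session = mock.Mock()
        fake_session.get.return_value.status_code = 200
        self.assertEqual(get_status(fake_session), 200)
        fake_session.get.assert_called_once_with("https://example.com/health")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGetStatus)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the mock records how it was called, the test also verifies the interaction (the URL requested) without ever leaving the process.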