I am writing integration tests for a project in which I am making HTTP calls and testing whether they were successful or not.
Since I am not importing any module and not calling functions directly coverage.py report for this is 0%.
I want to know how can I generate coverage report for such integration HTTP request tests?
In simple terms, code coverage measures which lines of code are actually executed at runtime, while integration tests exercise several connected pieces of an application together (here, an HTTP server and the client calling it). The two are not mutually exclusive: you can collect coverage while your integration tests drive the application over HTTP.
The coverage percentage is calculated with the following formula: Code Coverage Percentage = (number of lines of code executed by the test run / total number of lines of code in the component) * 100.
Test coverage is a technique that determines whether the test cases actually exercise the application code, and how much of it runs when those test cases execute. For example, if the tests cause 90 out of 100 lines to run, line coverage is 90%; coverage can also be tracked against requirements (e.g. 9 of 10 requirements having at least one test).
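Purely as an illustration of that formula, with made-up numbers:
# illustrative only: plugging made-up numbers into the formula above
executed_lines = 90
total_lines = 100
coverage_percentage = executed_lines / total_lines * 100
print(coverage_percentage)  # 90.0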
The recipe is pretty much this (the numbered comments in conftest.py below refer to these steps):
1. Start the backend server from the test code via coverage run in a separate process.
2. Run the tests against the live server.
3. Stop the server so its coverage data is written to disk.
4. Read that coverage data and merge it into the coverage data of the current test run.
Example:
Imagine you have a dummy backend server that responds with a "Hello World" page on GET requests:
# backend.py
from http.server import BaseHTTPRequestHandler, HTTPServer

class DummyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.end_headers()
        self.wfile.write('<html><body><h1>Hello World</h1></body></html>'.encode())

if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8000), DummyHandler).serve_forever()
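To sanity-check the server on its own, you can start it under coverage by hand; this is the same command the fixture below runs via subprocess:
$ coverage run backend.py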
A simple test that makes an HTTP request and verifies the response contains "Hello World":
# tests/test_server.py
import requests

def test_GET():
    resp = requests.get('http://127.0.0.1:8000')
    resp.raise_for_status()
    assert 'Hello World' in resp.text
# tests/conftest.py
import os
import signal
import subprocess
import time

import coverage.data
import pytest

@pytest.fixture(autouse=True)
def run_backend(cov):
    # 1. start the backend under coverage in its own process group,
    #    writing its data to a separate file
    env = os.environ.copy()
    env['COVERAGE_FILE'] = '.coverage.backend'
    serverproc = subprocess.Popen(['coverage', 'run', 'backend.py'], env=env,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE,
                                  preexec_fn=os.setsid)
    time.sleep(3)
    yield  # 2. run the test against the live server
    # 3. stop the server so coverage writes .coverage.backend
    serverproc.send_signal(signal.SIGINT)
    time.sleep(1)
    # 4. read the server's coverage data and merge it into the test run's data
    backendcov = coverage.data.CoverageData()
    with open('.coverage.backend') as fp:
        backendcov.read_fileobj(fp)
    cov.data.update(backendcov)
cov is the fixture provided by pytest-cov (docs).
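Note that coverage.data.CoverageData.read_fileobj() belongs to the coverage 4.x API. On coverage 5.x, where the data file is SQLite-based, the merge step might look more like this minimal sketch (same file name, same cov fixture; only the last four lines of the fixture change):
# sketch for coverage 5.x: read_fileobj() no longer exists
import coverage

backendcov = coverage.CoverageData(basename='.coverage.backend')
backendcov.read()                  # load the data written by the server process
cov.get_data().update(backendcov)  # merge it into the current test run's data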
Running the test adds the coverage of backend.py to the overall coverage, even though only the tests directory is selected for coverage via --cov=tests:
$ pytest --cov=tests --cov-report term -vs
=============================== test session starts ===============================
platform linux -- Python 3.6.5, pytest-3.4.1, py-1.5.3, pluggy-0.6.0 --
/data/gentoo64/usr/bin/python3.6
cachedir: .pytest_cache
rootdir: /data/gentoo64/home/u0_a82/projects/stackoverflow/so-50689940, inifile:
plugins: mock-1.6.3, cov-2.5.1
collected 1 item
tests/test_server.py::test_GET PASSED
----------- coverage: platform linux, python 3.6.5-final-0 -----------
Name Stmts Miss Cover
------------------------------------------
backend.py 12 0 100%
tests/conftest.py 18 0 100%
tests/test_server.py 5 0 100%
------------------------------------------
TOTAL 35 0 100%
============================ 1 passed in 5.09 seconds =============================
With Coverage 5.1, based on the "Measuring sub-processes" section of the coverage.py docs, you can measure code running in sub-processes by doing three things: set the COVERAGE_PROCESS_START environment variable, call coverage.process_startup() somewhere in your code, and set parallel = True in your .coveragerc.
Somewhere in your process, call this code:
import coverage
coverage.process_startup()
This can be done globally in sitecustomize.py, but in my case it was easy to add it to my application's __init__.py, where I added:
import os

if 'COVERAGE_PROCESS_START' in os.environ:
    import coverage
    coverage.process_startup()
Just to be safe, I added an additional check to this if statement (checking that MYAPP_COVERAGE_SUBPROCESS is also set).
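A sketch of what that stricter guard could look like; MYAPP_COVERAGE_SUBPROCESS is the author's own opt-in variable, not something coverage.py knows about:
# myapp/__init__.py -- only enable subprocess coverage when explicitly opted in
import os

if 'COVERAGE_PROCESS_START' in os.environ and 'MYAPP_COVERAGE_SUBPROCESS' in os.environ:
    import coverage
    coverage.process_startup()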
In your test case, set COVERAGE_PROCESS_START to the path of your .coveragerc file (or an empty string if you don't need this config), for example:
import os
import subprocess
import sys

env = os.environ.copy()
env['COVERAGE_PROCESS_START'] = '.coveragerc'
cmd = [sys.executable, 'run_my_app.py']
p = subprocess.Popen(cmd, env=env)
p.communicate()
assert p.returncode == 0  # ..etc
Finally, create a .coveragerc containing:
[run]
parallel = True
# which module to collect coverage for
source = myapp
This ensures the coverage data from each process goes to its own unique .coverage.* file, which pytest-cov appears to merge automatically (or which you can merge manually with coverage combine). It also tells coverage which modules to collect data for, since the --cov=myapp argument doesn't get passed on to child processes.
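If you prefer to do the merge yourself rather than relying on pytest-cov, the manual steps are the standard coverage.py commands:
$ coverage combine
$ coverage report -m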
To run your tests, just invoke pytest --cov=