Writing benchmarks in Python
Creating performance tests for pytest using pytest-codspeed
To integrate CodSpeed with your Python codebase, the simplest way is to use our pytest extension: pytest-codspeed. This extension will automatically enable the CodSpeed engine on your benchmarks and allow reporting to CodSpeed.
Creating benchmarks with pytest-codspeed is the same as with the pytest-benchmark API, so if you already have benchmarks written with it, you can start using CodSpeed right away.
Installation
First, install pytest-codspeed as a development dependency:
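For example, with pip (adapt the command to your package manager):

```sh
pip install pytest-codspeed
```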
Usage
Creating benchmarks
There are two ways to create benchmarks, as shown in the sketch below: marking a whole test function as a benchmark with the pytest.mark.benchmark marker, or benchmarking only selected lines of a test function with the benchmark fixture.
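Here is a minimal sketch of both approaches (the fibonacci function and the test names are illustrative):

```python
import pytest


def fibonacci(n: int) -> int:
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)


# Mark the whole test function as a benchmark: its entire body is measured.
@pytest.mark.benchmark
def test_fibonacci_marker():
    fibonacci(20)


# Use the benchmark fixture to measure only the wrapped call.
def test_fibonacci_fixture(benchmark):
    benchmark(fibonacci, 20)
```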
Testing the benchmarks locally
If you want to run the benchmark tests locally, you can use the --codspeed pytest flag:
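For example, assuming your benchmarks live under tests/:

```sh
pytest tests/ --codspeed
```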
Running pytest-codspeed locally will not produce any performance reporting. It's only useful for making sure that your benchmarks are working as expected. If you want to get performance reporting, you should run the benchmarks in your CI.
Running the benchmarks in your CI
To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to detect the CI environment and configure it properly.
If you want more details on how to configure the CodSpeed action, you can check out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and reports the results to CodSpeed on every push to the main branch and every pull request:
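A sketch of such a workflow using the CodSpeedHQ/action (action versions, the Python version, and the tests/ path are illustrative; the CODSPEED_TOKEN secret must be configured in your repository):

```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches: [main]
  pull_request:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip install pytest pytest-codspeed
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: pytest tests/ --codspeed
          token: ${{ secrets.CODSPEED_TOKEN }}
```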
Recipes
Usage with uv
Install pytest-codspeed as a development dependency with uv:
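For example:

```sh
uv add --dev pytest-codspeed
```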
Then add the following GitHub Actions workflow to run the benchmarks:
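A sketch of such a workflow, assuming the project dependencies are managed with uv (action versions and paths are illustrative):

```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches: [main]
  pull_request:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install Python with actions/setup-python (see the note below).
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - uses: astral-sh/setup-uv@v5
      - name: Install dependencies
        run: uv sync
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: uv run pytest tests/ --codspeed
          token: ${{ secrets.CODSPEED_TOKEN }}
```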
Using actions/setup-python to install Python, rather than installing it with uv, is critical for tracing to work properly.
Running benchmarks in parallel
If your benchmarks are taking too much time to run under the CodSpeed action, you can run them in parallel to speed up the execution.
Running benchmarks in parallel CI jobs
To parallelize your benchmarks, you can use pytest-test-groups, a pytest plugin that allows you to split your benchmark execution across several CI jobs.
Install pytest-test-groups as a development dependency:
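For example:

```sh
pip install pytest-test-groups
```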
Update your CI workflow to run benchmarks shard by shard:
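A sketch of the relevant job, splitting the benchmarks into 4 groups (the group count is arbitrary; pytest-test-groups exposes the --test-group-count and --test-group options):

```yaml
jobs:
  benchmarks:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Shard numbers must start at 1 (see the note below).
        group: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip install pytest pytest-codspeed pytest-test-groups
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: >
            pytest tests/ --codspeed
            --test-group-count 4
            --test-group ${{ matrix.group }}
          token: ${{ secrets.CODSPEED_TOKEN }}
```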
The shard number must start at 1. If you run with a shard number of 0, all the benchmarks will be run.
Same benchmark with different variations
For now, you cannot run the same benchmark several times within the same run. If the same benchmark is run multiple times, you will receive a comment on your pull request flagging the duplicate.
Learn more about benchmark sharding and how to integrate with your CI provider.
Running benchmarks in parallel processes
If you cannot split your benchmarks across multiple CI jobs, you can split them across multiple processes within the same job. We only recommend this as a fallback to the parallel CI jobs setup.
pytest-codspeed is compatible with pytest-xdist, a pytest plugin that distributes test execution across multiple processes. You can simply enable pytest-xdist on top of pytest-codspeed to run your benchmarks in parallel using multiple processes.
First, install pytest-xdist as a development dependency:
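For example:

```sh
pip install pytest-xdist
```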
Then, you can run your benchmarks in parallel with pytest-xdist's -n flag:
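For example, letting pytest-xdist choose the number of worker processes automatically:

```sh
pytest tests/ --codspeed -n auto
```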
The change in the CI workflow would look like this:
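Roughly, only the run command of the CodSpeed action step changes:

```yaml
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: pytest tests/ --codspeed -n auto
          token: ${{ secrets.CODSPEED_TOKEN }}
```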
Usage with Nox
It's possible to use pytest-codspeed with Nox, a Python automation tool that lets you automate the execution of Python code across multiple environments.
Here is an example configuration file to run benchmarks with pytest-codspeed using Nox:
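A minimal noxfile.py sketch (the session name and the tests/ path are illustrative):

```python
import nox


@nox.session
def benchmarks(session: nox.Session) -> None:
    # Install the project and the benchmark dependencies into the session venv.
    session.install(".", "pytest", "pytest-codspeed")
    # Run the benchmarks with the CodSpeed instrumentation enabled.
    session.run("pytest", "tests/", "--codspeed")
```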
You can then run the benchmarks:
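Assuming the session is named benchmarks as above:

```sh
nox -s benchmarks
```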
To use it with GitHub Actions, you can use the following workflow:
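A sketch of such a workflow, splitting the virtualenv installation from the benchmark run with nox's --install-only and --no-install flags (action versions and the session name are illustrative):

```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches: [main]
  pull_request:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install nox
        run: pip install nox
      # Install the session virtualenv without the instrumentation enabled.
      - name: Install benchmark dependencies
        run: nox -s benchmarks --install-only
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          # Reuse the virtualenv created above and skip re-installation.
          run: nox -s benchmarks --reuse-existing-virtualenvs --no-install
          token: ${{ secrets.CODSPEED_TOKEN }}
```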
Splitting the virtualenv installation and the execution of the benchmarks is optional, but it speeds up the benchmarks since the dependencies are installed and compiled without the instrumentation enabled.