pytest-codspeed. This
extension will automatically enable the CodSpeed engine on your benchmarks and
allow reporting to CodSpeed.
Installation
First, install pytest-codspeed as a development dependency:
Usage
Creating benchmarks
In a nutshell, pytest-codspeed offers two approaches to creating performance
benchmarks that integrate seamlessly with your existing test suite.
Use @pytest.mark.benchmark to measure entire test functions automatically:
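For instance (a minimal sketch; the test name and workload are illustrative):

```python
import pytest


@pytest.mark.benchmark
def test_sum_range():
    # With pytest-codspeed enabled, the whole test function is measured.
    assert sum(range(1_000)) == 499_500
```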
Alternatively, use the benchmark fixture for precise control over what code gets measured:
See the pytest-codspeed documentation to explore advanced features, including
pedantic mode, benchmark options, and configuration parameters for fine-tuned
performance testing.
Testing the benchmarks locally
If you want to run the benchmark tests locally, you can use the --codspeed
pytest flag:
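For example, from your project root (the tests/ path is illustrative):

```shell
pytest tests/ --codspeed
```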
Running pytest-codspeed locally will not produce any performance reporting.
It’s only useful for making sure that your benchmarks are working as expected.
If you want to get performance reporting, you should run the benchmarks in
your CI.
Running the benchmarks in your CI
To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to automatically run benchmarks and warn you about regressions during development. Here is an example of a GitHub Actions workflow that runs the benchmarks and reports the results to CodSpeed on every push to the main branch and every
pull request:
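A sketch of such a workflow (action versions and the tests/ path are illustrative; the CODSPEED_TOKEN secret comes from your CodSpeed project settings):

```yaml
name: CodSpeed benchmarks

on:
  push:
    branches: [main]
  pull_request:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip install pytest pytest-codspeed
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: pytest tests/ --codspeed
          token: ${{ secrets.CODSPEED_TOKEN }}
```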
Recipes
Usage with uv
Install pytest-codspeed as a development dependency with uv:
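For example (assuming a uv-managed project):

```shell
uv add --dev pytest-codspeed
```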
Using actions/setup-python to install Python (and not uv) is
critical for tracing to work properly.
Running benchmarks in parallel
If your benchmarks are taking too much time to run under the CodSpeed action, you can run them in parallel to speed up the execution.
Running benchmarks in parallel CI jobs
To parallelize your benchmarks, you can use pytest-test-groups, a
pytest plugin that allows you to split your benchmark execution across several
CI jobs.
Install pytest-test-groups as a development dependency:
Update your CI workflow to run benchmarks shard by shard:
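A sketch of a matrix strategy (the group count of 4 and the action version are illustrative; --test-group-count and --test-group are the flags provided by pytest-test-groups):

```yaml
jobs:
  benchmarks:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        group: [1, 2, 3, 4]
    steps:
      # ... checkout, setup-python, and dependency installation steps ...
      - name: Run benchmark shard
        uses: CodSpeedHQ/action@v3
        with:
          run: pytest tests/ --codspeed --test-group-count 4 --test-group ${{ matrix.group }}
          token: ${{ secrets.CODSPEED_TOKEN }}
```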
Same benchmark with different variations
For now, you cannot run the same benchmarks several times within the same run.
If the same benchmark is run multiple times, you will receive the following
comment on your pull request:

Running benchmarks in parallel processes
If you cannot split your benchmarks across multiple CI jobs, you can split them across multiple processes in the same job. We only recommend this as an alternative to the parallel CI jobs setup.
pytest-codspeed is compatible with pytest-xdist, a pytest plugin
that distributes test execution across multiple processes. You can simply
enable the pytest-xdist plugin on top of pytest-codspeed. This will allow
you to run your benchmarks in parallel using multiple processes.
First, install pytest-xdist as a development dependency:
Then, you can run your benchmarks in parallel with the pytest-xdist flag:
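For example (the -n flag comes from pytest-xdist; "auto" spawns one worker per CPU, and the tests/ path is illustrative):

```shell
pytest tests/ --codspeed -n auto
```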
.github/workflows/codspeed.yml
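The referenced workflow file might look like the following sketch (action versions and paths are illustrative):

```yaml
name: CodSpeed benchmarks

on:
  push:
    branches: [main]
  pull_request:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest pytest-codspeed pytest-xdist
      - name: Run benchmarks in parallel processes
        uses: CodSpeedHQ/action@v3
        with:
          run: pytest tests/ --codspeed -n auto
          token: ${{ secrets.CODSPEED_TOKEN }}
```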
To combine measurement modes like simulation and memory, check out the
documentation on running multiple instruments
serially.
Usage with Nox
It’s possible to use pytest-codspeed with
Nox, a Python automation tool that allows
you to automate the execution of Python code across multiple environments.
Here is an example configuration file to run benchmarks with pytest-codspeed
using Nox:
noxfile.py
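A minimal sketch (the session name and Python versions are illustrative):

```python
import nox


@nox.session(python=["3.11", "3.12"])
def benchmarks(session: nox.Session) -> None:
    """Run the benchmark suite with pytest-codspeed in an isolated env."""
    session.install("pytest", "pytest-codspeed")
    session.run("pytest", "tests/", "--codspeed")
```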