To integrate CodSpeed with your Python codebase, the simplest way is to use
pytest-codspeed. This extension automatically enables the CodSpeed engine on
your benchmarks and allows reporting to CodSpeed.
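For example, assuming you manage your project with uv (a plain `pip install
pytest-codspeed` works just as well), you can add it as a development
dependency:

```bash
uv add --dev pytest-codspeed
```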
Creating benchmarks with pytest-codspeed is backward compatible with the
pytest-benchmark API. So if you already have benchmarks written with it, you
can start using CodSpeed right away!
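For instance, a test already written against the pytest-benchmark fixture API,
like the hypothetical one below, runs unchanged under pytest-codspeed:

```python
# A pre-existing pytest-benchmark style test: the `benchmark` fixture calls
# the function with the given arguments and measures it, no changes needed.
def test_fibonacci(benchmark):
    def fibonacci(n: int) -> int:
        return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

    result = benchmark(fibonacci, 10)
    assert result == 55
```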
In a nutshell, pytest-codspeed offers two approaches to create performance
benchmarks that integrate seamlessly with your existing test suite.

Use @pytest.mark.benchmark to measure entire test functions automatically:
```python
import pytest


@pytest.mark.benchmark
def test_sum_of_squares_performance():
    data = [1, 2, 3, 4, 5]
    output = sum(i**2 for i in data)
    assert output == 55
```
Since this measures the entire function, you might want to use the benchmark
fixture instead for precise control over what code gets measured:
```python
def test_sum_of_squares_fixture(benchmark):
    data = [1, 2, 3, 4, 5]
    # Only the function call is measured
    result = benchmark(lambda: sum(i**2 for i in data))
    assert result == 55
```
Check out the full pytest-codspeed documentation for more details.
If you want to run the benchmark tests locally, you can use the --codspeed
pytest flag:
```
$ pytest tests/ --codspeed
======================== test session starts =========================
platform linux -- Python 3.10.4, pytest-7.1.3, pluggy-1.0.0
codspeed: 1.0.4
NOTICE: codspeed is enabled, but no performance measurement will be
made since it's running in an unknown environment.
rootdir: /home/user/codspeed-test, configfile: pytest.ini
plugins: codspeed-1.0.4
collected 6 items

tests/test_iterative_fibo.py .                                 [ 16%]
tests/test_recursive_fibo.py ..                                [ 50%]
tests/test_recursive_fibo_cached.py ...                        [100%]

========================= 6 benchmark tested =========================
========================= 6 passed in 0.02s ==========================
```
Running pytest-codspeed locally will not produce any performance reporting.
It’s only useful for making sure that your benchmarks are working as expected.
If you want to get performance reporting, you should run the benchmarks in
your CI.
To generate performance reports, you need to run the benchmarks in your CI. This
allows CodSpeed to automatically run benchmarks and warn you about regressions
during development.
If you want more details on how to configure the CodSpeed action, you can check
out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and
reports the results to CodSpeed on every push to the main branch and every
pull request:
.github/workflows/codspeed.yml
```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

permissions: # optional for public repositories
  contents: read # required for actions/checkout
  id-token: write # required for OIDC authentication with CodSpeed

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      # ...
      # Setup your environment here:
      # - Configure your Python/Rust/Node version
      # - Install your dependencies
      # - Build your benchmarks (if using a compiled language)
      # ...
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: <Insert your benchmark command here>
```
Using actions/setup-python to install Python (rather than installing it through
uv) is critical for tracing to work properly.
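For example, the environment setup step in the workflow above could look like
this (the Python version shown is only an illustration):

```yaml
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
```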
To parallelize your benchmarks, you can use
pytest-test-groups, a
pytest plugin that allows you to split your benchmark execution across several
CI jobs.

Install pytest-test-groups as a development dependency:
```bash
uv add --dev pytest-test-groups
```
Update your CI workflow to run benchmarks shard by shard:
```yaml
jobs:
  benchmarks:
    name: Run benchmarks (shard ${{ matrix.shard }})
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Split the benchmark suite into 4 groups, one per CI job
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v5
      # ...
      # Setup your environment here:
      # - Configure your Python/Rust/Node version
      # - Install your dependencies
      # - Build your benchmarks (if using a compiled language)
      # ...
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: pytest tests/ --codspeed --test-group-count 4 --test-group ${{ matrix.shard }}
```
The shard number must start at 1. If you run with a shard number of 0, all
the benchmarks will be run.
Same benchmark with different variations

For now, you cannot run the same benchmark several times within the same run.
If the same benchmark is run multiple times, you will receive a warning comment
on your pull request.
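If you do need variations of the same workload, one workaround (sketched below,
and relying on the fact that each parametrized case gets its own pytest node ID
and therefore its own benchmark name) is to parametrize the test:

```python
import pytest


# Each parameter produces a distinct pytest node ID, so every variation is
# reported as a separate benchmark instead of a repeated one.
@pytest.mark.parametrize("size", [10, 100, 1_000])
@pytest.mark.benchmark
def test_sum_of_squares_sizes(size: int) -> None:
    assert sum(i**2 for i in range(size)) > 0
```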
If you cannot split your benchmarks across multiple CI jobs, you can split them
across multiple processes in the same job. We only recommend this as an
alternative to the parallel CI jobs setup.

pytest-codspeed is compatible with
pytest-xdist, a pytest plugin
that distributes the test execution across multiple processes. You can simply
enable the pytest-xdist plugin on top of pytest-codspeed. This will allow
you to run your benchmarks in parallel using multiple processes.

First, install pytest-xdist as a development dependency:
```bash
uv add --dev pytest-xdist
```
Then, you can run your benchmarks in parallel with the -n flag from pytest-xdist:
```bash
pytest tests/ --codspeed -n auto
```
The change in the CI workflow would look like this:
.github/workflows/codspeed.yml
```yaml
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: pytest tests/ --codspeed -n auto
```
It’s possible to use pytest-codspeed with
Nox, a Python automation tool that allows
you to automate the execution of Python code across multiple environments.

Here is an example configuration file to run benchmarks with pytest-codspeed
using Nox:
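The noxfile.py below is a minimal sketch: the bench session name and the
unpinned dependency list are placeholders to adapt to your project.

```python
# noxfile.py
import nox


@nox.session
def bench(session: nox.Session) -> None:
    # Install the project and the benchmarking plugin in the session venv
    session.install(".", "pytest", "pytest-codspeed")
    # Run the benchmarks; extra command-line args are forwarded to pytest
    session.run("pytest", "tests/", "--codspeed", *session.posargs)
```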
To use it with GitHub Actions, you can use the following workflow:
.github/workflows/codspeed.yml
```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

permissions: # optional for public repositories
  contents: read # required for actions/checkout
  id-token: write # required for OIDC authentication with CodSpeed

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      # ...
      # Setup your environment here:
      # - Configure your Python/Rust/Node version
      # - Install your dependencies
      # - Build your benchmarks (if using a compiled language)
      # ...
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: <Insert your benchmark command here>
```
Splitting the virtualenv installation and the execution of the benchmarks is
optional, though it speeds up the run since the dependencies will be installed
or compiled without the instrumentation enabled.
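One way to achieve this split, sketched below and assuming a bench session like
the one above, is to create the virtualenv in a plain step and then reuse it
inside the CodSpeed action:

```yaml
      # Create the session virtualenv and install the dependencies without
      # the CodSpeed instrumentation enabled (assumes the `bench` session).
      - name: Install benchmark dependencies
        run: nox -s bench --install-only
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          # -R reuses the existing virtualenv and skips the install commands
          run: nox -s bench -R
```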