## Requirements

- **Python**: 3.9 and later
- **pytest**: any recent version
## Installation

```shell
uv add --dev pytest-codspeed
```
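If you manage dependencies with `pip` instead of `uv`, the equivalent install is (add it to your development dependencies however your project tracks them):

```shell
pip install pytest-codspeed
```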
## Usage

### Creating benchmarks

`pytest-codspeed` is compatible with the standard `pytest-benchmark` API, so if you already have benchmarks written with it, you can start using `pytest-codspeed` right away.
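For example, a test written against `pytest-benchmark`'s call convention, `benchmark(fn, *args, **kwargs)`, should run unchanged. A minimal sketch (the sorting workload is just an illustration, not part of either library):

```python
def test_sort_performance(benchmark):
    data = [5, 3, 1, 4, 2]
    # The fixture calls the function, measures it, and returns its
    # result, so the output can still be asserted on.
    result = benchmark(sorted, data)
    assert result == [1, 2, 3, 4, 5]
```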
#### Marking a whole test function as a benchmark with `pytest.mark.benchmark`

```python
import pytest
from statistics import median

@pytest.mark.benchmark
def test_median_performance():
    return median([1, 2, 3, 4, 5])
```
#### Benchmarking selected lines of a test function with the `benchmark` fixture

```python
from statistics import mean, median

def test_mean_performance(benchmark):
    # Precompute data used by the benchmark; this setup is not
    # included in the measured time
    data = [1, 2, 3, 4, 5]

    # Benchmark the execution of the function
    benchmark(lambda: mean(data))

def test_mean_and_median_performance(benchmark):
    # Precompute data used by the benchmark; this setup is not
    # included in the measured time
    data = [1, 2, 3, 4, 5]

    # Benchmark the execution of the function:
    # the `@benchmark` decorator automatically calls the decorated
    # function and measures its execution
    @benchmark
    def bench():
        mean(data)
        median(data)
```
### Running benchmarks

#### Testing the benchmarks locally

If you want to run only the benchmark tests locally, you can use the `--codspeed` pytest flag:
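```shell
pytest tests/ --codspeed
```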
#### In your CI

You can use [CodSpeedHQ/action](https://github.com/CodSpeedHQ/action) to run the benchmarks in GitHub Actions and upload the results to CodSpeed.

Example workflow:
```yaml
name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: pytest tests/ --codspeed
```