The simplest way to integrate CodSpeed with your Python codebase is to use our pytest extension: pytest-codspeed. This extension automatically enables the CodSpeed engine on your benchmarks and allows reporting the results to CodSpeed.

Creating benchmarks with pytest-codspeed uses the same API as pytest-benchmark, so if you already have benchmarks written with it, you can start using CodSpeed right away 🚀
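For example, a test that already uses the pytest-benchmark fixture call form keeps working as-is (a minimal sketch; fibonacci is just a stand-in for a function from your own code):

def fibonacci(n: int) -> int:
    # Hypothetical function under test
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

def test_fibonacci(benchmark):
    # Same call form as pytest-benchmark: benchmark(func, *args)
    benchmark(fibonacci, 10)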

Installation

First, install pytest-codspeed as a development dependency:

uv add --dev pytest-codspeed
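pytest-codspeed is a regular PyPI package, so if you are not using uv you can install it with pip instead:

pip install pytest-codspeed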

Usage

Creating benchmarks

Marking a whole test function as a benchmark with pytest.mark.benchmark

import pytest
from statistics import median

@pytest.mark.benchmark
def test_median_performance():
    return median([1, 2, 3, 4, 5])

Benchmarking selected lines of a test function with the benchmark fixture

import pytest
from statistics import mean, median

def test_mean_performance(benchmark):
    # Precompute data needed by the benchmark that should not be
    # included in the benchmark time
    data = [1, 2, 3, 4, 5]

    # Benchmark the execution of the function
    benchmark(lambda: mean(data))


def test_mean_and_median_performance(benchmark):
    # Precompute data needed by the benchmark that should not be
    # included in the benchmark time
    data = [1, 2, 3, 4, 5]

    # Benchmark the execution of the function:
    # The `@benchmark` decorator will automatically call the function and
    # measure its execution
    @benchmark
    def bench():
        mean(data)
        median(data)

Testing the benchmarks locally

If you want to run the benchmark tests locally, you can use the --codspeed pytest flag:

$ pytest tests/ --codspeed
======================== test session starts =========================
platform linux -- Python 3.10.4, pytest-7.1.3, pluggy-1.0.0
codspeed: 1.0.4
NOTICE: codspeed is enabled, but no performance measurement will be
made since it's running in an unknown environment.
rootdir: /home/user/codspeed-test, configfile: pytest.ini
plugins: codspeed-1.0.4
collected 6 items

tests/test_iterative_fibo.py .                                  [ 16%]
tests/test_recursive_fibo.py ..                                 [ 50%]
tests/test_recursive_fibo_cached.py ...                         [100%]

========================= 6 benchmark tested =========================
========================= 6 passed in 0.02s  =========================

Running pytest-codspeed locally will not produce any performance reporting. It's only useful for making sure that your benchmarks are working as expected. If you want to get performance reporting, you should run the benchmarks in your CI.

Running the benchmarks in your CI

To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to detect the CI environment and configure the instrumentation properly.

If you want more details on how to configure the CodSpeed action, you can check out the Continuous Reporting section.

Here is an example of a GitHub Actions workflow that runs the benchmarks and reports the results to CodSpeed on every push to the main branch and every pull request:

.github/workflows/codspeed.yml
name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: pytest tests/ --codspeed
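This workflow assumes that requirements.txt lists pytest-codspeed alongside your other dependencies; an illustrative minimal version could look like this:

requirements.txt
pytest
pytest-codspeed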

Recipes

Usage with uv

Install pytest-codspeed as a development dependency with uv:

uv add --dev pytest-codspeed

Then add the following GitHub Actions workflow to run the benchmarks:

.github/workflows/codspeed.yml
name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v5

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version-file: "pyproject.toml"

      - name: Install dependencies
        run: uv sync --all-extras --dev

      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: uv run pytest tests/ --codspeed
          token: ${{ secrets.CODSPEED_TOKEN }}

Using actions/setup-python to install Python (rather than installing it with uv) is critical for tracing to work properly.

Running benchmarks in parallel

If your benchmarks take too long to run under the CodSpeed action, you can run them in parallel to speed up the execution.

Running benchmarks in parallel CI jobs

To parallelize your benchmarks, you can use pytest-test-groups, a pytest plugin that allows you to split your benchmark execution across several CI jobs.

Install pytest-test-groups as a development dependency:

uv add --dev pytest-test-groups

Update your CI workflow to run benchmarks shard by shard:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1,2,3,4,5]
    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v5

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version-file: "pyproject.toml"

      - name: Install dependencies
        run: uv sync --all-extras --dev

      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: uv run pytest tests/ --codspeed --test-group=${{ matrix.shard }} --test-group-count=5
          token: ${{ secrets.CODSPEED_TOKEN }}

The shard number must start at 1. If you run with a shard number of 0, all the benchmarks will be run.
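For example, the first of five shards can be checked locally with the same flags as in the workflow above:

pytest tests/ --codspeed --test-group=1 --test-group-count=5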

Same benchmark with different variations

For now, you cannot run the same benchmark several times within the same run. If the same benchmark is run multiple times, you will receive a comment on your pull request.

Learn more about benchmark sharding and how to integrate with your CI provider.
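If you need to benchmark several variations of the same workload, one option is to parametrize the test so that each variation produces a distinctly named benchmark (a minimal sketch; the sizes are only illustrative):

import pytest
from statistics import mean

@pytest.mark.parametrize("size", [10, 100, 1_000])
def test_mean_scaling(benchmark, size):
    data = list(range(size))
    # Each parameter value yields its own benchmark, e.g. test_mean_scaling[10]
    benchmark(lambda: mean(data))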

Running benchmarks in parallel processes

If you cannot split your benchmarks across multiple CI jobs, you can split them across multiple processes within the same job. We only recommend this as a fallback when the parallel CI jobs setup is not an option.

pytest-codspeed is compatible with pytest-xdist, a pytest plugin that distributes test execution across multiple processes. Simply enable pytest-xdist on top of pytest-codspeed to run your benchmarks in parallel across multiple processes.

First, install pytest-xdist as a development dependency:

uv add --dev pytest-xdist

Then, you can run your benchmarks in parallel with the pytest-xdist flag:

pytest tests/ --codspeed -n auto

The change in the CI workflow would look like this:

.github/workflows/codspeed.yml
- name: Run benchmarks
  uses: CodSpeedHQ/action@v3
  with:
    token: ${{ secrets.CODSPEED_TOKEN }}
    run: pytest tests/ --codspeed -n auto

Usage with Nox

It's possible to use pytest-codspeed with Nox, a Python tool for automating the execution of commands across multiple environments.

Here is an example configuration file to run benchmarks with pytest-codspeed using Nox:

noxfile.py
import nox

@nox.session
def codspeed(session):
    session.install('pytest')
    session.install('pytest-codspeed')
    session.run('pytest', '--codspeed')

You can then run the benchmarks:

nox --sessions codspeed
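If your benchmarks import your own package, the session typically also needs to install it. A sketch assuming a standard pyproject.toml-based project:

noxfile.py
import nox

@nox.session
def codspeed(session):
    # Install the project itself in addition to the test dependencies
    session.install(".")
    session.install("pytest", "pytest-codspeed")
    session.run("pytest", "--codspeed")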

To use it with GitHub Actions, you can use the following workflow:

.github/workflows/codspeed.yml
name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"

      - name: Install Nox
        run: pip install nox

      - name: Install dependencies
        run: nox --sessions codspeed --install-only

      - name: Run the action
        uses: CodSpeedHQ/action@v3
        with:
          run: nox --sessions codspeed --reuse-existing-virtualenvs --no-install
          token: ${{ secrets.CODSPEED_TOKEN }}

Splitting the virtualenv installation from the execution of the benchmarks is optional, but it speeds up the benchmark run since the dependencies are installed and compiled without the instrumentation enabled.