pytest-codspeed documentation
A pytest plugin for benchmarking the performance of Python code
Overview
pytest-codspeed is a pytest plugin for measuring and tracking the performance of Python code. It provides benchmarking capabilities with support for both wall-time and CPU-instrumentation measurements.
Installation
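pytest-codspeed is published on PyPI and can be installed with pip (or any compatible package manager):

```shell
pip install pytest-codspeed
```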
Example Usage
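As a minimal sketch (the test name and workload are illustrative), a benchmarked test can look like this:

```python
import pytest


@pytest.mark.benchmark
def test_sum_of_squares():
    # The whole test function is measured as a benchmark
    assert sum(i * i for i in range(1000)) == 332_833_500
```

The benchmarks are then collected and measured by running pytest with the plugin enabled, for example pytest tests/ --codspeed.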
Command Line Options
--codspeed
Enable the CodSpeed benchmarking plugin for the test session. (This is automatically enabled when running under the CodSpeed runner or from the GitHub Action.)
The measurement instrument to use for measuring performance:
auto: Automatically select the measurement instrument based on the environment.
instrumentation: Use the CPU simulation instrument.
walltime: Use the wall-clock time instrument. Automatically enabled on macro runners.
--codspeed-warmup-time
The time to warm up the benchmark for (in seconds); only for walltime mode.
--codspeed-max-time
The maximum time to run a benchmark for (in seconds); only for walltime mode.
--codspeed-max-rounds
The maximum number of rounds to run a benchmark for; only for walltime mode.
All of these walltime-specific command line options can be overridden by more specific settings set through the benchmark marker. For example, if you set warmup_time in the benchmark marker, it will take precedence over the --codspeed-warmup-time command line option.
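As an illustrative sketch, assuming the session is started with pytest --codspeed --codspeed-warmup-time=5, the marker below overrides the warm-up time for this single test:

```python
import pytest


@pytest.mark.benchmark(warmup_time=1.0)  # takes precedence over --codspeed-warmup-time
def test_with_custom_warmup():
    sum(range(10_000))
```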
Creating Benchmarks
There are multiple ways to mark tests as benchmarks at different levels:
The pytest.mark.benchmark marker
Marking a test with the pytest.mark.benchmark marker will automatically register it as a benchmark: the entire test function will be measured. For more fine-grained control, see the benchmark fixture section below.
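For instance (the function under test is illustrative):

```python
import pytest


def fibonacci(n: int) -> int:
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)


@pytest.mark.benchmark
def test_fibonacci():
    # The whole test body, including the assertion, is measured
    assert fibonacci(20) == 6765
```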
You can also mark all the tests contained in a test file as benchmarks by using the pytestmark variable at the module level:
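A sketch of a module where every test becomes a benchmark (the test names are illustrative):

```python
import pytest

# Applies the benchmark marker to every test in this module
pytestmark = pytest.mark.benchmark


def test_addition():
    assert 1 + 1 == 2


def test_concatenation():
    assert "a" + "b" == "ab"
```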
The benchmark fixture
When more fine-grained control is needed, the benchmark fixture can be used. This fixture is exposed by the pytest-codspeed plugin and allows you to select exactly the code to be measured.
A fixture is a function that can be used to set up and tear down the state of a test. More information about fixtures can be found in the pytest documentation.
Direct invocation
The fixture can be used directly in the test function:
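For instance, a minimal sketch where only the wrapped call is measured (the workload is illustrative):

```python
def test_sum(benchmark):
    # Equivalent to calling sum(range(10_000)), but measured by the plugin
    benchmark(sum, range(10_000))
```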
The fixture behaves as an identity function: calling benchmark(target, *args, **kwargs) will have the same effect as calling target(*args, **kwargs). The return value will also be passed along to make it possible to write assertions on the result.
For example:
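A sketch (the sorting workload is illustrative):

```python
def test_sorted_result(benchmark):
    data = [5, 3, 1, 4, 2]
    # benchmark() returns the value produced by the measured call
    result = benchmark(sorted, data)
    assert result == [1, 2, 3, 4, 5]
```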
It’s also possible to use it with lambda functions:
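A possible sketch:

```python
def test_lambda(benchmark):
    benchmark(lambda: sum(i * i for i in range(1_000)))
```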
As a decorator
If you want to measure a block of code containing multiple function calls, you can use the fixture as a decorator:
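A sketch with an illustrative workload:

```python
def test_batch_processing(benchmark):
    data = list(range(1_000))

    @benchmark
    def _():
        # Both statements below are measured together
        squares = [x * x for x in data]
        sum(squares)
```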
When using the fixture, the marker is no longer necessary, unless you want to customize the execution.
The benchmark fixture can only be used once per test function.
For example, the following code will raise an error:
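A sketch of the kind of code that is rejected:

```python
def test_too_many_benchmarks(benchmark):
    benchmark(sum, range(100))
    # Re-using the fixture in the same test raises an error
    benchmark(sum, range(1_000))
```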
Benchmark options
The @pytest.mark.benchmark marker accepts several options to customize the benchmark execution:
group
The group name to use for the benchmark. This can be useful to organize related benchmarks together. (Will be supported soon in the UI.)
min_time
The minimum time of a round (in seconds). Only available in walltime mode.
max_time
The maximum time to run the benchmark for (in seconds). Only available in walltime mode.
max_rounds
The maximum number of rounds to run the benchmark for. Takes precedence over max_time. Only available in walltime mode.
Example usage:
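A sketch with illustrative values:

```python
import pytest


@pytest.mark.benchmark(
    group="sorting",
    min_time=0.005,   # at least 5 ms per round (walltime mode only)
    max_time=2.0,     # stop measuring after 2 seconds (walltime mode only)
    max_rounds=100,   # takes precedence over max_time (walltime mode only)
)
def test_sort_reversed_list():
    sorted(range(10_000, 0, -1))
```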
The min_time, max_time and max_rounds options are only available in walltime mode. When using instrumentation mode (Valgrind), these options are ignored.
Pedantic mode (advanced)
For fine-grained control over the benchmark execution protocol, you can use the benchmark.pedantic method.
For example:
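A sketch whose keyword names follow the parameter list below (the workload is illustrative):

```python
def test_sort_pedantic(benchmark):
    def setup():
        # Build fresh arguments for every round, returned as (args, kwargs)
        return ([5, 3, 1, 4, 2] * 200,), {}

    benchmark.pedantic(
        sorted,
        setup=setup,
        warmup_rounds=2,
        rounds=10,
        iterations=1,
    )
```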
The benchmark.pedantic method accepts the following parameters:
target
The function to benchmark. This is the main code that will be measured.
args
Positional arguments to pass to the target function.
kwargs
Keyword arguments to pass to the target function.
setup
Optional setup function that runs before each round. If it returns a tuple of (args, kwargs), these will be passed to the target function.
teardown
Optional teardown function that runs after each round. It receives the same arguments as the target function.
warmup_rounds
Number of warmup rounds to run before the actual benchmark. These rounds are not included in the measurements.
rounds
Number of rounds to run the benchmark for.
iterations
Number of iterations to run within each round. The total number of executions will be rounds * iterations.
Recipes
Parametrized benchmarks
pytest-codspeed fully supports pytest’s parametrization out of the box:
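A sketch combining pytest.mark.parametrize with the benchmark fixture (the workload is illustrative):

```python
import pytest


@pytest.mark.parametrize("size", [100, 1_000, 10_000])
def test_sum_sizes(benchmark, size):
    # One benchmark is reported per parametrized case
    benchmark(sum, range(size))
```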
Compatibility
pytest-codspeed is designed to be fully backward compatible with pytest-benchmark. You can use both plugins in the same project, though only one will be active at a time.
Running the benchmarks continuously
To run the benchmarks continuously in your CI, you can use pytest-codspeed along with the CodSpeed runner.
We have first-class support for the following CI providers:
If your provider is not listed here, please open an issue or contact us on Discord.