Writing Benchmarks with criterion.rs
Using the Criterion.rs compatibility layer for CodSpeed
Installation
For all Rust integrations, you will need the cargo-codspeed command to build and run your CodSpeed benchmarks.
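For instance, a typical installation of the subcommand (the `--locked` flag is optional, it just keeps the resolved dependency versions reproducible):

```shell
# Install the cargo-codspeed subcommand into ~/.cargo/bin
cargo install cargo-codspeed --locked
```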
Install the criterion.rs compatibility layer:
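A minimal sketch using `cargo add` (built into Cargo 1.62+); the `--rename` flag is what makes the crate importable as `criterion`:

```shell
# Add the compatibility layer as a dev-dependency, renamed to `criterion`
cargo add --dev codspeed-criterion-compat --rename criterion
```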
Or directly change your Cargo.toml if you already have criterion installed:
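The equivalent manual edit could look like this (the version shown is illustrative; pick the latest release of the crate):

```toml
[dev-dependencies]
# `criterion` now resolves to the CodSpeed compatibility layer
criterion = { package = "codspeed-criterion-compat", version = "2" }
```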
This will install the codspeed-criterion-compat crate and rename it to criterion in your Cargo.toml. This way, you can keep your existing imports, and the compatibility layer will take care of the rest.
Using the compatibility layer won’t change the behavior of your benchmark suite outside of the CodSpeed instrumentation environment and Criterion.rs will still run it as usual.
If you prefer, you can also install codspeed-criterion-compat as is and change your imports to use this new crate name.
Usage
Creating benchmarks
As an example, let’s follow the Criterion.rs documentation and write a benchmark suite for the Fibonacci function:
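Following the Criterion.rs documentation, a `benches/my_benchmark.rs` file could look like this (the file name is a common convention, not a requirement; this sketch needs the `criterion` dev-dependency from the installation step to compile):

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// The function under test: a deliberately naive recursive Fibonacci
fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

// Register a benchmark; black_box prevents the compiler from
// constant-folding the input away
fn criterion_benchmark(c: &mut Criterion) {
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```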
The last step in creating the Criterion benchmark is to add the new benchmark target in your Cargo.toml:
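Assuming the benchmark file is `benches/my_benchmark.rs`, the target declaration disables the default libtest harness so Criterion can take over as the benchmark runner:

```toml
[[bench]]
name = "my_benchmark"
harness = false
```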
And that’s it! You can now run your benchmark suite with CodSpeed.
Testing the benchmarks locally
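With cargo-codspeed installed, a local dry run could look like this (outside of the CodSpeed instrumented environment, the timings are only indicative):

```shell
# Compile the benchmark targets with the CodSpeed harness
cargo codspeed build
# Execute the compiled benchmarks locally
cargo codspeed run
```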
Congrats! 🎉 You can now run those benchmarks in your CI to get the actual performance measurements 👇
Running the benchmarks in your CI
To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to detect the CI environment and properly configure the instrumented environment.
If you want more details on how to configure the CodSpeed action, you can check out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and reports the results to CodSpeed on every push to the main branch and every pull request:
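A sketch of such a workflow; the action versions, the toolchain step, and the secret name are assumptions to adapt to your setup:

```yaml
name: CodSpeed

on:
  push:
    branches: [main]
  pull_request:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable
      - name: Install cargo-codspeed
        run: cargo install cargo-codspeed --locked
      - name: Build the benchmarks
        run: cargo codspeed build
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: cargo codspeed run
          token: ${{ secrets.CODSPEED_TOKEN }}
```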
Recipes
Running benchmarks in parallel CI jobs
With Rust, if your workspace contains multiple packages, a first sharding optimization is to split your benchmarks across these packages.
For example, using GitHub Actions:
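One way to sketch this is a job matrix that runs each package's benchmarks in its own job; the package names below are placeholders, and the `-p` package selection is an assumption to check against the cargo-codspeed docs:

```yaml
jobs:
  benchmarks:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        package: [package-a, package-b]  # placeholder package names
    steps:
      - uses: actions/checkout@v4
      - run: cargo install cargo-codspeed --locked
      # Build and run only the benchmarks of the current shard's package
      - run: cargo codspeed build -p ${{ matrix.package }}
      - uses: CodSpeedHQ/action@v3
        with:
          run: cargo codspeed run -p ${{ matrix.package }}
          token: ${{ secrets.CODSPEED_TOKEN }}
```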
For more information about multiple packages, check the cargo-codspeed docs.
With Criterion, there is currently no way to split your benchmarks automatically, but you can use a filter expression (view docs).
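For instance, Criterion's standard positional filter selects benchmarks whose name matches the given expression; if you name benchmarks with a per-shard prefix (the prefix here is hypothetical), each CI job can run one subset. How the filter is forwarded under cargo-codspeed may differ, so check the linked docs:

```shell
# Run only the benchmarks whose name matches "shard_a"
cargo bench -- shard_a
```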
Same benchmark with different variations
For now, you cannot run the same benchmark several times within the same run. If the same benchmark is run multiple times, you will receive the following comment on your pull request:
Learn more about benchmark sharding and how to integrate with your CI provider.