Or directly change your `Cargo.toml` if you already have divan installed:
```toml
[dev-dependencies]
divan = { package = "codspeed-divan-compat", version = "*" }
```
This will install the `codspeed-divan-compat` crate and rename it to `divan` in
your `Cargo.toml`. This way, you can keep your existing imports, and the
compatibility layer will take care of the rest.
Using the compatibility layer won't change the behavior of your benchmark suite
outside of the CodSpeed instrumentation environment, and divan will still run
it as usual.
If you prefer, you can also install `codspeed-divan-compat` as-is and change
your imports to use this new crate name.
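For reference, here is a minimal sketch of the kind of suite the output below
assumes; the `fibo` function and the `args` values are illustrative, chosen to
match the `fibo_bench` entries shown next, and the bench target is expected to
set `harness = false` in `Cargo.toml` as divan requires:

```rust
// benches/my_benchmark.rs — a minimal sketch; `fibo` and its `args`
// values are illustrative, matching the output shown below.
fn fibo(n: u64) -> u64 {
    match n {
        0 | 1 => n,
        _ => fibo(n - 1) + fibo(n - 2),
    }
}

// With the compat layer renamed to `divan`, this attribute and the
// `main` entry point work unchanged.
#[divan::bench(args = [1, 2, 4, 8, 16, 32])]
fn fibo_bench(n: u64) -> u64 {
    divan::black_box(fibo(n))
}

fn main() {
    divan::main();
}
```

Building and running such a suite locally with `cargo-codspeed` produces output
like: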
```console
$ cargo codspeed build
    Finished release [optimized] target(s) in 0.12s
    Finished built 1 benchmark suite(s)
$ cargo codspeed run
Collected 1 benchmark suite(s) to run
Running my_benchmark
NOTICE: codspeed is enabled, but no performance measurement will be made since it's running in an unknown environment.
Checked: benches/my_benchmark.rs::fibo_bench[1]
Checked: benches/my_benchmark.rs::fibo_bench[2]
Checked: benches/my_benchmark.rs::fibo_bench[4]
Checked: benches/my_benchmark.rs::fibo_bench[8]
Checked: benches/my_benchmark.rs::fibo_bench[16]
Checked: benches/my_benchmark.rs::fibo_bench[32]
Done running my_benchmark
Finished running 1 benchmark suite(s)
```
To generate performance reports, you need to run the benchmarks in your CI. This
allows CodSpeed to automatically run benchmarks and warn you about regressions
during development.
If you want more details on how to configure the CodSpeed action, you can check
out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and
reports the results to CodSpeed on every push to the main branch and every
pull request:
.github/workflows/codspeed.yml
```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup rust toolchain, cache and cargo-codspeed binary
        uses: moonrepo/setup-rust@v1
        with:
          channel: stable
          cache-target: release
          bins: cargo-codspeed
      - name: Build the benchmark target(s)
        run: cargo codspeed build
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: cargo codspeed run
          token: ${{ secrets.CODSPEED_TOKEN }} # optional for public repos
```
Divan provides a lot of convenient features to help you write benchmarks. Below
is a selection that can be useful in CodSpeed benchmarks, but check out the
divan documentation for an exhaustive
list of features.
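As a taste, here is a sketch of two of those features: per-argument benchmarks
via `args`, and input generators via `Bencher::with_inputs`. The benchmark
bodies and names are illustrative, not taken from a real suite:

```rust
use divan::Bencher;

// `args` runs the same benchmark once per argument value, producing a
// separate result for each (as with `fibo_bench[1]`, `fibo_bench[2]`, ...).
#[divan::bench(args = [10, 100, 1000])]
fn sum_range(n: u64) -> u64 {
    (0..n).sum()
}

// `with_inputs` generates a fresh input for each iteration, so the setup
// cost (building the vector) stays out of the measurement.
#[divan::bench]
fn sort_vec(bencher: Bencher) {
    bencher
        .with_inputs(|| (0..1_000u32).rev().collect::<Vec<u32>>())
        .bench_values(|mut v| v.sort());
}

fn main() {
    divan::main();
}
```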
With Rust, if you use multiple packages, a first sharding optimization is to
split your benchmarks across these packages. For example, using GitHub Actions:
```yaml
jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    strategy:
      matrix:
        package:
          - my-first-package
          - my-second-package
    steps:
      - uses: actions/checkout@v4
      - name: Setup rust toolchain, cache and cargo-codspeed binary
        uses: moonrepo/setup-rust@v1
        with:
          channel: stable
          cache-target: release
          bins: cargo-codspeed
      - name: Build the benchmark target(s)
        run: cargo codspeed build -p ${{ matrix.package }}
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: cargo codspeed run # only runs the built benchmarks
          token: ${{ secrets.CODSPEED_TOKEN }} # optional for public repos
```
It is not required to pass a `-p` flag to the run step, as only the benchmarks built by `cargo codspeed build` will be run.
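For context, this matrix assumes a workspace along these lines, with the
hypothetical package names used above:

```toml
# Root Cargo.toml — a sketch of the workspace layout the matrix assumes.
[workspace]
members = ["my-first-package", "my-second-package"]
```

Each member package declares its own bench targets, so `cargo codspeed build -p`
only compiles the suites belonging to that shard.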
Same benchmark with different variations

For now, you cannot run the same benchmark several times within the same run.
If the same benchmark is run multiple times, you will receive the following
comment on your pull request: