Or directly change your Cargo.toml if you already have bencher installed:
```toml
[dev-dependencies]
bencher = { package = "codspeed-bencher-compat", version = "*" }
```
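As with plain bencher, the bench target itself must opt out of the default libtest harness so that the benchmark_main! macro can provide the entry point. A minimal sketch, assuming the bench target is named example:

```toml
[[bench]]
name = "example"
harness = false
```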
This dependency entry installs the codspeed-bencher-compat crate and renames it to bencher in your Cargo.toml. This way, you can keep your existing imports and the compatibility layer will take care of the rest.
If you prefer, you can also install codspeed-bencher-compat as-is and change your imports to use the new crate name.
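In that case, the only code change is the crate name in your imports. A minimal sketch, assuming the usual bencher items (which the compatibility layer mirrors):

```rust
// Cargo.toml: codspeed-bencher-compat = "*" (installed under its own name, no rename)
use codspeed_bencher_compat::{benchmark_group, benchmark_main, Bencher};
```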
Using the compatibility layer won't change the behavior of your existing
benchmark suite outside of the CodSpeed instrumentation environment, and the
benches will still run as usual.
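For reference, the suite exercised in the output below could be as simple as the following bencher-style file; the benchmark bodies here are illustrative and not taken from the original example:

```rust
// benches/example.rs -- works unchanged whether `bencher` is the real crate
// or the renamed compatibility layer.
use bencher::{benchmark_group, benchmark_main, Bencher};

fn a(bench: &mut Bencher) {
    // Illustrative workload: sum a small range on each iteration.
    bench.iter(|| (0..1_000u64).sum::<u64>())
}

fn b(bench: &mut Bencher) {
    // Illustrative workload: allocate a small buffer on each iteration.
    bench.iter(|| vec![0u8; 1_024])
}

benchmark_group!(benches, a, b);
benchmark_main!(benches);
```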
```console
$ cargo codspeed build
    Finished release [optimized] target(s) in 0.12s
    Finished built 1 benchmark suite(s)
$ cargo codspeed run
   Collected 1 benchmark suite(s) to run
     Running example
Using codspeed-bencher-compat v1.0.0 compatibility layer
NOTICE: codspeed is enabled, but no performance measurement will be made since it's running in an unknown environment.
Checked: benches/example.rs::a (group: benches)
Checked: benches/example.rs::b (group: benches)
        Done running bencher_example
    Finished running 1 benchmark suite(s)
```
To generate performance reports, you need to run the benchmarks in your CI. This
allows CodSpeed to automatically run benchmarks and warn you about regressions
during development.
If you want more details on how to configure the CodSpeed action, you can check
out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and
reports the results to CodSpeed on every push to the main branch and every
pull request:
.github/workflows/codspeed.yml
```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup rust toolchain, cache and cargo-codspeed binary
        uses: moonrepo/setup-rust@v1
        with:
          channel: stable
          cache-target: release
          bins: cargo-codspeed
      - name: Build the benchmark target(s)
        run: cargo codspeed build
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: cargo codspeed run
          token: ${{ secrets.CODSPEED_TOKEN }} # optional for public repos
```
With Rust, if you use multiple packages, a first sharding optimization is to
split your benchmarks across these packages. For example, using GitHub Actions:
```yaml
jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    strategy:
      matrix:
        package:
          - my-first-package
          - my-second-package
    steps:
      - uses: actions/checkout@v4
      - name: Setup rust toolchain, cache and cargo-codspeed binary
        uses: moonrepo/setup-rust@v1
        with:
          channel: stable
          cache-target: release
          bins: cargo-codspeed
      - name: Build the benchmark target(s)
        run: cargo codspeed build -p ${{ matrix.package }}
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: cargo codspeed run # only runs the built benchmarks
          token: ${{ secrets.CODSPEED_TOKEN }} # optional for public repos
```
It is not required to pass a -p flag to cargo codspeed run, as only the benchmarks built by cargo codspeed build will be run.
Same benchmark with different variations
For now, you cannot run the same benchmark several times within the same run; a common workaround is to give each variation its own benchmark name, as sketched below.
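One way to stay within this constraint, sketched here with hypothetical names rather than anything from the original docs, is to register each variation as its own uniquely named benchmark function:

```rust
use bencher::{benchmark_group, benchmark_main, Bencher};

// Shared workload, parameterized by input size (illustrative).
fn process(size: u64) -> u64 {
    (0..size).sum()
}

// Each variation is a distinct, uniquely named benchmark instead of the
// same benchmark being run twice within one run.
fn process_small(bench: &mut Bencher) {
    bench.iter(|| process(100))
}

fn process_large(bench: &mut Bencher) {
    bench.iter(|| process(10_000))
}

benchmark_group!(variations, process_small, process_large);
benchmark_main!(variations);
```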
If the same benchmark is run multiple times, you will receive the following
comment on your pull request: