Read our detailed post on how CI noise affects benchmark
consistency.
Setup
1. Allow the Action to upload on CodSpeed
First, you need to get your CodSpeed token. There are multiple ways to retrieve it:

- Once you enable a repository on CodSpeed, you’ll be prompted to copy the token
- You can also find it on the repository settings page

Token Scope: Be mindful that a token is scoped to a specific repository. Make sure that you
are on the correct repository settings page when copying the token.
Public repositories: CodSpeed allows tokenless uploads for public repositories, allowing runs to be triggered directly from public forks. To enable this, you don’t need to store the token: simply omit the `token` input when creating the benchmarks workflow.

Otherwise, add the token to your repository secrets (Settings > Secrets and variables > Actions) as a new secret with the name `CODSPEED_TOKEN` and the value of your token.
2. Create the benchmarks workflow
The next step is to create a new workflow to run the benchmarks for your repository. You can do this by creating a `codspeed.yml` file in the `.github/workflows` directory with the following content:
.github/workflows/codspeed.yml
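Here is a minimal sketch, assuming a Python project benchmarked with `pytest-codspeed`; the action and setup version tags, branch name, and install/run commands are assumptions to adapt to your own stack:

```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches: ["main"]  # assumption: adjust to your default branch
  pull_request:
  # Allows the workflow to be triggered manually
  workflow_dispatch:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Assumption: a Python project using pytest-codspeed; replace with your own toolchain setup
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt pytest pytest-codspeed

      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3  # check the action's releases for the latest tag
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}  # omit for tokenless uploads on public repositories
          run: pytest tests/ --codspeed
```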
This workflow uses the `CodSpeedHQ/action`. This action will configure the CodSpeed environment and upload the benchmark results.
Make sure to include `pull_request` in the `on` section of the workflow. This is required for reports on pull requests to work correctly. Learn more about baseline report selection.
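For reference, a minimal `on` section satisfying this requirement (the branch filter is an assumption):

```yaml
on:
  push:
    branches: ["main"]  # assumption: your default branch
  pull_request:
```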
Sample configurations

- Python (with `pytest-codspeed`)
- Rust (with `cargo-codspeed`)
- Node.js (with `codspeed-node` and TypeScript)
3. Check the results
Once the workflow is created, your pull requests will receive a performance report comment along with some additional checks:

4. Next Steps
Now that everything is up and running (and hopefully green 🎉), you can start enhancing your workflow to get the most out of CodSpeed:

- Explore the Performance Metrics: understand the performance metrics generated by CodSpeed
- Enforce Performance Checks: make sure you or your team members never merge unexpected performance regressions
- Dive into Trace Generation: get detailed performance traces for your benchmarks
- Shard the execution of your benchmarks: run your benchmarks in parallel to speed up your CI
Advanced
Defining environment variables
You can define environment variables for your benchmarks by adding the `env` key to the action’s step:
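A minimal sketch, reusing the pytest-based step from above; the variable names are purely illustrative:

```yaml
- name: Run benchmarks
  uses: CodSpeedHQ/action@v3  # use the same tag as in your workflow
  env:
    # Hypothetical variables, exposed to the benchmarked process
    MY_ENV_VAR: "some-value"
    DATABASE_URL: "sqlite:///:memory:"
  with:
    token: ${{ secrets.CODSPEED_TOKEN }}
    run: pytest tests/ --codspeed
```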
Running benchmarks in parallel CI jobs
With GitHub Actions, you can leverage matrix jobs to improve the performance of your benchmarks. For example, using `pytest`:
.github/workflows/codspeed.yml
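A sketch of one way to shard a pytest suite across matrix jobs; using `pytest-split` (with its `--splits`/`--group` options) to partition the suite is an assumption, any deterministic way of splitting your benchmarks across jobs works:

```yaml
jobs:
  benchmarks:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Keep a single dimension so each benchmark runs exactly once overall
        group: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # pytest-split is an assumption; it provides the --splits/--group flags used below
      - run: pip install -r requirements.txt pytest pytest-codspeed pytest-split
      - name: Run benchmarks (group ${{ matrix.group }})
        uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: pytest tests/ --codspeed --splits 4 --group ${{ matrix.group }}
```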
CodSpeed will only be able to aggregate results from your benchmarks if you
split them within a single CI workflow.
CodSpeed does not yet support running the same benchmarks multiple times. If your matrix has several dimensions (e.g. a runtime version), please ensure that each benchmark will only run once. If the same benchmark is run multiple times, you will receive the following comment on your pull request:

Compatibility
For now, only the following OS and versions are supported on the runners:

- Ubuntu 22.04 and later
- Debian 12 and later
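In a workflow, this simply constrains which `runs-on` images you can use; for example, with a GitHub-hosted runner:

```yaml
jobs:
  benchmarks:
    runs-on: ubuntu-22.04  # supported; ubuntu-latest currently also resolves to a supported version
```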