We released several improvements to the benchmark dashboard and branch pages.
The performance of the benchmark history graph has been greatly improved, which is most noticeable in projects with large histories. The vertical axis now updates when zooming in on the graph, making it easier to spot localized changes.
See it in action handling more than 15k data points:
You can now copy a link to a specific benchmark from a Pull Request, a Run, or a custom comparison view. This link will take you directly to the benchmark within the report.
Here is an example:
On the benchmark dashboard, you can now see the benchmark flame graph, computed from the latest run on the default branch.
We're excited to announce the launch of CodSpeed's Advent Calendar, a performance-focused coding challenge based on the popular Advent of Code problems! 🎄
- Compete for Speed: Solve daily problems from Advent of Code and optimize for performance in Rust.
- Prizes: Stand a chance to win incredible prizes.
- Leaderboard: Track your ranking as you climb to the top of this exciting global competition.
Find more details on the Advent of CodSpeed page.
👾 Join our Discord to connect with other participants and stay updated.
Happy coding and may the fastest Rustacean win! 🦀✨
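To give a taste of the kind of optimization the challenge rewards, here is a minimal, hypothetical sketch (the task and both implementations are invented for illustration, not an actual challenge problem): an Advent-of-Code-style "sum all the integers in the input" task, solved once naively with intermediate allocations and once with a single allocation-free pass over the bytes.

```rust
// Hypothetical task: sum all the integers in a newline-separated input.

// Naive version: parses each line and collects into an intermediate Vec.
fn sum_naive(input: &str) -> u64 {
    input
        .lines()
        .filter_map(|l| l.trim().parse::<u64>().ok())
        .collect::<Vec<_>>()
        .iter()
        .sum()
}

// Optimized version: a single pass over the raw bytes, no allocations.
fn sum_optimized(input: &str) -> u64 {
    let mut total = 0u64;
    let mut current = 0u64;
    let mut in_number = false;
    for &b in input.as_bytes() {
        if b.is_ascii_digit() {
            current = current * 10 + (b - b'0') as u64;
            in_number = true;
        } else if in_number {
            total += current;
            current = 0;
            in_number = false;
        }
    }
    if in_number {
        total += current;
    }
    total
}

fn main() {
    let input = "12\n34\n56\n";
    // Both implementations must agree before comparing their speed.
    assert_eq!(sum_naive(input), sum_optimized(input));
    println!("{}", sum_optimized(input)); // 102
}
```

Wrapped in a benchmark harness, the two versions produce identical results but very different performance profiles, which is exactly the gap the leaderboard measures.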
Until now, CodSpeed allowed you to compare runs only in specific scenarios—either through pull requests comparing branches with their base or comparing consecutive commits on a single branch.
With this new feature, you can now compare any runs: on arbitrary commits or branches, and also local runs made with the CLI.
Want to try it out? Just head to the "Runs" tab of a project (you can find a few of them here) and select the runs you want to compare!
Soon, we'll be adding the ability to compare tags directly, making it even simpler to compare runs across different versions of your project. Stay tuned for more updates!
We're thrilled to unveil Walltime, a groundbreaking addition to CodSpeed's suite of instruments! 🎉 This new tool measures the wall time of your benchmarks—the total time elapsed from start to finish, capturing not just execution but also waiting on resources like I/O, network, or other processes.
At first glance, this might seem like a shift from our core philosophy of making benchmarks as deterministic as possible. But here's the thing: real-world performance often depends on the messy, noisy details of resource contention and latency. Walltime offers a lens into these real-world scenarios, making it an invaluable tool for system-level insights—without losing sight of the precision you trust CodSpeed to deliver.
Even as we introduce this new instrument, we remain committed to bringing consistent and reproducible results. That's why Walltime is exclusively available on CodSpeed Macro Runners. These hosted bare-metal runners are purpose-built for macro benchmarks, running in isolation to minimize environmental noise. This means you get realistic performance data without unnecessary interference.
Running Walltime measurements is as simple as changing the execution runner to `codspeed-macro` in GitHub Actions. Here's an example of how to do it:
```diff
 jobs:
   benchmarks:
     name: Run benchmarks
-    runs-on: ubuntu-latest
+    runs-on: codspeed-macro
     steps:
       - uses: actions/checkout@v4
       # ...
       - name: Run benchmarks
         uses: CodSpeedHQ/action@v3
         with:
           token: ${{ secrets.CODSPEED_TOKEN }}
           run: "<Insert your benchmark command here>"
```
Once set up, your Walltime results will shine in the CodSpeed dashboard:
For a deeper dive into the Walltime instrument and Macro Runners, check out our documentation.
This feature is still in closed beta, but if you're interested in trying it out, please reach out to us on Discord or by email at support@codspeed.io.
After a lot of work, we are happy to announce that CodSpeed now supports GitLab Cloud repositories and organizations as well as GitLab CI/CD runs!
Read our docs on setting up a GitLab integration and more details on setting up GitLab CI/CD with CodSpeed.
We'll soon also start supporting self-hosted GitLab instances on the Enterprise plan, so if you have one and are interested in trying it out, please reach out to us by email or on Discord!
We’re thrilled to announce the arrival of our dark theme, designed to enhance your experience, especially during those late-night coding or benchmarking sessions!
It is now possible to specify the default base branch for analysis of a repository. It no longer has to be the default branch set on the repository provider.
This changelog will allow us to keep you updated with the latest features we implement! 🚀
It's now possible to zoom in on the benchmark history graph, making it easier to dive precisely into the history.
It is now possible to list all the runs of a repository, independently of their branch, using the new runs page: This page also comes with individual run pages, allowing you, for example, to dive into the runs made on pushes to the default branch:
We just released the beta of the CodSpeed CLI! 🥳
This CLI tool allows you to make local runs, upload them to CodSpeed, and compare the results against a remote base run, all without having to push your code to a remote repository.
This shortens the performance feedback loop: you no longer have to push your code and wait for the GitHub Action to complete to see the impact of your changes.
At the moment it only works on Ubuntu 20.04/22.04 and Debian 11/12.
To get started, you can run the following commands:
```bash
# Install the CodSpeed CLI
curl -fsSL https://github.com/CodSpeedHQ/runner/releases/download/v3.0.0/codspeed-runner-installer.sh | bash
source "$HOME/.cargo/env"

# Authenticate the CLI with your CodSpeed account
codspeed auth login

# Inside a repository enabled on CodSpeed.
# By default, the local run is compared to the latest remote run of the default branch.
# If you are checked out on a branch that has a pull request and a remote run on
# CodSpeed, the local run is compared to the latest common-ancestor commit of the
# default branch that has a remote run.
codspeed run [BENCHMARK_COMMAND]
```
You can now see system calls in the flamegraphs by ticking the "Include system calls" checkbox.
We also now detect benchmarks that are mostly composed of system calls and display a flakiness warning.
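To illustrate why syscall-heavy benchmarks earn that warning, here is a hedged sketch (both functions are hypothetical examples, not CodSpeed internals): a benchmark body dominated by I/O system calls will show much noisier wall-clock timings than a pure-computation one, because each syscall crosses into the kernel and is subject to OS scheduling.

```rust
use std::io::Write;
use std::time::Instant;

// Dominated by system calls: every write_all() issues a write(2) syscall.
// Its timing depends heavily on kernel scheduling, hence flaky results.
fn syscall_heavy() {
    let mut sink = std::fs::File::create("/dev/null").expect("open /dev/null");
    for _ in 0..1_000 {
        sink.write_all(b"x").expect("write");
    }
}

// Pure computation: no syscalls in the hot loop, so timings are stable.
fn compute_heavy() -> u64 {
    (0..1_000_000u64).fold(0, |acc, x| acc.wrapping_add(x * x))
}

fn main() {
    let t = Instant::now();
    syscall_heavy();
    println!("syscall-heavy took {:?}", t.elapsed());

    let t = Instant::now();
    let result = compute_heavy();
    println!("compute-heavy took {:?} (result {})", t.elapsed(), result);
}
```

Ticking "Include system calls" makes the first kind of benchmark visible in the flame graph, which is usually the fastest way to understand where a flakiness warning is coming from.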