Noticed the .mjs extension? This is because we’re using the ESM module format.
Saving our file with the .js extension would have worked as well, but we would
have needed to add "type": "module" to our package.json file to instruct
Node.js to use the ESM module format.
Here, a few things are happening:

- We create a simple recursive fibonacci function.
- We create a new Bench instance with CodSpeed support by using the withCodSpeed helper. This step is critical to enable CodSpeed on your benchmarks.
- We add two benchmarks to the suite and launch it, benching our fibonacci function for 10 and 15.
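Putting those steps together, benches/bench.mjs might look like the following. This is a sketch reconstructed from the steps above, not the original listing, so your file may differ slightly:

```javascript
// benches/bench.mjs — sketch of the benchmark file described above
import { withCodSpeed } from "@codspeed/tinybench-plugin";
import { Bench } from "tinybench";

// A simple recursive fibonacci function to benchmark
function fibonacci(n) {
  if (n < 2) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}

// Wrap the Bench instance with the CodSpeed helper so the plugin
// can instrument the suite when running under CodSpeed
const bench = withCodSpeed(new Bench());

// Register the two benchmarks, then run the suite
bench
  .add("fibonacci10", () => fibonacci(10))
  .add("fibonacci15", () => fibonacci(15));

await bench.run();
console.table(bench.table());
```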
```
$ node benches/bench.mjs
┌─────────┬───────────────┬───────────────────┬──────────┐
│ (index) │ Task Name     │ Average Time (ns) │ Margin   │
├─────────┼───────────────┼───────────────────┼──────────┤
│ 0       │ 'fibonacci10' │ 552.4139857896414 │ '±0.18%' │
│ 1       │ 'fibonacci15' │ 5633.276191749634 │ '±0.14%' │
└─────────┴───────────────┴───────────────────┴──────────┘
```
And… congrats 🎉, CodSpeed is installed in your benchmarking suite! When not
run in a CI environment or through the CLI, CodSpeed falls back to the default
tinybench runner. You can now run those benchmarks in your CI to get
consistent performance measurements.
Integrating into a bigger project: multiple benchmark files
Often, you will not write all your benchmarks in a single file. As your project
grows, a single file holding every benchmark quickly becomes difficult to
maintain.
You can find the source code for the following example in the
examples of the codspeed-node repository.
There are multiple examples available, for CJS, ESM, JavaScript, and TypeScript.
For these kinds of situations, we recommend the following approach. Let’s say you
have a file structure like this, in a project with TypeScript:
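For reference, the layout described below can be sketched as follows (file names taken from this guide's example; your project may differ):

```text
├── src
│   ├── fibonacci.ts
│   └── foobarbaz.ts
└── bench
    ├── fibo.bench.ts
    ├── foobarbaz.bench.ts
    └── index.bench.ts
```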
The src directory contains the source code of the project. Here we have two
files, fibonacci.ts and foobarbaz.ts.
The bench directory contains the benchmarks for the project. There is a file
for each source file that defines benchmarks for it.
The bench/index.bench.ts file is the entry point for the benchmarks. It
imports all the other benchmark files and runs them.
bench/fibo.bench.ts
```ts
import { Bench } from "tinybench";
import { iterativeFibonacci } from "../../src/fibonacci";

export function registerFiboBenchmarks(bench: Bench) {
  bench
    .add("test_iterative_fibo_10", () => {
      iterativeFibonacci(10);
    })
    .add("test_iterative_fibo_100", () => {
      iterativeFibonacci(100);
    });
}
```
Here we define a function that takes an instance of Bench as a parameter and
then adds some benchmarks to it. This will allow us to add benchmarks to the
same suite from multiple files.
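The other benchmark file can follow the same pattern. Its contents are not shown in this guide, so the following is a hypothetical sketch that assumes src/foobarbaz.ts exports a foobarbaz function:

```typescript
// bench/foobarbaz.bench.ts — hypothetical sketch; the real file may differ
import { Bench } from "tinybench";
// Assumption: src/foobarbaz.ts exports a `foobarbaz` function
import { foobarbaz } from "../../src/foobarbaz";

// Register this file's benchmarks on the shared Bench instance
export function registerFoobarbazBenchmarks(bench: Bench) {
  bench.add("test_foobarbaz", () => {
    foobarbaz();
  });
}
```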
bench/index.bench.ts
```ts
import { withCodSpeed } from "@codspeed/tinybench-plugin";
import { Bench } from "tinybench";
import { registerFiboBenchmarks } from "./fibo.bench";
import { registerFoobarbazBenchmarks } from "./foobarbaz.bench";

export const bench = withCodSpeed(new Bench());

(async () => {
  registerFiboBenchmarks(bench);
  registerFoobarbazBenchmarks(bench);
  await bench.run();
  console.table(bench.table());
})();
```
Here, all the functions registering benchmarks are executed, collecting the
benchmarks from the different files into the same suite. To run the benchmarks,
execute the entry point file.
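The exact invocation depends on your setup. With a TypeScript runner such as tsx (an assumption; any TypeScript-capable runner like ts-node works as well), it could look like:

```shell
npx tsx bench/index.bench.ts
```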
To generate performance reports, you need to run the benchmarks in your CI. This
allows CodSpeed to automatically run benchmarks and warn you about regressions
during development.
If you want more details on how to configure the CodSpeed action, you can check
out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and
reports the results to CodSpeed on every push to the main branch and every
pull request:
.github/workflows/codspeed.yml
```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

permissions: # optional for public repositories
  contents: read # required for actions/checkout
  id-token: write # required for OIDC authentication with CodSpeed

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      # ...
      # Setup your environment here:
      # - Configure your Python/Rust/Node version
      # - Install your dependencies
      # - Build your benchmarks (if using a compiled language)
      # ...
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: simulation
          run: <Insert your benchmark command here>
```
Execution profiles for async code can sometimes be
unreliable, as profiling tools may lose stack trace information due to the event
loop. If your code is fully sync, consider using tinybench’s runSync as an
entrypoint to improve accuracy.