google_benchmark library,
which is a compatibility layer to run both instrumented and walltime CodSpeed
benchmarks.
Writing benchmarks
CodSpeed integrates with the google_benchmark library. Here is a small example
of how to declare benchmarks. Any existing google_benchmark benchmarks in your
project can also be reused as-is.
main.cpp
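A minimal benchmark file might look like the following sketch (the benchmark name and the measured operation are illustrative):

```cpp
#include <benchmark/benchmark.h>

#include <string>

// Hypothetical benchmark: measure the cost of building a small string.
static void BM_StringConcat(benchmark::State& state) {
  for (auto _ : state) {
    std::string result = std::string("hello, ") + "world";
    // Keep the result alive so the compiler cannot optimize it away.
    benchmark::DoNotOptimize(result);
  }
}
BENCHMARK(BM_StringConcat);

BENCHMARK_MAIN();
```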
Make sure that your benchmarks aren’t optimized away by the compiler by using
benchmark::DoNotOptimize and benchmark::ClobberMemory. Check out the
Google benchmark user guide on preventing optimization for more information.
Building & Running benchmarks
To build and run benchmarks, CodSpeed officially supports the google_benchmark library with both CMake and Bazel.
If you are using another build system, you may find guidelines in the
custom build systems section.
CMake
To use CodSpeed’s google_benchmark integration with CMake, you can declare a
benchmark executable as follows:
CMakeLists.txt
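A sketch of such a CMakeLists.txt, fetching the google_benchmark fork from the codspeed-cpp repository (the CMake version, Git tag, and target names are assumptions; check the releases page for actual versions):

```cmake
cmake_minimum_required(VERSION 3.18)
project(my_benchmarks CXX)

# Debug symbols are needed for meaningful CodSpeed reports
set(CMAKE_BUILD_TYPE RelWithDebInfo)

include(FetchContent)
FetchContent_Declare(
  google_benchmark
  GIT_REPOSITORY https://github.com/CodSpeedHQ/codspeed-cpp.git
  GIT_TAG v2.0.0 # illustrative tag, see the releases page
  SOURCE_SUBDIR google_benchmark
)
FetchContent_MakeAvailable(google_benchmark)

add_executable(benchmarks main.cpp)
target_link_libraries(benchmarks benchmark::benchmark)
```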
Check out the
releases page if you want
to target a specific version of the library.
Building benchmarks
To build the benchmark executable, run:
terminal
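A typical configure-and-build invocation could look like this (directory names are illustrative):

```shell
cmake -S . -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCODSPEED_MODE=simulation
cmake --build build
```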
The CODSPEED_MODE flag
Please note the -DCODSPEED_MODE=simulation flag in the cmake command. This
will enable the CodSpeed CPU simulation mode for the benchmark executable, where
each benchmark is run only once on a simulated CPU.
If you omit the CODSPEED_MODE cmake flag, CodSpeed will not be enabled in the
benchmark executable, and it will run as a regular benchmark.
The CODSPEED_MODE cmake flag can take the following values:
- off: the default when the flag is not provided; disables CodSpeed.
- simulation: benchmarks are run only once on a simulated CPU.
- walltime: used for walltime CodSpeed reports, see the dedicated documentation.
- memory: benchmarks are run once using memory profiling.
- instrumentation: (deprecated) alias of simulation.
Debug symbols
In order to get the most out of CodSpeed reports, debug symbols need to be enabled within your executable. In the example above, this is done by setting CMAKE_BUILD_TYPE to RelWithDebInfo.
Running the benchmarks locally
Simply execute the compiled binary to run the benchmarks.
terminal
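For instance, assuming the executable was built into a build directory:

```shell
./build/benchmarks
```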
Running the benchmarks in your CI
To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to automatically run benchmarks and warn you about regressions during development. Here is an example of a GitHub Actions workflow that runs the benchmarks and reports the results to CodSpeed on every push to the main branch and every
pull request:
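A sketch of such a workflow, assuming the CodSpeedHQ/action GitHub Action and a CODSPEED_TOKEN repository secret (the action version, build flags, and paths are illustrative):

```yaml
name: CodSpeed

on:
  push:
    branches: [main]
  pull_request:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the benchmarks
        run: |
          cmake -S . -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCODSPEED_MODE=simulation
          cmake --build build
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: ./build/benchmarks
          token: ${{ secrets.CODSPEED_TOKEN }}
```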
Running benchmarks in parallel CI jobs
If your benchmarks are taking too long to run under the CodSpeed action, you can run them in parallel to speed up execution. To parallelize your benchmarks, first split them into multiple executables that each run a subset of your benches.
CMakeLists.txt
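A hypothetical split into two executables (file and target names are illustrative):

```cmake
# Each executable runs a subset of the benchmarks, so each one can be
# built and run in its own CI job
add_executable(benchmarks_algo bench_algo.cpp)
target_link_libraries(benchmarks_algo benchmark::benchmark)

add_executable(benchmarks_io bench_io.cpp)
target_link_libraries(benchmarks_io benchmark::benchmark)
```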
To combine measurement modes like simulation and memory, check out the
documentation on running multiple instruments
serially.
Bazel
You can also use CodSpeed’s google_benchmark integration with Bazel.
Building benchmarks
Import the library from the Bazel Central Registry in your MODULE.bazel file:
MODULE.bazel
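A sketch, assuming the module is published under the codspeed_google_benchmark_compat name used by the build flags in this section (the version is illustrative):

```starlark
bazel_dep(name = "codspeed_google_benchmark_compat", version = "2.0.0")
```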
Then declare the benchmark binary in a BUILD.bazel file:
path/to/bench/BUILD.bazel
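For example (the :benchmark target label inside the module is an assumption):

```starlark
cc_binary(
    name = "benchmarks",
    srcs = ["main.cpp"],
    deps = ["@codspeed_google_benchmark_compat//:benchmark"],
)
```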
terminal
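The build invocation could look like this (the target label is illustrative; the flags are explained below):

```shell
bazel build //path/to/bench:benchmarks \
  --@codspeed_google_benchmark_compat//:codspeed_mode=simulation \
  --compilation_mode=dbg \
  --copt=-O2
```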
Build options
As you may have noticed in the example, a few build options are essential for Bazel to make full use of the CodSpeed library.
--@codspeed_google_benchmark_compat//:codspeed_mode=simulation enables the CodSpeed features of the library. This flag can take the following values:
- off: the default when the flag is not provided; disables CodSpeed.
- simulation: benchmarks are run only once on a simulated CPU.
- walltime: used for walltime CodSpeed reports, see the dedicated documentation.
- memory: benchmarks are run once using memory profiling.
- instrumentation: (deprecated) alias of simulation.
--compilation_mode=dbg: enables debug symbols in the compiled binary, used to generate meaningful CodSpeed reports.
--copt=-O2: sets the desired level of compiler optimizations in the benchmarks binary.
Running the benchmarks locally
You can then run your benchmarks by running:
terminal
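For instance, passing the same flags used at build time (the target label is illustrative):

```shell
bazel run //path/to/bench:benchmarks \
  --@codspeed_google_benchmark_compat//:codspeed_mode=simulation \
  --compilation_mode=dbg \
  --copt=-O2
```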
Running the benchmarks in your CI
To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to automatically run benchmarks and warn you about regressions during development. Here is an example of a GitHub Actions workflow that runs the benchmarks and reports the results to CodSpeed on every push to the main branch and every
pull request:
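A sketch of such a workflow, assuming the CodSpeedHQ/action GitHub Action and a CODSPEED_TOKEN repository secret (the action version and target label are illustrative):

```yaml
name: CodSpeed

on:
  push:
    branches: [main]
  pull_request:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the benchmarks
        run: >
          bazel build //path/to/bench:benchmarks
          --@codspeed_google_benchmark_compat//:codspeed_mode=simulation
          --compilation_mode=dbg --copt=-O2
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: >
            bazel run //path/to/bench:benchmarks
            --@codspeed_google_benchmark_compat//:codspeed_mode=simulation
            --compilation_mode=dbg --copt=-O2
          token: ${{ secrets.CODSPEED_TOKEN }}
```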
Separated build and run steps
Note that we separated the build and run steps in the CI workflow. This is important to speed up the CI workflow and to avoid instrumenting the build step.
Custom build systems
If you need full control over your build system, here are guiding steps for using CodSpeed.
Get the sources
Sources are located in the codspeed-cpp repository. You can
either clone the repository, add it as a submodule, or download the sources
as a zip file.
Build the library
Sources of the google_benchmark CodSpeed integration library are located in
the google_benchmark subdirectory.
Make sure the following pre-processor variables are defined when you build the
library
When building the library, the tricky part is making sure the google_benchmark
fork has access to the codspeed-core library.
Additionally, the following pre-processor variables must be defined:
- CODSPEED_ENABLED: if not defined, google_benchmark behaves the same as the upstream library, with no CodSpeed features.
- CODSPEED_SIMULATION: if running in simulation mode. Note: for versions prior to v2.0.0, use CODSPEED_INSTRUMENTATION instead.
- CODSPEED_WALLTIME: if running in walltime mode.
- CODSPEED_ROOT_DIR: absolute path to the root directory of your project. This is used in the report to display file paths relative to your project root.
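As a purely illustrative manual compile line with these variables defined (the include path, compiler flags, and the stringified form of CODSPEED_ROOT_DIR are assumptions):

```shell
g++ -O2 -g -c main.cpp \
  -Ipath/to/codspeed-cpp/google_benchmark/include \
  -DCODSPEED_ENABLED \
  -DCODSPEED_SIMULATION \
  "-DCODSPEED_ROOT_DIR=\"$(pwd)\""
```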
If you run into any issue integrating the google_benchmark library with
your project, please reach out and open an issue on the
codspeed-cpp repository.