Create benchmarks for your C++ codebase using google_benchmark
To use CodSpeed in your C++ codebase, add CodSpeed’s google_benchmark library,
a compatibility layer that runs both instrumented and walltime CodSpeed
benchmarks.
CodSpeed integrates with the google_benchmark library. Here is a small example
of how to declare benchmarks. Any existing benchmarks in your project can be
reused as-is.
main.cpp
#include <benchmark/benchmark.h>
#include <cstring>
#include <string>

// Define the function under test
static void BM_StringCopy(benchmark::State &state) {
  std::string x = "hello";
  // Google benchmark relies on state.begin() and state.end() to run the
  // benchmark and count iterations
  for (auto _ : state) {
    std::string copy(x);
    // Use DoNotOptimize and ClobberMemory to prevent the compiler from
    // optimizing away your benchmark
    // See: https://google.github.io/benchmark/user_guide.html#preventing-optimization
    benchmark::DoNotOptimize(copy);
    benchmark::ClobberMemory();
  }
}
// Register the benchmark to be called by the executable
BENCHMARK(BM_StringCopy);

static void BM_memcpy(benchmark::State &state) {
  char *src = new char[state.range(0)];
  char *dst = new char[state.range(0)];
  memset(src, 'x', state.range(0));
  for (auto _ : state) {
    memcpy(dst, src, state.range(0));
    benchmark::DoNotOptimize(dst);
    benchmark::ClobberMemory();
  }
  delete[] src;
  delete[] dst;
}
BENCHMARK(BM_memcpy)->Range(8, 8 << 10);

// Entrypoint of the benchmark executable
BENCHMARK_MAIN();
Make sure that your benchmarks aren’t optimized away by the compiler by using
benchmark::DoNotOptimize and benchmark::ClobberMemory. Check out the
Google benchmark user guide on preventing optimization
for more information.
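For instance, in the following minimal sketch (BM_Accumulate and its contents are illustrative, not part of the CodSpeed library), the compiler could legally drop the whole computation without DoNotOptimize, since the result is never used:

#include <benchmark/benchmark.h>
#include <numeric>
#include <vector>

static void BM_Accumulate(benchmark::State &state) {
  std::vector<int> values(1024, 1);
  for (auto _ : state) {
    // Without the DoNotOptimize call below, `sum` is dead code and the whole
    // accumulate could be optimized away, yielding a meaningless timing
    int sum = std::accumulate(values.begin(), values.end(), 0);
    benchmark::DoNotOptimize(sum);
  }
}
BENCHMARK(BM_Accumulate);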
To build and run benchmarks, CodSpeed officially supports using the
google_benchmark library with both CMake and Bazel.
If you are using another build system, you may find guidelines in the
custom build systems section below.
To use CodSpeed’s google_benchmark integration with CMake, you can declare a
benchmark executable as follows:
CMakeLists.txt
cmake_minimum_required(VERSION 3.12)
include(FetchContent)

project(my_codspeed_project VERSION 0.0.0 LANGUAGES CXX)

# Enable release mode with debug symbols to display useful profiling data
set(CMAKE_BUILD_TYPE RelWithDebInfo)

set(BENCHMARK_DOWNLOAD_DEPENDENCIES ON)
FetchContent_Declare(
  google_benchmark
  GIT_REPOSITORY https://github.com/CodSpeedHQ/codspeed-cpp # Target the codspeed-cpp repository
  SOURCE_SUBDIR google_benchmark # Make sure to target the google_benchmark subdirectory
  GIT_TAG main # Or choose a specific version or git ref, check the releases page on the repository
)
FetchContent_MakeAvailable(google_benchmark)

# Declare your benchmark executable and its sources here
add_executable(my_benchmark_executable benches/bench.cpp)

# Link your executable against `benchmark::benchmark`, the google_benchmark library target
# Note: the first argument must match the first argument of the `add_executable` call
target_link_libraries(my_benchmark_executable benchmark::benchmark)
Check out the releases page if you want to target a specific version of the
library.
This example is a dedicated CMakeLists.txt file for the benchmark executable.
You can also add an executable target to your existing project’s
CMakeLists.txt. Make sure to link this target against the
benchmark::benchmark library.
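For example, appended to an existing project, the extra target can be as small as this sketch (the target and source names are placeholders):

# Hypothetical benchmark target added to an existing CMakeLists.txt
add_executable(my_app_benches benches/bench.cpp)
target_link_libraries(my_app_benches benchmark::benchmark)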
$ mkdir build && cd build
$ cmake -DCODSPEED_MODE=instrumentation ..
-- The CXX compiler identification is GNU 14.2.1
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- ...
-- Configuring done (8.6s)
-- Generating done (0.1s)
-- Build files have been written to: /home/user/project-benchmark/build
$ make
[  1%] Building CXX object ...
...
[100%] Built target my_benchmark_executable
Please note the -DCODSPEED_MODE=instrumentation flag in the cmake command.
This enables the CodSpeed instrumentation mode for the benchmark executable,
where each benchmark is run only once on a simulated CPU.
You can also use -DCODSPEED_MODE=walltime if you are building for walltime
CodSpeed reports, see the dedicated documentation for more information.
If you omit the CODSPEED_MODE cmake flag, CodSpeed will not be enabled in the
benchmark executable, and it will run as a regular benchmark.
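For reference, here are the three configure variants side by side:

$ cmake -DCODSPEED_MODE=instrumentation ..   # instrumentation build (as above)
$ cmake -DCODSPEED_MODE=walltime ..          # walltime build
$ cmake ..                                   # plain google_benchmark, CodSpeed disabled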
In order to get the most out of CodSpeed reports, debug symbols need to be
enabled within your executable. In the example above, this is done by
setting CMAKE_BUILD_TYPE to RelWithDebInfo.
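If you prefer not to hard-code the build type, the same setting can be passed at configure time instead (standard CMake behavior, not CodSpeed-specific):

$ cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCODSPEED_MODE=instrumentation ..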
To generate performance reports, you need to run the benchmarks in your CI. This
allows CodSpeed to automatically run benchmarks and warn you about regressions
during development.
If you want more details on how to configure the CodSpeed action, you can check
out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and
reports the results to CodSpeed on every push to the main branch and every
pull request:
.github/workflows/codspeed.yml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the benchmark target(s)
        run: |
          mkdir build
          cd build
          cmake -DCODSPEED_MODE=instrumentation ..
          make -j
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: ./build/my_benchmark_executable # Replace with the proper executable path
          token: ${{ secrets.CODSPEED_TOKEN }} # optional for public repos
If your benchmarks are taking too much time to run under the CodSpeed action,
you can run them in parallel to speed up the execution.
To parallelize your benchmarks, first split them into multiple executables that
each run a subset of your benchmarks.
CMakeLists.txt
# Create individual benchmark executables
set(BENCHMARKS first_bench second_bench third_bench)

# Add `bench_name` target with `bench_name.cpp` source for each bench listed above
foreach(benchmark IN LISTS BENCHMARKS)
  add_executable(${benchmark} benches/${benchmark}.cpp)
  target_link_libraries(${benchmark}
    benchmark::benchmark
  )
endforeach()

# Create a custom target to run all benchmarks locally
add_custom_target(run_all_benchmarks
  COMMAND ${CMAKE_COMMAND} -E echo "Running all benchmarks...")

# Run each benchmark right after the run_all_benchmarks target is built
foreach(benchmark IN LISTS BENCHMARKS)
  add_custom_command(
    TARGET run_all_benchmarks POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E echo "Running ${benchmark}..."
    COMMAND $<TARGET_FILE:${benchmark}>
    WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
  )
endforeach()
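Locally, you can still run everything in one go by building the executables and then the aggregate target (assuming a Makefile generator; the target names come from the snippet above):

$ cmake -DCODSPEED_MODE=instrumentation ..
$ make -j                   # build every benchmark executable
$ make run_all_benchmarks   # then run them all sequentially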
Then update your CI workflow to run the benchmarks executable by executable:
jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    strategy:
      matrix:
        target: [first_bench, second_bench, third_bench]
    steps:
      - uses: actions/checkout@v4
      - name: Build the benchmark target
        run: |
          mkdir build
          cd build
          cmake -DCODSPEED_MODE=instrumentation ..
          make -j ${{ matrix.target }}
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: ./build/${{ matrix.target }}
          token: ${{ secrets.CODSPEED_TOKEN }} # optional for public repos
To use CodSpeed’s google_benchmark integration with Bazel, first import the library by adding this to your WORKSPACE file:
WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")http_archive( name = "codspeed_cpp", # Name the codspeed_cpp will be imported as # Target the main branch automatically, or select a specific version urls = ["https://github.com/CodSpeedHQ/codspeed-cpp/archive/refs/heads/main.zip"], strip_prefix = "codspeed-cpp-main",)
Then, define your benchmark target in your package’s BUILD.bazel file:
path/to/bench/BUILD.bazel
cc_binary(
    name = "my_benchmark", # Name of your benchmark target
    srcs = glob(["*.cpp", "*.hpp"]), # Or define sources however you wish
    deps = [
        # codspeed_cpp must match the name the lib was imported as in WORKSPACE
        "@codspeed_cpp//google_benchmark:benchmark",
    ],
)
When building the benchmark target with Bazel, pass the following flags (shown in the example command after this list):
--compilation_mode=dbg: enables debug symbols in the compiled binary, used
to generate meaningful CodSpeed reports.
--copt=-O2: sets the desired level of compiler optimizations in the
benchmarks binary.
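Putting it together, a local build and run could look like the following (the target label matches the BUILD.bazel example above; the codspeed_mode flag is the one used in the CI workflow below):

$ bazel build //path/to/bench:my_benchmark \
    --compilation_mode=dbg --copt=-O2 \
    --@codspeed_cpp//core:codspeed_mode=instrumentation
$ bazel run //path/to/bench:my_benchmark \
    --compilation_mode=dbg --copt=-O2 \
    --@codspeed_cpp//core:codspeed_mode=instrumentation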
Setting default build options
If you do not want to specify these flags every time, you can create a .bazelrc
file at the root of the Bazel workspace to set them by default.
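A minimal .bazelrc along these lines (a sketch covering just the two flags above) could be:

# Apply these flags to every `bazel build` in this workspace by default
build --compilation_mode=dbg
build --copt=-O2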
As with the CMake setup, you need to run the benchmarks in your CI so that
CodSpeed can automatically run them and warn you about regressions during
development (see the Continuous Reporting section for details on configuring
the CodSpeed action). Here is an example of a GitHub Actions workflow that
builds and runs the Bazel benchmark target on every push to the main branch
and every pull request:
.github/workflows/codspeed.yml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Bazel
        uses: bazelbuild/setup-bazelisk@v2
      - name: Build the benchmark target
        run: |
          bazel build //path/to/bench:my_benchmark --@codspeed_cpp//core:codspeed_mode=instrumentation
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: |
            bazel run //path/to/bench:my_benchmark --@codspeed_cpp//core:codspeed_mode=instrumentation
          token: ${{ secrets.CODSPEED_TOKEN }} # optional for public repos
Separated build and run steps
Note that we separated the build and run steps in the CI workflow. This is
important to speed up the workflow and to avoid instrumenting the build step.
If you are using a custom build system, you can integrate the library manually.
Sources are located in the codspeed-cpp repository. You can either clone the
repository, add it as a submodule, or download the sources as a zip file.
Sources of the google_benchmark CodSpeed integration library are located in
the google_benchmark subdirectory.
When building the library, the tricky part is to make sure google_benchmark’s
fork has access to the codspeed-core library. Additionally, the following
pre-processor variables must be defined (an illustrative compile command
follows the list):
CODSPEED_ENABLED: if not defined, google_benchmark behaves the same as the
upstream library, with no CodSpeed features.
CODSPEED_INSTRUMENTATION: define if running in instrumentation mode.
CODSPEED_WALLTIME: define if running in walltime mode.
CODSPEED_ROOT_DIR: absolute path to the root directory of your project. This
is used in the report to display file paths relative to your project root.
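As a rough illustration only (include paths, source names, and the link line are placeholders, not taken from the CodSpeed docs; the define names are the ones listed above), an instrumentation-mode compile could look like:

$ g++ -O2 -g \
    -DCODSPEED_ENABLED \
    -DCODSPEED_INSTRUMENTATION \
    -DCODSPEED_ROOT_DIR="$PWD" \
    -I<path-to-codspeed-cpp>/google_benchmark/include \
    benches/bench.cpp -o my_benchmark_executable \
    <link against the built google_benchmark fork and codspeed-core here>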
If you run into issues integrating CodSpeed’s google_benchmark library with
your project, please reach out and open an issue on the
codspeed-cpp repository.