The memory instrument captures detailed memory usage of your benchmarks, helping you identify and optimize allocations before shipping to production.

What Does the Memory Instrument Measure?

Figure: Memory run of a benchmark, showing peak memory usage, average allocation size, total allocated memory, and allocation count.

  • Peak Memory Usage: The maximum memory consumed at any single point during execution. This determines the minimum RAM requirements for your application and helps prevent out-of-memory errors and expensive swapping on constrained systems.
  • Average Allocation Size: The average size of each heap allocation. Smaller allocations can lead to better cache locality and less memory fragmentation.
  • Total Allocated Memory: The total amount of heap memory allocated throughout your benchmark's execution. Allocating less overall typically means better cache locality and less pressure on the memory allocator, making this a key optimization target for performance-critical code.
  • Allocation Count: The number of individual allocation operations performed during the benchmark. Since each allocation has overhead, high allocation counts can indicate excessive temporary object creation, impacting both performance and memory fragmentation.
  • Memory Usage Over Time: A timeline showing how memory usage evolves throughout benchmark execution. This graph reveals memory patterns such as steady-state behavior, gradual growth, or periodic spikes.
    Figure: Memory usage over time, showing heap allocations, peak memory usage, allocation count, and memory leak detection.
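
To make these metrics concrete, here is a minimal, illustrative Rust sketch (the function names are hypothetical, not part of any CodSpeed API) of two implementations of the same computation. Benchmarking both would typically show a higher allocation count, total allocated memory, and peak memory usage for the first variant, since it allocates an intermediate buffer on every call:

// Allocation-heavy: a fresh Vec is heap-allocated on every call, which
// increases allocation count, total allocated memory, and peak memory usage.
fn sum_of_squares_buffered(values: &[u64]) -> u64 {
    let squares: Vec<u64> = values.iter().map(|v| v * v).collect();
    squares.iter().sum()
}

// Allocation-free: the intermediate buffer is avoided entirely, so this
// variant performs no heap allocations at all.
fn sum_of_squares_streaming(values: &[u64]) -> u64 {
    values.iter().map(|v| v * v).sum()
}

The same reasoning applies in any supported language: temporary collections, string concatenations, and boxed values created inside a benchmark show up directly in these metrics.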

Best Practices

To get the most out of the memory instrument, consider these recommendations:
  • Run benchmarks with realistic workloads - Use production-representative data sizes and patterns to capture actual memory behavior rather than toy examples
  • Focus optimization on hot paths - Prioritize reducing allocations in frequently called code, as allocation count in hot paths can significantly impact both memory and CPU performance
  • Combine with CPU profiling - Memory and CPU metrics together reveal the full performance story; high allocation counts often correlate with CPU overhead
  • Track trends over time - Compare memory metrics across benchmark runs to catch regressions early and validate that optimizations remain effective as code evolves
Just like performance regressions, memory regressions can be caught in CI. If you notice unexpected increases in heap allocations or allocation counts, it’s often a sign that code changes have introduced inefficiencies.

How Does It Work?

CodSpeed builds your benchmarks to run only once while measuring memory. Profiling is done with a custom eBPF program, ensuring stability and minimal overhead (the exact overhead depends on how allocation-heavy the benchmark is). Tracking works by instrumenting either the dynamically loaded allocator libraries or your benchmark executable itself (when a statically linked allocator is used). All allocation-related functions (e.g. malloc, free, …) are tracked in your benchmark and in any spawned sub-processes.
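
As a rough mental model of what gets recorded, the sketch below shows a hypothetical counting allocator in Rust. It is not how CodSpeed is implemented (the actual tracking is done with eBPF, as described above), but it illustrates the kind of events, allocation count and allocated bytes, that the instrument aggregates for every tracked allocation function:

use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical counting allocator, for illustration only.
struct CountingAllocator;

static ALLOCATION_COUNT: AtomicUsize = AtomicUsize::new(0);
static ALLOCATED_BYTES: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Each call to an allocation function is counted, along with its size.
        ALLOCATION_COUNT.fetch_add(1, Ordering::Relaxed);
        ALLOCATED_BYTES.fetch_add(layout.size(), Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // Frees are tracked as well, so live memory (and therefore peak usage)
        // can be derived over time.
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;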

Usage with GitHub Actions

Requirements:
  • CodSpeedHQ/action >= 4
  • A supported benchmark framework (see Language Support below)
To enable memory profiling in your GitHub Actions workflow, use mode: memory in the CodSpeed Action configuration:
jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      # ...
      # Setup your environment here:
      #  - Configure your Python/Rust/Node version
      #  - Install your dependencies
      #  - Build your benchmarks (if using a compiled language)
      # ...
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: memory
          run: <Insert your benchmark command here>
The CodSpeed action will automatically:
  • Instrument your benchmarks to capture memory metrics
  • Run your benchmarks once with memory tracking enabled
  • Upload results to the CodSpeed dashboard
Getting started with your language

Make sure you've already set up benchmarks using one of the supported frameworks. Memory profiling works with your existing benchmark code, no changes required!
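
For example, an existing Rust benchmark written against the criterion-style API (here assuming the codspeed-criterion-compat crate, with an illustrative benchmark name) runs unchanged in memory mode:

use codspeed_criterion_compat::{criterion_group, criterion_main, Criterion};

fn bench_sum_of_squares(c: &mut Criterion) {
    // Illustrative workload: sum the squares of 1,000 integers.
    let values: Vec<u64> = (0..1_000u64).collect();
    c.bench_function("sum_of_squares", |b| {
        b.iter(|| values.iter().map(|v| v * v).sum::<u64>())
    });
}

criterion_group!(benches, bench_sum_of_squares);
criterion_main!(benches);

The only difference is the mode: memory setting in the action configuration; the benchmark source itself stays the same whether you are measuring performance or memory.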

Language Support

Memory profiling is currently available for:

If you want to use it with other languages, please reach out on Discord or email our support.

Next Steps