The walltime instrument allows measuring the walltime of your benchmarks directly in the CI. It leverages bare-metal runners managed and provided by CodSpeed to measure the performance of your benchmarks with low noise and high precision.
Example of a walltime benchmark run

Supported languages and integrations

At the moment, the walltime instrument is supported for a specific set of languages and integrations. If you want to use the walltime instrument with a different language or integration, please reach out on Discord or email our support.

What Does the Walltime Instrument Measure?

The walltime instrument measures the actual elapsed time (also known as “wall clock time”) of your benchmark execution. Unlike CPU simulation which measures simulated CPU cycles, walltime captures the real-world duration including:
  • All code execution: Both user-space code and system calls are included in the measurement, giving you a complete picture of actual runtime performance.
  • I/O operations: Network requests, file system operations, and other I/O bound tasks are fully captured, making this instrument ideal for benchmarks that interact with external systems.
  • Parallelism effects: The benefits of multi-threaded code are accurately measured, since walltime reflects the actual elapsed time rather than CPU time summed across threads.
This makes the walltime instrument particularly valuable when you need to measure performance beyond what the CPU simulation instrument can capture, such as integration tests on API endpoints or workloads that rely on external dependencies.
Multiple benchmark processes

With the walltime instrument, avoid running multiple benchmark processes in parallel, since this can lead to noisy measurements. For this reason, using pytest-xdist or similar tools is not recommended.
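As an illustration, here is a hypothetical workflow step for a Python project using pytest-codspeed that keeps the benchmark run single-process; the test path is an assumption, and the -p no:xdist flag simply disables the pytest-xdist plugin if it happens to be installed:

```yaml
# Hypothetical step: run benchmarks in a single process.
# "-p no:xdist" disables the pytest-xdist plugin if it is installed.
- name: Run the benchmarks
  uses: CodSpeedHQ/action@v4
  with:
    mode: walltime
    run: pytest tests/ --codspeed -p no:xdist
```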

Automated Profiling

When using the walltime instrument, CodSpeed automatically collects profiling data and generates flame graphs for each benchmark. This allows you to quickly identify performance changes and their root causes.

Inspector Metrics

When you hover over a span in the flame graph, the inspector displays the following metrics:

Common metrics:
  • Self time: The measured execution time spent in the function body only, excluding time spent in child function calls.
  • Total time: The measured execution time spent in the function including all its children.
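As a quick illustration of how these two metrics relate, here is a minimal sketch (the span durations are made up):

```python
# Sketch: "total time" of a span includes its children,
# while "self time" excludes them.

def self_time(total_ms: float, child_totals_ms: list[float]) -> float:
    """Self time = total time minus time spent in direct child calls."""
    return total_ms - sum(child_totals_ms)

# Hypothetical span: 10 ms total, with children taking 3 ms and 4 ms.
print(self_time(10.0, [3.0, 4.0]))  # → 3.0 ms spent in the span's own body
```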
Execution events

The walltime instrument also collects hardware events during execution. This happens automatically when events are available. All displayed event counts are cumulative and include events from child function calls.
  • CPU Cycles: The number of CPU cycles elapsed.
  • Instructions: The number of CPU instructions executed.
  • Memory R/W: The number of memory read and write operations performed.
  • Memory Access Pattern: A breakdown of how memory accesses were served:
    • L1 Cache Hits: Memory accesses served from the fastest CPU cache.
    • L2 Cache Hits: Memory accesses served from the second-level cache.
    • Cache Misses: Memory accesses that required fetching from main memory.
    • Memory access distribution: Total bytes read from and written to memory, for each level of cache. It is estimated from the number of events and the average size of each access: a word for a cache access, and a cache line for a cache miss.
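The distribution arithmetic described above can be sketched as follows; the 8-byte word and 64-byte cache line are typical sizes assumed for illustration, not values confirmed by CodSpeed:

```python
# Sketch of the memory access distribution estimate:
# bytes = events * average access size, where a cache hit moves
# roughly one word and a cache miss fetches a full cache line.

WORD_SIZE = 8         # bytes per cache-level access (assumption)
CACHE_LINE_SIZE = 64  # bytes fetched on a cache miss (assumption)

def estimated_bytes(l1_hits: int, l2_hits: int, misses: int) -> dict[str, int]:
    """Estimate bytes served at each memory level from event counts."""
    return {
        "L1": l1_hits * WORD_SIZE,
        "L2": l2_hits * WORD_SIZE,
        "RAM": misses * CACHE_LINE_SIZE,
    }

print(estimated_bytes(1_000, 100, 10))  # → {'L1': 8000, 'L2': 800, 'RAM': 640}
```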
Sampling accuracy

Event counts are collected using hardware performance counter sampling. The deeper you navigate into leaf functions, the more susceptible these counts are to sampling-related inaccuracies. For the most reliable data, focus on higher-level functions in the call stack.

Usage with GitHub Actions

Requirements:
  • CodSpeedHQ/action >= 3.1.0
The CI setup is exactly the same as the one with the CPU simulation but instead of running on a GitHub-hosted runner, you’ll need to request a “codspeed-macro” runner. Macro Runners are bare-metal machines managed by CodSpeed and will provide you with a more stable and precise environment to run your benchmarks. Simply replace the runs-on: ubuntu-latest line with runs-on: codspeed-macro and use the mode: walltime option in the action:
jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: codspeed-macro
    steps:
      - uses: actions/checkout@v5
      # ...
      # Setup your environment here:
      #  - Configure your Python/Rust/Node version
      #  - Install your dependencies
      #  - Build your benchmarks (if using a compiled language)
      # ...
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: walltime
          run: <Insert your benchmark command here>
Your benchmarks will now run on a CodSpeed-managed runner. The action and the benchmark integration will automatically collect walltime data, and the new measurements will appear in the CodSpeed dashboard.
Caching

If you're using caches in your GitHub Actions workflow, make sure your cache keys include runner.arch to avoid cache misses, since CodSpeed Macro runners run on the ARM64 architecture. For example:
- uses: actions/cache@v4
  with:
    path: /home/.cache/pip
    key: pip-${{ runner.arch }}-${{ hashFiles('pyproject.toml') }}

Usage on personal GitHub accounts

At the moment, the macro runners are only available for organizations and not for personal accounts. This is because registering GitHub self-hosted runners on repositories instead of organizations would require the Repository: Administration (Read/Write) permission, which is too broad. To use the macro runners on a repository owned by a personal GitHub account, the only solution is to create a new organization and transfer the repository to that organization.

Usage on public repositories

By default, the macro runners are only available for the private repositories of your organization. If you want to use them on a public repository, you'll need to explicitly allow them from your GitHub organization settings (under Organization Settings > Actions > Runner groups > Default).
