vortex-data / vortex
Performance History
Latest Results
change nullability semantics Signed-off-by: Connor Tsui <connor.tsui20@gmail.com>
ct/tq-pull-out
9 minutes ago
less stuff Signed-off-by: Adam Gutglick <adam@spiraldb.com>
adamg/execution-tests
2 hours ago
fsst: link tests_large doc comment to tracking issue #7833 Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: mprammer <martin@spiraldb.com>
mp/fsst-i32-overflow-regression-test
2 hours ago
clean up Signed-off-by: Connor Tsui <connor.tsui20@gmail.com>
ct/tq-pull-out
2 hours ago
fsst: regression test for i32 offset overflow in fsst_compress

`fsst_compress_iter` (encodings/fsst/src/compress.rs:72) hardcodes `VarBinBuilder::<i32>` for the compressed output, so any input whose cumulative compressed bytes exceed `i32::MAX` panics in `vortex-array/src/arrays/varbin/builder.rs:62` with:

    Other error: Failed to convert sum of N and M to offset of type i32

Hit in practice on a real >4 GiB string column going through `vxio.write`. The bug isn't in the input-conversion path (that path is zero-copy and respects the input offset width), so widening the input to `large_string` (i64 offsets) on the pyarrow side does NOT help; FSST's output builder runs either way.

Add a stress regression test that constructs a `VarBinArray<i64>` with ~2.5 GiB of high-entropy ASCII (FSST cannot compress it below the i32 ceiling) and runs `fsst_compress` end-to-end. The test currently panics with the documented message; it's wrapped in `#[should_panic]` so the test passes today and trips when the underlying bug is fixed, at which point the maintainer drops `#[should_panic]` and the trailing `assert_eq!(compressed.len(), len)` becomes the live assertion.

Gated with `#[test_with::env(CI)]` + `#[test_with::no_env(VORTEX_SKIP_SLOW_TESTS)]` (matching the precedent in vortex-btrblocks/src/schemes/integer.rs:1113) because the test allocates ~5 GiB peak and runs in ~6 s under release.

Verified locally:
- `cargo test -p vortex-fsst fsst_compress_offsets` → ignored, because variable CI not found
- `CI=1 cargo test --release -p vortex-fsst fsst_compress_offsets` → 1 passed (panics as expected, captured by should_panic)
- `CI=1 VORTEX_SKIP_SLOW_TESTS=1 cargo test --release -p vortex-fsst fsst_compress_offsets` → ignored, because variable VORTEX_SKIP_SLOW_TESTS was found
- `cargo +nightly fmt --all` clean
- `cargo clippy -p vortex-fsst --all-targets --all-features` clean

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: mprammer <martin@spiraldb.com>
mp/fsst-i32-overflow-regression-test
2 hours ago
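The failure mode that commit describes can be sketched without the multi-GiB payload. This is a minimal model, not Vortex code: `push_offset` is a hypothetical stand-in for the offset bookkeeping a `VarBinBuilder::<i32>` must do, showing why any cumulative payload past `i32::MAX` fails no matter how wide the input's offsets are.

```rust
// Conceptual model (assumption: simplified offset bookkeeping, not Vortex's API).
// VarBin offsets are cumulative byte positions, so with i32 offsets every
// running total must fit in an i32; once the total payload crosses
// i32::MAX (~2 GiB), conversion fails regardless of the input offset width.
fn push_offset(running_total: u64, next_len: u64) -> Result<i32, String> {
    let sum = running_total + next_len;
    i32::try_from(sum).map_err(|_| {
        // Mirrors the shape of the panic message quoted in the commit.
        format!("Failed to convert sum of {running_total} and {next_len} to offset of type i32")
    })
}

fn main() {
    // Well under the ceiling: fine.
    assert_eq!(push_offset(100, 28).unwrap(), 128);

    // Exactly at the ceiling: still representable.
    assert_eq!(push_offset(i32::MAX as u64 - 1, 1).unwrap(), i32::MAX);

    // One byte past the ceiling: the conversion the builder performs fails;
    // this is the condition the regression test provokes with real data.
    let over = push_offset(i32::MAX as u64, 1);
    assert!(over.is_err());
    println!("overflow detected: {}", over.unwrap_err());
}
```

Widening the *output* builder to i64 offsets (as the input path already supports) would make `push_offset`'s ceiling `i64::MAX` and remove the failure.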
clean up Signed-off-by: Connor Tsui <connor.tsui20@gmail.com>
ct/tq-pull-out
2 hours ago
bench[gpu]: CUDA device memory pool benchmarks (#7831)

Adds benchmarks that compare pooled vs. unpooled memory allocation. The main insight is that a pool's memory release threshold can be adjusted via the `CU_MEMPOOL_ATTR_RELEASE_THRESHOLD` attribute.

```
cuda/core_primitives/device_alloc_reuse/default_pool/1GiB
                        time:   [31.125 ms 31.208 ms 31.292 ms]
                        change: [-5.1888% -2.7137% -0.3753%] (p = 0.04 < 0.05)

cuda/core_primitives/device_alloc_reuse/default_pool_75pct_threshold/1GiB
                        time:   [1.2864 µs 1.3728 µs 1.4528 µs]
                        change: [-7.2072% +0.4684% +8.9330%] (p = 0.91 > 0.05)
```

Signed-off-by: Alexander Droste <alexander.droste@protonmail.com>
develop
3 hours ago
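The threshold effect measured in that benchmark can be illustrated with a toy model. This is a conceptual sketch in plain Rust, not the CUDA API: the `Pool` type and its fields are invented for illustration, mimicking what `CU_MEMPOOL_ATTR_RELEASE_THRESHOLD` controls, namely how many freed bytes a device memory pool may retain for reuse instead of returning them to the driver.

```rust
// Toy model of a memory pool with a release threshold (NOT the CUDA API).
// With threshold 0 (the default), every free releases memory, so every
// alloc pays the slow path (the ~31 ms case above). A generous threshold
// keeps freed blocks cached, so same-size re-allocation is near-free
// (the ~1.4 µs case).
struct Pool {
    release_threshold: usize, // bytes the pool may hold onto after frees
    cached: Vec<Vec<u8>>,     // freed blocks retained for reuse
}

impl Pool {
    fn new(release_threshold: usize) -> Self {
        Self { release_threshold, cached: Vec::new() }
    }

    fn cached_bytes(&self) -> usize {
        self.cached.iter().map(|b| b.len()).sum()
    }

    /// Allocate `size` bytes, reusing a cached block when one fits.
    fn alloc(&mut self, size: usize) -> Vec<u8> {
        if let Some(i) = self.cached.iter().position(|b| b.len() == size) {
            self.cached.swap_remove(i) // fast path: reuse without allocating
        } else {
            vec![0u8; size] // slow path: fresh allocation (cuMemAlloc analogue)
        }
    }

    /// Return a block to the pool, then trim down to the release threshold.
    fn free(&mut self, block: Vec<u8>) {
        self.cached.push(block);
        while self.cached_bytes() > self.release_threshold {
            self.cached.pop(); // "returned to the driver"
        }
    }
}

fn main() {
    // Threshold 0: freed memory is released immediately.
    let mut no_cache = Pool::new(0);
    let b = no_cache.alloc(1024);
    no_cache.free(b);
    assert_eq!(no_cache.cached_bytes(), 0);

    // Large threshold: the freed block stays cached and is reused.
    let mut cached = Pool::new(1 << 20);
    let b = cached.alloc(1024);
    cached.free(b);
    assert_eq!(cached.cached_bytes(), 1024);
    let _reused = cached.alloc(1024); // served from cache
    assert_eq!(cached.cached_bytes(), 0);
}
```

In the real driver API the knob is set per pool (e.g. on the device's default pool) rather than per allocation, which is why the benchmark compares whole-pool configurations.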
bench[gpu]: CUDA device memory pool benchmarks Signed-off-by: Alexander Droste <alexander.droste@protonmail.com>
ad/buffer-pool-benchmarks
3 hours ago
Latest Branches
CodSpeed Performance Gauge
0%
TurboQuant again!
#7829
14 minutes ago
c3bd48c
ct/tq-pull-out
+59%
Execution and optimization tracing harness for testing
#7814
2 hours ago
1538135
adamg/execution-tests
0%
fsst: regression test for i32 offset overflow in fsst_compress
#7832
2 hours ago
0cfb69f
mp/fsst-i32-overflow-regression-test
© 2026 CodSpeed Technology