weiji14/cog3pio
Performance History
Latest Results
:fire: Remove double call workaround in test_cudacogreader_to_dlpack
By switching the tokio runtime from `new_current_thread` to `new_multi_thread`. The reason isn't entirely clear, but something in the `path_to_stream` function must have been the issue: local files never needed the double CudaCogReader call hack, so the problem was specific to remote HTTP files. ObjectStore wasn't the issue, but tokio somehow was!
dlpack_to_cupy
4 days ago
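A rough consumer-side sketch of what the fixed test exercises; the `cog3pio` import path and the URL are assumptions, not taken from the run log:

```python
import cupy as cp
from cog3pio import CudaCogReader  # import path is an assumption

# Remote HTTP COGs previously needed CudaCogReader to be invoked twice
# before decoding succeeded; with the multi-threaded tokio runtime a
# single call now works for local and remote files alike.
reader = CudaCogReader("https://example.com/remote_cog.tif")  # hypothetical URL
array = cp.from_dlpack(reader)  # zero-copy handoff of the decoded GPU buffer
```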
:sparkles: Support per-thread default stream in __dlpack__ method
Refactor PyCudaCogReader so that the CUDA stream is instantiated in `__dlpack__` instead of the `new` method, enabling TIFF decoding on the per-thread default stream instead of only the legacy default stream. The `__dlpack_device__` method still returns a hardcoded GPU:0 device int, but this is now linked to the `PyCudaCogReader.device` attribute.
dlpack_to_cupy
4 days ago
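A minimal sketch of the protocol shape this commit targets, following the array API standard; everything beyond the names mentioned in the message (method bodies, type hints) is an assumption:

```python
class PyCudaCogReader:
    def __dlpack_device__(self) -> tuple[int, int]:
        # (device_type, device_id): 2 is kDLCUDA in the DLPack spec.
        # self.device is currently always GPU:0, but the return value
        # is now tied to that attribute rather than a bare constant.
        return (2, self.device)

    def __dlpack__(self, *, stream=None):
        # The CUDA stream is instantiated here, at export time, so
        # decoding can target the stream the consumer negotiates rather
        # than one fixed when `new` constructed the reader.
        ...
```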
:recycle: Refactor to pass in per-thread default stream to dlpack method (#67)
* :recycle: Refactor to pass in CudaStream in dlpack method instead of new: Defers GPU device memory allocation to when the `dlpack()` method is called, rather than when the CudaCogReader struct is instantiated. Partially reverts 3bba2c05004d607cf8db919f7d562a2cdb209add in #27. This aligns better with the [__dlpack__](https://data-apis.org/array-api/2024.12/API_specification/generated/array_api.array.__dlpack__.html) standard, where the stream is passed in at that point. The main aha moment was realizing that the DLPack tensor can be created first, with tensor.data_ptr() then used for the nvtiffDecodeImage call, thereby avoiding some nasty upgrade_device_ptr calls and casts.
* :thread: Use per-thread default stream instead of legacy default stream. Xref https://docs.nvidia.com/cuda/cuda-runtime-api/stream-sync-behavior.html
main
5 days ago
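For context, the stream integers come from the array API standard: on CUDA, `stream=1` means the legacy default stream, `stream=2` the per-thread default stream, and larger values are raw `cudaStream_t` handles. A hedged usage sketch, with a hypothetical constructor and path:

```python
reader = PyCudaCogReader("image.tif")  # hypothetical constructor/path
capsule = reader.__dlpack__(stream=2)  # decode on the per-thread default stream
# Consumers such as cupy.from_dlpack() run this negotiation automatically.
```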
:thread: Use per-thread default stream instead of legacy default stream Xref https://docs.nvidia.com/cuda/cuda-runtime-api/stream-sync-behavior.html
refactor/stream_in_dlpack
5 days ago
:recycle: Refactor to pass in CudaStream in dlpack method instead of new
Defers GPU device memory allocation to when the `dlpack()` method is called, rather than when the CudaCogReader struct is instantiated. Partially reverts 3bba2c05004d607cf8db919f7d562a2cdb209add in #27. This aligns better with the [__dlpack__](https://data-apis.org/array-api/2024.12/API_specification/generated/array_api.array.__dlpack__.html) standard, where the stream is passed in at that point. The main aha moment was realizing that the DLPack tensor can be created first, with tensor.data_ptr() then used for the nvtiffDecodeImage call, thereby avoiding some nasty upgrade_device_ptr calls and casts.
refactor/stream_in_dlpack
5 days ago
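The "create the tensor first" idea, mimicked here with CuPy rather than the crate's actual Rust internals (the shape and the decode call are placeholders): allocate the output buffer up front, then hand its raw device pointer to the decoder, so no pointer upgrade or cast is needed afterwards.

```python
import cupy as cp

bands, height, width = 3, 1024, 1024                    # hypothetical COG shape
out = cp.empty((bands, height, width), dtype=cp.uint8)  # device allocation up front
device_ptr = out.data.ptr                               # raw pointer for the decoder
# nvtiff_decode_image(..., device_ptr, ...)             # writes in place (sketch only)
```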
:arrow_up: SPEC 0: Bump min version to Python 3.13, xarray 2024.10.0 (#66)
* :arrow_up: SPEC 0: Bump minimum supported version to Python 3.13: Following the [SPEC 0](https://scientific-python.org/specs/spec-0000/) policy, under which Python 3.12 support should be dropped in Q4 2026. That is still 9 months away, but being proactive here helps enable free-threading builds.
* :arrow_up: SPEC 0: Bump minimum supported version to xarray>=2024.10.0: Following SPEC 0 policy, bumps the minimum supported xarray version from 2023.12.0 to 2024.10.0 in the pyproject.toml file.
main
5 days ago
:arrow_up: SPEC 0: Bump minimum supported version to xarray>=2024.10.0
Following SPEC 0 policy, bumps the minimum supported xarray version from 2023.12.0 to 2024.10.0 in the pyproject.toml file.
spec0/python-3.13
5 days ago
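A sketch of the bumped constraints as they might appear in pyproject.toml; the table layout and any other dependencies are assumptions:

```toml
[project]
requires-python = ">=3.13"
dependencies = [
    "xarray>=2024.10.0",
]
```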
Active Branches
:sparkles: Python bindings for CudaCogReader (#58), last run 4 days ago
CodSpeed Performance Gauge: -24%