Commits
reduce cost of large variant matrix (#5392)
* discard unused variants before copying metadata
when the variant matrix is large and mostly unused (as in conda-forge),
`input_variants` may contain several thousand entries
while only a few are actually used.
This makes `get_loop_vars` and `metadata.copy()` very expensive.
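
A minimal sketch of the idea, not conda-build's actual code: project each variant onto the variables the recipe really uses and drop duplicates before any copying happens. `discard_unused_variants` and its arguments are hypothetical stand-ins for `input_variants` and the used-variable set.

```python
def discard_unused_variants(variants, used_vars):
    """Keep only variants that differ in the variables the recipe uses."""
    seen = set()
    kept = []
    for variant in variants:
        # Project the variant dict onto the used variables only.
        key = tuple(sorted((k, str(v)) for k, v in variant.items() if k in used_vars))
        if key not in seen:
            seen.add(key)
            kept.append(variant)
    return kept

variants = [
    {"python": "3.10", "zlib": "1.2"},
    {"python": "3.11", "zlib": "1.2"},
    {"python": "3.10", "zlib": "1.3"},  # duplicate once zlib is unused
]
print(discard_unused_variants(variants, {"python"}))
# -> only the python 3.10 and python 3.11 variants survive
```
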
* try reducing with all used vars instead of loop vars
this should prune the variant list less aggressively
* perf: copy distributed variants list after subsetting
vastly reduces the number of copies computed for large variant matrices
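
Roughly what the reordering looks like, with hypothetical names (`distribute_variants` here is a simplified stand-in, not the real function body): subset and deduplicate first, then pay the deep-copy cost only for the survivors.

```python
import copy

def distribute_variants(metadata, variants, used_vars):
    """Copy metadata once per *surviving* variant, not once per raw entry."""
    seen, unique = set(), []
    for v in variants:
        # Deduplicate on the used variables before any copying.
        key = tuple(sorted((k, str(v[k])) for k in used_vars if k in v))
        if key not in seen:
            seen.add(key)
            unique.append(v)
    # The expensive deep copies now run over a handful of variants
    # instead of the full matrix.
    return [(copy.deepcopy(metadata), v) for v in unique]
```
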
* perf: pass used_vars subset to get_loop_vars
rather than computing all loop vars and then intersecting,
only consider relevant keys when computing loop vars
reduces get_used_loop_vars from O(n_vars * n_variants) to O(n_used_vars * n_variants)
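
A simplified sketch of that change; the `subset` parameter is an assumption about the shape of the fix, not the real signature of conda-build's `get_loop_vars`:

```python
def get_loop_vars(variants, subset=None):
    """Return the keys whose values vary across the variant list.

    When `subset` is given, only those keys are scanned, so the cost is
    O(len(subset) * len(variants)) instead of touching every variable.
    """
    keys = subset if subset is not None else {k for v in variants for k in v}
    return {k for k in keys if len({str(v.get(k)) for v in variants}) > 1}

variants = [{"python": "3.10", "zlib": "1.2"},
            {"python": "3.11", "zlib": "1.2"}]
print(get_loop_vars(variants, subset={"python"}))  # {'python'}
```
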
* remove redundant deepcopy of config.variant
config.copy already copies this, no need to do it twice in metadata.copy
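
Illustrative only, with stand-in classes: the duplicated-work pattern this commit removes, assuming (as the message says) that `Config.copy` already deep-copies `variant`.

```python
import copy

class Config:
    def __init__(self, variant):
        self.variant = variant

    def copy(self):
        # Config.copy already deep-copies the variant dict...
        return Config(copy.deepcopy(self.variant))

class MetaData:
    def __init__(self, config):
        self.config = config

    def copy(self):
        new = MetaData(self.config.copy())
        # ...so a second deepcopy here (the kind of line the commit
        # removes) was pure duplicated work:
        # new.config.variant = copy.deepcopy(self.config.variant)
        return new
```
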
* add config.copy_variants method
to avoid calling pickle in too many places
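
A guess at the helper's shape rather than its real body: one method owning the fast pickle round-trip copy, so call sites stop spelling it out themselves.

```python
import pickle

class Config:
    def __init__(self, variant=None, variants=None):
        self.variant = variant or {}
        self.variants = variants or []

    def copy_variants(self):
        """Return deep copies of the variant data from one place.

        Centralizing the pickle round-trip here means callers no longer
        write pickle.loads(pickle.dumps(...)) all over the codebase.
        """
        return pickle.loads(pickle.dumps((self.variant, self.variants)))
```
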
* Update news/5392-variant-copy
* Add benchmark test for `render_recipe` (#5490)
---------
Co-authored-by: Matthew R. Becker <beckermr@users.noreply.github.com>
Co-authored-by: Bianca Henderson <beeankha@gmail.com>
Co-authored-by: Ken Odegard <kodegard@anaconda.com>

Changelog 24.9.0 (#5484)
* Update .authors.yml
* Update news
* Updated authorship for 24.9.0
* Updated CHANGELOG for 24.9.0
---------
Co-authored-by: Ken Odegard <kodegard@anaconda.com>
Co-authored-by: Katherine Kinnaman <kkinnaman@anaconda.com>

Require `setuptools <71.1.0` (#5496)