perf(astro): group chunks on emit
When we render to an async iterable, rendering splits the given
template into chunks and pushes them onto a queue, while the returned
iterator reads from that same queue in parallel.
Rendering performs the following steps:
- Convert the rendered chunk into a byte array if it isn't one already
- Add the byte array to the queue
- Resolve if nothing left in the queue
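The render-side steps above can be sketched roughly like this (a minimal illustration with hypothetical names, not Astro's actual internals): note that every string chunk triggers its own `encode` call.

```typescript
// Illustrative sketch of the pre-PR enqueue path. `queue` and `enqueue`
// are hypothetical names; the point is the per-chunk encode.
const encoder = new TextEncoder();
const queue: Uint8Array[] = [];

function enqueue(chunk: string | Uint8Array): void {
  // Convert the rendered chunk into a byte array if it isn't one already.
  const bytes = chunk instanceof Uint8Array ? chunk : encoder.encode(chunk);
  // Add the byte array to the queue.
  queue.push(bytes);
}
```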
The iterator then reads from this same queue and does the following:
- Take the entire contents of the queue right now
- Concat them into one array
- Yield the concatenated array
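The iterator's drain step can be sketched as follows (again an illustrative function name, assuming every queued item is already a `Uint8Array`):

```typescript
// Take the entire contents of the queue right now, concatenate the byte
// arrays into one, and return the merged array for yielding.
function drain(queue: Uint8Array[]): Uint8Array {
  const parts = queue.splice(0, queue.length); // empties the queue
  const total = parts.reduce((n, p) => n + p.length, 0);
  const merged = new Uint8Array(total);
  let offset = 0;
  for (const part of parts) {
    merged.set(part, offset);
    offset += part.length;
  }
  return merged;
}
```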
The bottleneck in this process is `Convert the rendered chunk into a
byte array`.
Basically, if a chunk isn't already a `Uint8Array`, we pass the string
representation through a `TextEncoder` to turn it into one.
This means for 10,000 _string_ chunks, we call `encode` 10,000 times.
**It turns out `TextEncoder#encode` is costly.**
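What makes batching safe is that UTF-8 encoding distributes over concatenation: encoding many small strings and concatenating the results is byte-for-byte identical to encoding the joined string once. A quick illustrative check (not Astro code):

```typescript
const enc = new TextEncoder();
const chunks = Array.from({ length: 10_000 }, (_, i) => `<li>${i}</li>`);

// Per-chunk: 10,000 encode calls, then concatenate the results.
const perChunk = chunks.map((c) => enc.encode(c));
const totalLen = perChunk.reduce((n, p) => n + p.length, 0);
const a = new Uint8Array(totalLen);
let off = 0;
for (const p of perChunk) {
  a.set(p, off);
  off += p.length;
}

// Batched: a single encode call on the joined string.
const b = enc.encode(chunks.join(""));
```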
This PR reworks this process to the following:
Rendering:
- Convert the rendered chunk into a **string or** byte array if it isn't one
already
- Add the **string or** byte array to the queue
- Resolve if nothing left in the queue
Iterator:
- Take the entire contents of the queue right now
- **Merge consecutive strings into one string, and convert that into a
byte array**
- Concat them into one array (**now the set of arrays consists only of
byte arrays again**)
- Yield the concatenated array
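The reworked drain step can be sketched like this (hypothetical names, assuming a queue that now holds both strings and byte arrays): runs of consecutive strings are joined and encoded with a single `TextEncoder#encode` call before concatenation.

```typescript
const textEncoder = new TextEncoder();

function drainMixed(queue: (string | Uint8Array)[]): Uint8Array {
  const taken = queue.splice(0, queue.length); // take the whole queue
  const byteParts: Uint8Array[] = [];
  let pendingStrings: string[] = [];

  const flushStrings = () => {
    if (pendingStrings.length > 0) {
      // One encode call for an entire run of consecutive strings.
      byteParts.push(textEncoder.encode(pendingStrings.join("")));
      pendingStrings = [];
    }
  };

  for (const chunk of taken) {
    if (typeof chunk === "string") {
      pendingStrings.push(chunk);
    } else {
      flushStrings();
      byteParts.push(chunk);
    }
  }
  flushStrings();

  // Every part is a Uint8Array again; concatenate as before.
  const total = byteParts.reduce((n, p) => n + p.length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const p of byteParts) {
    out.set(p, offset);
    offset += p.length;
  }
  return out;
}
```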
This means we call `TextEncoder#encode` on larger strings and fewer
times, which turns out to be a lot more performant.
Co-authored-by: Florian Lefebvre <contact@florian-lefebvre.dev>