Serving large files — tarballs, ISOs, firmware images, dataset archives — has always required careful attention to caching, range requests, and connection reliability. HTTP/3 changes the transport layer under all of this, and some of those changes directly affect how large downloads behave in practice.
This page covers what actually changes when your download infrastructure moves to QUIC, where the improvements are real, where the edge cases live, and how to configure caching and range-request support correctly for HTTP/3 delivery.
What HTTP/3 changes for large transfers
Stream-level recovery
Under HTTP/2 over TCP, a single dropped packet stalls delivery on the entire connection: TCP delivers bytes in order, so every multiplexed stream waits for the retransmission (head-of-line blocking). For large downloads sharing a connection with other requests, a loss event on one stream delays everything.
HTTP/3 over QUIC handles loss recovery per stream. A dropped packet on one stream does not block other streams on the same connection. For a page loading assets alongside a large download, this is a measurable improvement.
For a dedicated single-stream large download the difference is smaller, since that stream still needs the retransmission, but QUIC's loss detection and recovery (RFC 9002, which draws on RACK-TLP-style mechanisms) generally reacts to loss faster than TCP's.
Connection migration
QUIC connections survive network changes. If a user starts downloading a 2 GB archive on Wi-Fi, walks out of range, and their device switches to mobile data, the QUIC connection continues without restarting. Under TCP, the connection drops and the client must reconnect and issue a new range request to resume.
This matters most for mobile users downloading large files, and for any scenario where network transitions are common.
0-RTT resumption
QUIC supports 0-RTT connection resumption for repeat connections. For download services where users come back frequently (package repositories, update servers), the reduced handshake latency is noticeable, especially on high-latency links.
Caution with 0-RTT and large downloads: 0-RTT data is replayable. GET requests for static files are safe (idempotent), but be careful if your download endpoint has side effects (counting downloads, rate limiting). Replayed 0-RTT requests could double-count or bypass rate limits.
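One mitigation can be sketched as follows, assuming your QUIC terminator forwards the Early-Data: 1 header from RFC 8470 to the application: answer 425 Too Early for side-effecting requests that arrive as replayable early data, which makes the client retry after the full handshake. The endpoint paths here are hypothetical.

```python
# Sketch: reject side-effecting requests that arrived as replayable 0-RTT
# early data. Assumes the QUIC terminator forwards the Early-Data: 1
# header (RFC 8470); the paths below are illustrative, not a real API.

SIDE_EFFECT_PATHS = {"/download/count", "/api/claim-license"}  # hypothetical

def check_early_data(path, headers):
    """Return 425 (Too Early) if this request must not run in 0-RTT,
    else None to let normal handling proceed."""
    if headers.get("Early-Data") == "1" and path in SIDE_EFFECT_PATHS:
        # The client retries after the handshake completes (1-RTT),
        # at which point the request is no longer replayable.
        return 425
    return None
```

Plain GETs for static archives pass through untouched, so 0-RTT still saves the round trip where it is safe.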
Range requests under HTTP/3
Range requests (Range: bytes=X-Y) work identically at the HTTP semantics level. Your Accept-Ranges: bytes header, Content-Range responses, and 206 Partial Content status codes are unchanged.
What changes at the transport level:
Resumable downloads
Range-based resume after a connection drop works the same way it always has: the client sends a new request with Range: bytes=<first-missing-byte>-, where the offset equals the number of bytes already received. But under QUIC:
- If the QUIC connection is still alive (migration scenario), no range resume is needed — the stream continues
- If the connection is truly lost, the client reconnects and resumes with a range request as before
- QUIC's faster connection establishment means the resume happens sooner than under TCP
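The clean-reconnect case can be sketched as a small helper that derives the Range header from whatever is already on disk (the file-path handling is illustrative):

```python
# Sketch: build the Range header for resuming a download from the bytes
# already written to disk. Path handling is illustrative.
import os

def resume_range_header(partial_path):
    """Headers for resuming a download after a dropped connection."""
    offset = os.path.getsize(partial_path) if os.path.exists(partial_path) else 0
    if offset == 0:
        return {}                         # nothing on disk yet: plain GET
    return {"Range": f"bytes={offset}-"}  # server should answer 206
```

A careful client then checks that the response is 206 with a Content-Range matching the offset; a 200 means the server ignored the range, and the partial file should be discarded rather than appended to.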
Multi-range requests
Multi-range requests (Range: bytes=0-999, 2000-2999) return multipart/byteranges responses. These work fine over HTTP/3, but they are rarely used in practice for large downloads. Most download managers use sequential single-range requests instead.
CDN range-request behaviour
If your CDN sits in front of your origin:
- CDN-to-client: HTTP/3 with range requests works if the CDN supports it (most do)
- CDN-to-origin: The CDN typically speaks HTTP/2 or HTTP/1.1 to your origin. Range requests to origin are converted as needed.
- Cache slicing: Some CDNs (Cloudflare, Akamai) slice large files into chunks for caching. Verify your CDN's slicing behaviour works correctly with your archive sizes — misconfigured slice sizes can cause incomplete downloads or cache fragmentation.
Caching headers for large downloads
Cache configuration is protocol-independent — the same headers work for HTTP/1.1, HTTP/2, and HTTP/3. But getting them right matters more when you are serving large files at scale.
Recommended headers for versioned archives
For files that are immutable once published (release tarballs, versioned packages):
Cache-Control: public, max-age=31536000, immutable
ETag: "sha256-<hash>"
Content-Type: application/gzip
Accept-Ranges: bytes
Content-Length: 157286400
The immutable directive tells the browser the response will not change while it is fresh, so it skips conditional revalidation even on reload, eliminating If-None-Match round trips for files that will never change.
Recommended headers for mutable downloads
For files that might be updated at the same URL (latest-release links, nightly builds):
Cache-Control: public, max-age=3600, must-revalidate
ETag: "sha256-<hash>"
Last-Modified: Sat, 08 Mar 2026 12:00:00 GMT
Accept-Ranges: bytes
Content-Length: 157286400
must-revalidate ensures stale cache entries are checked before serving. Combined with a strong ETag, this gives you correct content without full re-downloads when the file hasn't changed.
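A strong, content-addressed ETag in the "sha256-<hash>" shape used above can be computed by streaming the file, so multi-gigabyte archives never have to fit in memory. A minimal sketch (the 1 MiB chunk size is an arbitrary choice):

```python
# Sketch: compute a strong, content-hash-based ETag by streaming the
# file in chunks, suitable for large archives.
import hashlib

def strong_etag(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return f'"sha256-{h.hexdigest()}"'  # quoted form => strong validator
```

Computing this once at publish time and caching it alongside the file avoids rehashing on every conditional request.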
Content-Length and range requests
Always include Content-Length for large files. Without it:
- Range requests may not work correctly on some clients
- Download managers cannot show progress or estimate completion
- CDNs may not cache the response correctly
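Missing-header problems like these are easy to catch automatically. A minimal sketch that lints the headers returned by a HEAD request against your download endpoint (the messages and dict-based interface are illustrative):

```python
# Sketch: lint large-file response headers for the common problems
# discussed here. Pass in the headers from a HEAD request.

def lint_download_headers(headers):
    problems = []
    if headers.get("Accept-Ranges") != "bytes":
        problems.append("missing Accept-Ranges: bytes (downloads not resumable)")
    if "Content-Length" not in headers:
        problems.append("missing Content-Length (no progress, flaky caching)")
    if headers.get("ETag", "").startswith("W/"):
        problems.append("weak ETag (cannot be used with If-Range)")
    return problems
```

Running this in CI against a staging deployment catches regressions before a misconfigured header reaches users mid-download.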
Common mistakes with large downloads
Missing Accept-Ranges: bytes header. Without this, clients don't know they can issue range requests. Download managers will start from zero on every retry.
Using no-cache when you mean must-revalidate. no-cache means "always revalidate before using." must-revalidate means "revalidate only when stale." For large files with a reasonable max-age, must-revalidate is almost always what you want.
Weak ETags on large files. Use strong ETags (content-hash-based) for large files. Weak ETags (W/"...") cannot be used as If-Range validators per the HTTP specification (RFC 9110), so conditional range resumes against them fail.
Ignoring QUIC's UDP-based nature for very large transfers. Some network paths have lower UDP throughput caps than TCP (certain ISPs, corporate proxies). If you see consistently lower throughput for large downloads over HTTP/3 compared to HTTP/2, check for UDP throttling on the network path.
Not testing range resume after connection migration. QUIC connection migration is great, but test what happens when migration fails and the client has to do a clean reconnect and range resume. Ensure your server returns correct Content-Range responses for all valid byte ranges.
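The piece worth unit-testing here is the range parsing itself. A minimal sketch of single-range parsing and Content-Range construction, covering the common bytes=X-Y, bytes=X-, and suffix bytes=-N forms (multi-range requests and non-byte units are out of scope):

```python
# Sketch: parse a single-range Range header and build the 206 response
# fields. Returns None for unsatisfiable ranges (caller sends 416).
import re

def parse_range(range_header, file_size):
    """Return (start, end, content_range) or None."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", range_header.strip())
    if not m or (m.group(1) == "" and m.group(2) == ""):
        return None
    if m.group(1) == "":                     # suffix form: last N bytes
        length = int(m.group(2))
        if length == 0:
            return None
        start = max(file_size - length, 0)
        end = file_size - 1
    else:
        start = int(m.group(1))
        end = int(m.group(2)) if m.group(2) else file_size - 1
        end = min(end, file_size - 1)        # clamp open-ended / long ranges
    if start > end or start >= file_size:
        return None
    return start, end, f"bytes {start}-{end}/{file_size}"
```

Feeding this boundary cases (offset at exactly file_size, suffix larger than the file, empty ranges) is a cheap way to satisfy the "correct Content-Range for all valid byte ranges" requirement above.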
Verification
- Test range request support: curl -r 0-1023 -o /dev/null -w "%{http_code}" https://example.com/archive.tar.gz should return 206
- Test HTTP/3 delivery: curl --http3 -I https://example.com/archive.tar.gz and verify HTTP/3 200 with correct Content-Length
- Test resume: start a download, interrupt it, resume with curl -C - -O https://example.com/archive.tar.gz
- Check CDN cache status: verify the CDN's cache-status header shows HIT for repeated requests to the same archive
- Measure throughput: compare download speeds over h2 vs h3 for a large file from a representative network
When this doesn't apply
- Tiny files (< 1 MB): Range requests and connection migration add no practical value
- Streaming content (video/audio): Use HLS/DASH with their own chunking mechanisms rather than raw range requests
- Local network transfers: QUIC overhead is not justified on low-latency LAN links
Related reading on wplus.net
- HTTP/3 Hosting Checklist for 2026 — QUIC rollout and monitoring fundamentals
- Hosting hub — hosting architecture overview
- Infrastructure hub — CDN and serving configuration