The Promise HTTP/2 Makes
HTTP/2 was designed to solve a well-known problem in HTTP/1.1: inefficient use of connections.
Under HTTP/1.1:
- Browsers open multiple TCP connections per origin (typically six)
- Requests block each other on a single connection
- Each new connection adds handshake latency (TCP, and usually TLS)
HTTP/2 addresses this by multiplexing multiple streams over one TCP connection. In theory, this eliminates head-of-line blocking at the application layer and improves resource utilization.
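To make the model concrete, here is a minimal sketch of multiplexed requests using Python's httpx library, which supports HTTP/2 when installed with the http2 extra (pip install "httpx[http2]"). The URLs are placeholders; any HTTP/2-capable origin works.

```python
import asyncio
import httpx

async def fetch_all(urls):
    # One AsyncClient reuses a single connection per origin; with HTTP/2
    # enabled, concurrent requests become streams on that one connection.
    async with httpx.AsyncClient(http2=True) as client:
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.url, len(r.content))

# Placeholder asset URLs on a single origin:
asyncio.run(fetch_all([f"https://example.com/asset{i}.js" for i in range(8)]))
```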
The key assumption is that the underlying transport behaves well.
TCP Is Still the Foundation
Despite the major version bump, HTTP/2 does not replace TCP. It runs entirely on top of it.
This matters because TCP has its own rules:
- Ordered delivery
- Congestion control
- Packet retransmission
TCP guarantees correctness, not speed. When packets are lost, TCP slows down aggressively to avoid congestion collapse.
HTTP/2 inherits these behaviors without modification.
What Packet Loss Actually Does to TCP
When TCP detects packet loss:
- The congestion window shrinks
- Transmission rate drops sharply
- Recovery takes multiple round trips
This slowdown affects the entire connection, not just a single request.
In a clean network, this is barely noticeable. In a lossy network, common on mobile and Wi-Fi, recovery time comes to dominate overall transfer time.
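The effect is easy to see in a toy model of Reno-style additive-increase/multiplicative-decrease (AIMD). This is an illustration, not a faithful TCP implementation; the window sizes and round-trip counts are made up.

```python
def simulate(rtts, loss_rtts, cwnd=10):
    # Congestion window per round trip: +1 segment per clean RTT
    # (additive increase), halved on a loss event (multiplicative decrease).
    history = []
    for rtt in range(rtts):
        cwnd = max(1, cwnd // 2) if rtt in loss_rtts else cwnd + 1
        history.append(cwnd)
    return history

# A single loss at RTT 10 takes roughly ten round trips to climb back:
print(simulate(rtts=25, loss_rtts={10}))
```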
Why HTTP/2 Suffers More Than HTTP/1.1
The critical difference lies in connection strategy.
HTTP/1.1 spreads requests across multiple TCP connections. If one connection experiences packet loss:
- Only the requests on that connection slow down
- Other connections continue transmitting
HTTP/2 funnels all requests through one connection. When packet loss occurs:
- Every stream stalls
- All resources wait for TCP recovery
This creates a transport-layer head-of-line blocking effect that HTTP/2 cannot avoid.
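A back-of-the-envelope simulation shows the structural difference. It assumes requests are spread evenly across connections and that each loss event stalls exactly one randomly chosen connection; the numbers are illustrative.

```python
import random

def stalled_fraction(connections, loss_events, trials=10_000):
    # Average fraction of in-flight requests stalled when each loss
    # event hits one randomly chosen connection.
    total = 0.0
    for _ in range(trials):
        hit = {random.randrange(connections) for _ in range(loss_events)}
        total += len(hit) / connections
    return total / trials

print("HTTP/1.1, 6 connections:", stalled_fraction(6, loss_events=2))  # ~0.31
print("HTTP/2,   1 connection: ", stalled_fraction(1, loss_events=2))  # 1.0
```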
Multiplexing Becomes a Liability
Multiplexing is powerful when latency is the main bottleneck. It is fragile when packet loss dominates.
In lossy conditions:
- Small resources (like CSS or JS files) are delayed by large transfers
- Priority hints cannot override TCP’s congestion behavior
- Streams are logically independent but physically coupled
The result is that one bad packet delays everything.
Why This Shows Up on Mobile Networks First
Mobile networks frequently experience:
- Variable latency
- Burst packet loss
- Rapid bandwidth changes
These conditions trigger TCP congestion control repeatedly. HTTP/2’s single-connection design amplifies the effect.
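The same toy AIMD model from earlier shows why bursts matter: clustered losses halve the window repeatedly, pinning it near its floor. Again, the numbers are purely illustrative.

```python
def simulate(rtts, loss_rtts, cwnd=10):
    # +1 segment per clean round trip, window halved on each loss event.
    history = []
    for rtt in range(rtts):
        cwnd = max(1, cwnd // 2) if rtt in loss_rtts else cwnd + 1
        history.append(cwnd)
    return history

# One isolated loss vs. a burst of three consecutive losses:
print(simulate(30, {10}))          # one dip, steady recovery
print(simulate(30, {10, 11, 12}))  # window collapses to 2, crawls back one segment per RTT
```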
This is why users may report:
- Slower initial page load on 4G/5G
- Long “blank” states before rendering
- Worse performance despite fewer requests
The protocol is not broken. The environment violates its assumptions.
TLS Makes the Coupling Tighter
HTTP/2 is almost always used over TLS. This adds another layer of strict ordering.
TLS requires:
- In-order decryption
- Reliable delivery of records
Lost packets delay decryption of subsequent data, even if it belongs to unrelated streams. This further reinforces head-of-line blocking at the transport layer.
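A toy model makes the ordering constraint visible. It treats records as numbered units and asks how many the TLS layer can hand upward; everything after the first gap waits, whichever stream its plaintext belongs to.

```python
def deliverable(arrived):
    # Longest in-order prefix of record numbers: records past the first
    # gap cannot be processed yet, regardless of which HTTP/2 stream
    # their plaintext belongs to.
    n = 0
    while n in arrived:
        n += 1
    return n

# Records 0-1 arrived, record 2 was lost, 3-9 arrived:
print(deliverable({0, 1, 3, 4, 5, 6, 7, 8, 9}))  # -> 2: seven records blocked
```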
Why Servers Can’t Fix This Easily
From the server’s perspective, everything looks correct:
- Streams are multiplexed
- Data is flowing
- No protocol violations
The bottleneck is not in HTTP/2 logic, but in TCP behavior under loss. Servers cannot selectively retransmit or bypass TCP constraints.
Application-level prioritization is powerless once TCP throttles.
Why Benchmarks Often Miss This
Many HTTP/2 benchmarks assume:
- Low packet loss
- Stable latency
- Controlled environments
In these conditions, HTTP/2 performs extremely well.
Real users, however, operate in noisy networks. The performance penalty appears only under conditions that benchmarks often exclude.
This leads to a gap between lab results and field experience.
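One way to close that gap is to benchmark under emulated loss. On Linux, the tc/netem queueing discipline can inject delay, jitter, and random loss; the sketch below shells out to tc (requires root, and the device name and rates are placeholders).

```python
import subprocess

def emulate_lossy_link(dev="eth0", delay_ms=60, jitter_ms=20, loss_pct=2):
    # Adds artificial latency, jitter, and random packet loss to dev.
    subprocess.run(
        ["tc", "qdisc", "add", "dev", dev, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms", "loss", f"{loss_pct}%"],
        check=True,
    )

def restore_link(dev="eth0"):
    # Removes the netem qdisc, restoring normal link behavior.
    subprocess.run(["tc", "qdisc", "del", "dev", dev, "root"], check=True)
```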
Why QUIC and HTTP/3 Exist
This exact problem motivated the development of QUIC and HTTP/3.
QUIC:
- Runs over UDP instead of TCP
- Tracks loss and retransmission per stream
- Lets unaffected streams keep delivering while one recovers
A lost packet affects only the stream it belongs to, not the entire connection. This directly addresses HTTP/2’s single-connection weakness.
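Continuing the toy TLS model from above, the contrast is that QUIC tracks delivery per stream: a gap in one stream's packets leaves the other streams' data deliverable.

```python
def longest_prefix(pkts):
    n = 0
    while n in pkts:
        n += 1
    return n

def deliverable_per_stream(arrived):
    # arrived maps stream_id -> set of packet indices received on it.
    # Each stream is blocked only by gaps in its own sequence.
    return {sid: longest_prefix(pkts) for sid, pkts in arrived.items()}

# Stream 1 lost packet 2; stream 2 is complete and entirely unaffected:
print(deliverable_per_stream({1: {0, 1, 3, 4}, 2: {0, 1, 2, 3}}))
# -> {1: 2, 2: 4}
```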
HTTP/3 is not “faster HTTP/2.” It is a response to TCP’s coupling problem.
Why HTTP/1.1 Sometimes Wins in Practice
On unstable networks:
- Multiple TCP connections provide redundancy
- Loss on one path does not stall others
- Parallelism compensates for inefficiency
HTTP/1.1’s apparent inefficiency becomes accidental robustness.
This is why some sites see performance regressions after enabling HTTP/2 without considering network conditions.
What This Means for Real Systems
HTTP/2 is not universally better. Its performance depends heavily on:
- Packet loss rate
- Network stability
- Resource size distribution
For mobile-heavy audiences or global traffic, blindly enabling HTTP/2 does not guarantee improvement.
Understanding why it slows down allows engineers to make informed decisions rather than treating protocols as magic upgrades.
The Core Lesson
HTTP/2 optimizes for latency under stable transport. TCP optimizes for correctness under loss.
When these goals conflict, correctness wins—and performance suffers.
The slowdown is not a bug, not misconfiguration, and not user imagination. It is an emergent property of protocol layering.