Debugging ERR_HTTP2_PROTOCOL_ERROR Behind Nginx Reverse Proxy

The Symptom: A Cryptic Browser Error

You’re browsing your site, everything works fine, and then… one specific page throws this gem in your browser console:

GET https://example.com/some-large-page net::ERR_HTTP2_PROTOCOL_ERROR 200 (OK)

Wait, what? Status 200 OK, but protocol error? Welcome to the wonderful world of HTTP/2 buffering issues.


Understanding the Problem

HTTP/2 is stricter than HTTP/1.1 about how data is transmitted: responses are framed into streams, so an abrupt mid-stream termination is a protocol-level failure rather than just a truncated body. When nginx acts as a reverse proxy, it buffers the upstream response before sending it to the client. If the buffers are too small for the response (or a module such as ModSecurity aborts the response after the headers have already been sent), nginx may terminate the stream prematurely, cutting the transfer off mid-flight.

The browser sees the connection drop mid-transfer and reports it as a protocol error – even though the HTTP status was technically 200.

Common Causes

  1. Insufficient proxy buffers – The most common culprit
  2. ModSecurity blocking responses – WAF rules flagging content as suspicious
  3. Backend timeout issues – Slow responses getting cut off
  4. Content-Length mismatch – Response size doesn’t match headers

Diagnostic Steps

Step 1: Check nginx Error Logs

docker logs your-nginx-container --tail=100 2>&1 | grep -E "(error|upstream)"

Look for messages like:

  • upstream prematurely closed connection
  • header already sent while sending to client
  • ModSecurity blocks: Access denied with code 403 (phase 4)
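If you want to verify your grep pattern before chasing live logs, here is a tiny self-contained sketch – the log file path and entries below are made up for illustration, not real nginx output:

```shell
# Write a fake error-log sample (path and entries are illustrative)
cat > /tmp/nginx_error_sample.log <<'EOF'
2024/01/01 12:00:00 [error] 17#17: *42 upstream prematurely closed connection while reading upstream
2024/01/01 12:00:01 [info] 17#17: *43 client closed keepalive connection
EOF

# The same filter as above, run against the sample: only the first line matches
grep -E "(error|upstream)" /tmp/nginx_error_sample.log
```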

Step 2: Check for ModSecurity Blocks

If you’re running ModSecurity, check for outbound anomaly scores:

grep "phase 4" /var/log/modsec/audit.log | tail -10

A message like this indicates ModSecurity is blocking the response:

ModSecurity: Access denied with code 403 (phase 4).
Matched "Operator `Ge' with parameter `4' against variable `TX:BLOCKING_OUTBOUND_ANOMALY_SCORE'
[msg "Outbound Anomaly Score Exceeded (Total Score: 4)"]

Step 3: Test Response Size

Check how large the problematic response actually is:

curl -s "https://example.com/problem-page" -o /dev/null -w '%{size_download}\n'

Compare this against your buffer settings.
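As a back-of-the-envelope check, the comparison is simple shell arithmetic – the response size below is a placeholder, not a measurement:

```shell
size_bytes=540000                 # pretend this came from the curl above
buf_count=8; buf_kb=128           # i.e. proxy_buffers 8 128k
total_kb=$((buf_count * buf_kb))  # 1024 KB of buffer space per connection

# Round the response size up to whole KB before comparing
if [ $(( (size_bytes + 1023) / 1024 )) -gt "$total_kb" ]; then
    echo "response exceeds buffer capacity"
else
    echo "response fits in buffers"
fi
```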


Solutions

Solution 1: Increase Proxy Buffers

The default nginx proxy buffers are often too small for modern web applications:

location / {
    proxy_pass https://backend;

    # Increase buffer sizes
    proxy_buffer_size 128k;
    proxy_buffers 8 128k;
    proxy_busy_buffers_size 256k;
}

Buffer sizing guidelines:

Content Type                 Recommended Buffer Size
Simple HTML pages            4k – 16k
CMS admin panels             32k – 64k
Pages with code/markdown     64k – 128k
Large dynamic content        128k – 256k

Solution 2: Disable ModSecurity Response Scanning

If ModSecurity is blocking legitimate responses (common with technical content containing code):

Option A: Disable response body access globally

# In modsecurity.conf
SecResponseBodyAccess Off

This keeps request scanning (attack protection) but disables response scanning (which causes most false positives).
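A middle ground is to keep response scanning on but raise only the outbound blocking threshold. This is a sketch following OWASP CRS 3.x conventions (rule id 900110 and the `tx.outbound_anomaly_score_threshold` variable) – check the names against your CRS version, and the threshold value of 10 is just an example:

```
# crs-setup.conf – raise only the *outbound* blocking threshold
SecAction \
    "id:900110,\
    phase:1,\
    pass,\
    t:none,\
    nolog,\
    setvar:tx.inbound_anomaly_score_threshold=5,\
    setvar:tx.outbound_anomaly_score_threshold=10"
```

With the audit-log score from Step 2 totalling 4, a threshold of 10 would have let that response through while still blocking grossly anomalous ones.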

Option B: Disable ModSecurity for specific locations

location ^~ /wp-admin/ {
    modsecurity off;
    proxy_pass https://backend;
}

Solution 3: Location-Specific Configuration

For WordPress or similar CMS:

# Admin area - needs larger buffers and less strict security
location ^~ /wp-admin/ {
    modsecurity off;

    proxy_buffer_size 128k;
    proxy_buffers 8 128k;
    proxy_busy_buffers_size 256k;

    proxy_pass https://backend;
}

# Frontend - standard settings
location / {
    proxy_buffer_size 32k;
    proxy_buffers 8 32k;
    proxy_busy_buffers_size 64k;

    proxy_pass https://backend;
}

Why Not Just Set Infinite Buffers?

You might think: “Why not just set buffers to 1GB and never worry again?”

Here’s why that’s a bad idea:

1. Memory Consumption

Each connection allocates buffer memory. With the settings:

proxy_buffers 8 128k;  # = 1MB per connection

Now imagine 1000 concurrent connections:

  • 1MB buffers: 1000 × 1MB = 1GB RAM
  • 10MB buffers: 1000 × 10MB = 10GB RAM

Your server would run out of memory under load.

2. DoS Vulnerability

Large buffers make your server vulnerable to slow-read attacks (the response-side cousin of Slowloris). An attacker could:

  1. Open many connections
  2. Request large pages
  3. Read responses very slowly
  4. Exhaust your server’s memory

3. Performance Impact

Larger buffers don’t always mean better performance. nginx is optimized for streaming data efficiently. Oversized buffers can:

  • Increase latency (waiting to fill buffer before sending)
  • Waste memory that could be used for caching
  • Reduce the number of concurrent connections you can handle

4. The Goldilocks Principle

The optimal buffer size is:

  • Large enough to hold your typical responses without disk spillover
  • Small enough to not waste memory on smaller responses
  • Right-sized for your specific application’s needs

Quick Reference: Buffer Calculation

proxy_buffer_size       = buffer for the response headers (usually 4k – 32k)
proxy_buffers           = number × size = total buffering capacity per connection
proxy_busy_buffers_size = how much can be busy sending to the client at once;
                          commonly 2 × one buffer, and nginx requires it to be
                          smaller than the total buffer space minus one buffer

Example for a 500KB typical response:

proxy_buffer_size 32k;           # Headers
proxy_buffers 16 32k;            # 16 × 32k = 512KB total
proxy_busy_buffers_size 64k;     # What nginx can send while still receiving
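To make the sizing repeatable, here is a small helper – a heuristic sketch of my own, not an nginx formula; the 32k buffer size and the minimum count of 4 are arbitrary choices:

```shell
# Suggest a proxy_buffers directive for a typical response size in KB
suggest_buffers() {
    resp_kb=$1
    buf_kb=32
    count=$(( (resp_kb + buf_kb - 1) / buf_kb ))   # ceiling division
    if [ "$count" -lt 4 ]; then count=4; fi        # keep some headroom
    echo "proxy_buffers ${count} ${buf_kb}k;"
}

suggest_buffers 500    # → proxy_buffers 16 32k;
```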

Testing Your Configuration

After making changes:

  1. Test the nginx config syntax:

nginx -t

  2. Reload nginx:

nginx -s reload

  3. Clear your browser cache and retest
     • Chrome: Ctrl+Shift+R (or Cmd+Shift+R on Mac)

Monitor your error logs while testing to ensure the issue is resolved.


Conclusion

The ERR_HTTP2_PROTOCOL_ERROR is the browser’s way of saying “this HTTP/2 stream ended before the response did” – and behind an nginx reverse proxy, that usually means nginx couldn’t buffer or deliver the response cleanly. The fix is usually straightforward:

  1. First: Check if ModSecurity is blocking responses
  2. Then: Increase proxy buffers appropriately
  3. Finally: Test and monitor

Remember: right-size your buffers for your application, don’t just max them out. Your server’s RAM will thank you.


Happy proxying! May your buffers always be just right.
