
One Third of the Web Will Stop Working in 4 Days: Massive-Scale CDN Compromise Starts Wednesday

The title of this post sure is alarming, isn’t it?  It’s taken from this summary:

Upstream HTTP/1.1 is inherently insecure and consistently exposes millions of websites to hostile takeover. Six years after we exposed the threat of HTTP desync attacks, there’s still no end in sight. On August 6, James Kettle from PortSwigger Research will reveal new classes of desync attack that enabled him to compromise multiple CDNs and kick off the desync endgame.

What the heck are we talking about here?

Request Smuggling

I first ran into this looming threat because I’m an avid reader of Ted Unangst’s blog, flak.  Ted (aka tedu) is an OpenBSD developer who posts all kinds of interesting things about programming, security, and golang, with his characteristic wit and word-efficient humor.

Ted’s post, “Polarizing Parsers,” observed:

The web as we know it will soon crash and burn in a fiery death. 12 days. There’s even a countdown.

In focus is request smuggling, a clever attack on HTTP/1.1, which happens to still power roughly a third of the web (the rest being HTTP/2 and HTTP/3).  In HTTP/1.1, when a client makes a connection, it sends headers and, optionally, a body.  The headers specify, among other things, either the length of the body or how to find where the body ends.  After all, the server is just getting a stream of bytes.  It needs a map to understand how to parse those bytes.

Most of us are familiar with the Content-Length: field.  For example, here is a typical HTTP/1.1 request:

POST /search HTTP/1.1 
Host: www.example.com 
Content-Type: application/x-www-form-urlencoded 
Content-Length: 16

LowEndBox rocks!

The Content-Length: field tells the server that there are 16 bytes in the body.

However, there’s also the Transfer-Encoding field, which tells the server to use chunked encoding. With chunked encoding, the message body contains one or more “chunks” of data. Each chunk consists of the chunk size in bytes (in hexadecimal), then a newline, and finally the chunk contents. The message is terminated with a chunk of size zero.  This method is handy when the sender doesn’t know the body’s length up front, and it’s common in setups where a load balancer or other front-end server forwards traffic to back-end servers.
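
For illustration, here’s the earlier request rewritten to use chunked encoding. The 10 is hexadecimal for 16, and the zero-size chunk (followed by a final blank line) ends the message:

POST /search HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Transfer-Encoding: chunked

10
LowEndBox rocks!
0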

Unfortunately, some servers get confused when a request uses both methods at once.  The HTTP/1.1 spec (RFC 7230) says Transfer-Encoding wins in that case, and even warns that such a message may be a smuggling attempt, but not every implementation follows the rule.  The result is that the front-end and back-end servers can disagree about where one request ends and where the next one begins.

As PortSwigger explains:

POST / HTTP/1.1 
Host: vulnerable-website.com 
Content-Length: 13 
Transfer-Encoding: chunked 

0 

SMUGGLED

The front-end server processes the Content-Length header and determines that the request body is 13 bytes long, up to the end of SMUGGLED. This request is forwarded on to the back-end server.

The back-end server processes the Transfer-Encoding header, and so treats the message body as using chunked encoding. It processes the first chunk, which is stated to be zero length, and so is treated as terminating the request. The following bytes, SMUGGLED, are left unprocessed, and the back-end server will treat these as being the start of the next request in the sequence.

…which can lead to all kinds of mayhem.  For example:

POST /home HTTP/1.1 
Host: vulnerable-website.com 
Content-Type: application/x-www-form-urlencoded 
Content-Length: 62 
Transfer-Encoding: chunked

0

GET /admin HTTP/1.1
Host: vulnerable-website.com
Foo: xGET /home HTTP/1.1
Host: vulnerable-website.com

The front-end server sees two requests here, both for /home, and so the requests are forwarded to the back-end server. However, the back-end server sees one request for /home and one request for /admin. It assumes (as always) that the requests have passed through the front-end controls, and so grants access to the restricted URL.
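
If you want to see the disagreement in miniature, here is a rough Python sketch that frames the first example above both ways. The parsing is a deliberately simplified illustration, not real server code:

# A toy demonstration of a CL.TE desync: the same bytes, framed two ways.

raw = (
    b"POST / HTTP/1.1\r\n"
    b"Host: vulnerable-website.com\r\n"
    b"Content-Length: 13\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"SMUGGLED"
)

def frame_by_content_length(stream):
    """The front end's view: trust Content-Length."""
    head, rest = stream.split(b"\r\n\r\n", 1)
    length = 0
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            length = int(line.split(b":", 1)[1])
    return rest[:length], rest[length:]          # (body, leftover bytes)

def frame_by_chunked(stream):
    """The back end's view: trust Transfer-Encoding: chunked."""
    head, rest = stream.split(b"\r\n\r\n", 1)
    body = b""
    while True:
        size_line, rest = rest.split(b"\r\n", 1)
        size = int(size_line, 16)                 # chunk size is hexadecimal
        if size == 0:
            return body, rest[2:]                 # skip final CRLF; rest is leftover
        body += rest[:size]
        rest = rest[size + 2:]                    # skip the chunk's trailing CRLF

print(frame_by_content_length(raw)[1])  # b''         -> front end: request complete
print(frame_by_chunked(raw)[1])         # b'SMUGGLED' -> back end: next request begins

The front end believes the connection is fully drained; the back end believes a new request beginning with SMUGGLED has already started, and will glue whatever bytes arrive next onto it.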

To be clear, the problem is not with the HTTP/1.1 protocol itself so much as with implementations that mishandle it.  Or as Ted more colorfully puts it:

So what is Akamai (it’s always Akamai) doing that their proxy is putting invalid requests on the wire? Why are we blaming the protocol here, when it’s clear (to me) that the error is the proxy that sends invalid requests? If you put crap on the wire, that’s bad. If your supposed web firewall is the one putting crap on the wire, that’s really bad. Yes, someone somewhere has to deal with the crap input, but why is anything in your stack generating crap? That’s just deranged.

This will be an interesting one to watch.

1 Comment

  1. Hi, I’m the author of this research. It’s great to see interest and I can promise some quality research and a strong argument to kill HTTP/1.1 but the headline of this article goes a bit too far. The specific CDN vulnerabilities have been disclosed to the vendors and patched (hence the past tense in the abstract) – I wouldn’t drop zero day on a CDN! That said I do expect to see fresh critical CDN vulnerabilities in future – hopefully found by a white hat!

August 3, 2025 @ 6:00 am
