Tim Bray posts about How to Send Data and asks, “if you’re sending anything across the Net, why would you ever send it uncompressed?” Mostly because it is a lot messier than it should be and the payoff is small. I’ll survey the problems we ran into when we added HTTP compression to Ultraseek.
Tim also brings up encryption. That has many of the same problems, but the payoff is much, much bigger, so it is usually worth the hassle.
If you can store your content compressed, some of these problems go away, but not all. Compressing on the fly is often not worth the bother.
Algorithm Compatibility: The spec lists three standard compression algorithms: compress, deflate, and gzip. Compress isn't very effective, and browsers implement deflate in two incompatible ways, so the first step is to send only gzip. With gzip, you still have to decide on a compression level.
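Here is a minimal sketch of that negotiation in Python; the function names and the level-6 default are my own illustration, not Ultraseek's code:

```python
import gzip

def choose_encoding(accept_encoding):
    """Offer gzip only: compress is weak and browsers disagree on deflate."""
    offered = [e.split(";")[0].strip().lower() for e in accept_encoding.split(",")]
    return "gzip" if "gzip" in offered else None

def compress_body(body, level=6):
    """Level 6 is a common middle ground; 9 costs noticeably more CPU
    for only a little extra compression."""
    return gzip.compress(body, compresslevel=level)
```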
Keep-alive: For HTTP keep-alive, you need to specify the content length in the HTTP header. But with compression, you don't know that length until the compression is done, so you can't send the header to the client until then. This can add substantial delay. You can avoid it by using chunked transfer coding, at the cost of additional complexity.
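Chunked coding lets you start the response before the compressed length is known. A bare-bones sketch of writing chunks to a socket, assuming the Transfer-Encoding: chunked header has already gone out:

```python
def write_chunk(sock, data):
    """Each chunk is its size in hex, CRLF, the bytes themselves, CRLF."""
    if data:
        sock.sendall(b"%x\r\n" % len(data) + data + b"\r\n")

def end_chunks(sock):
    """A zero-length chunk marks the end of the body."""
    sock.sendall(b"0\r\n\r\n")
```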
Server-side Latency: A great trick for responsive servers is to push content out the socket as soon as you have it. This is especially important if the content takes a while to generate. In our case, you can list all the URLs the spider knows about for a site. This can take a while. So, flush out the template HTML, then flush every N list items. If your content compresses really well (an HTML list of URLs may see 10X compression), then you have a choice of pushing out short packets or making the customer wait. Either way, compression has not improved the user-visible performance.
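With compression in the path, flushing every N items means flushing the compressor too. A sketch using zlib's sync flush (the batch size of 100 is an arbitrary illustration, not what Ultraseek used):

```python
import zlib

def stream_compressed(items, write, batch=100):
    # wbits=31 selects the gzip container rather than a raw zlib stream.
    comp = zlib.compressobj(6, zlib.DEFLATED, 31)
    for i, item in enumerate(items, 1):
        data = comp.compress(item.encode("utf-8"))
        if data:
            write(data)
        if i % batch == 0:
            # Z_SYNC_FLUSH pushes out whatever the compressor is holding,
            # at a small cost in compression ratio.
            write(comp.flush(zlib.Z_SYNC_FLUSH))
    write(comp.flush())  # finish the stream and emit the gzip trailer
```

Those sync flushes are exactly the tension described above: flush often and you send short, poorly compressed packets; flush rarely and the customer waits.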
TCP Latency: If the latency is dominated by network round trips or new connections, compression won't help much. New connections go through TCP slow start, so reducing your page from six packets to four won't eliminate a single round trip. Slow start doubles the number of packets in flight on each round trip, so you have 1, 2, 4, … in transit: one round trip delivers one packet, two deliver three, three deliver seven, and so on until you hit the maximum in-transit window.
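The arithmetic is easy to check. This toy calculation assumes an initial window of one packet and no losses:

```python
def round_trips_needed(packets, initial_window=1):
    """Count round trips under idealized slow start: the window doubles
    each RTT, so roughly 2**r - 1 packets arrive after r round trips."""
    sent, window, rtts = 0, initial_window, 0
    while sent < packets:
        sent += window
        window *= 2
        rtts += 1
    return rtts

print(round_trips_needed(6))  # 3 round trips uncompressed
print(round_trips_needed(4))  # still 3 round trips compressed
```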
Browser Compatibility: The deflate algorithm mess is one source of browser incompatibility, but there are also older browsers that implement compression badly or only recognize “x-gzip” in the response headers. A really robust implementation may need to check the user agent before sending compressed responses.
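A sketch of that kind of check follows; the user-agent pattern is a placeholder, since the real list of badly behaved browsers has to come from testing:

```python
# Placeholder patterns only; build the real list from testing, not from here.
BROKEN_GZIP_AGENTS = ("SomeOldBrowser/1.0",)

def client_can_gzip(user_agent, accept_encoding):
    if any(user_agent.startswith(p) for p in BROKEN_GZIP_AGENTS):
        return False
    tokens = [t.split(";")[0].strip().lower() for t in accept_encoding.split(",")]
    # Some older browsers only advertise and recognize "x-gzip".
    return "gzip" in tokens or "x-gzip" in tokens
```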
Compressed Formats: Compressing an already-compressed format is a complete waste of time, so you need to make sure to not compress JPEGs, zip archives, etc.
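A simple content-type screen is enough for this; the list here is illustrative, not exhaustive:

```python
ALREADY_COMPRESSED = {
    "image/jpeg", "image/gif", "image/png",
    "application/zip", "application/gzip", "application/x-gzip",
}

def should_compress(content_type):
    """Skip formats that are already compressed; gzipping them wastes CPU."""
    return content_type.split(";")[0].strip().lower() not in ALREADY_COMPRESSED
```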
Hard to Measure: Good performance measures for this need a range of tests over different real networks with varying bandwidth/delay properties. In our tests, we could not demonstrate conclusive improvement. But it didn’t hurt, so we leave it turned on.
Way back in 1997, LAN-based tests of HTTP compression showed small improvements, around 15-25%. That is not a meaningful difference for the user interface, and maybe not for net utilization either. If there is any increase in latency before the page starts to render, that is a big loss for responsiveness.
Hi Walter, not many comments on your blog. My favorite compression trick, apparently not applicable to HTTP compression, is to compress about 100 gigabytes of zeroes and store it with a name like BritneyNekkid.bz.