a companion discussion area for blog.codinghorror.com

Zopfli Optimization: Literally Free Bandwidth


In 2007 I wrote about using PNGout to produce amazingly small PNG images. I refer to this topic frequently, as seven years later, the average PNG I encounter on the Internet is still very unlikely to be optimized.

This is a companion discussion topic for the original entry at http://blog.codinghorror.com/zopfli-optimization-literally-free-bandwidth/


For your profile images, where it’s an ASCII character on a square, mono-colour background, I’d be interested to see how something like SVG would stack up size-wise against PNG.


@codinghorror you should also check out the freeware app IrfanView (http://www.irfanview.com) and its plugins, which include PNGout etc. I have been using it for nearly 10 years and it really crunches PNG images down by a lot!

I took your blog comic image and used IrfanView to take a lossless 32-bit true-colour version at 561KB down to a 256-colour optimised version at 136KB - results at https://community.centminmod.com/threads/zopfli-optimization-literally-free-bandwidth.5516/ :slightly_smiling:

very useful for saving bandwidth :sunglasses:


You could try a tool like ImageOptim (https://imageoptim.com), which tries a bunch of compression tricks, including Zopfli, to see what gets the best results. It will even try sequences of different optimizers. For most of the images in sites I build, what it usually settles on is PNGout + Zopfli.
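ImageOptim’s “try everything, keep the winner” approach is easy to mimic for raw DEFLATE streams with nothing but Python’s standard library - a minimal sketch using only zlib’s levels and strategies, not the actual PNGout/Zopfli binaries:

```python
import zlib

def best_deflate(data: bytes) -> bytes:
    """Try a few zlib levels/strategies and keep the smallest output."""
    candidates = []
    for level in (6, 9):
        for strategy in (zlib.Z_DEFAULT_STRATEGY, zlib.Z_FILTERED):
            co = zlib.compressobj(level=level, strategy=strategy)
            candidates.append(co.compress(data) + co.flush())
    return min(candidates, key=len)

data = b"the quick brown fox jumps over the lazy dog " * 200
smallest = best_deflate(data)
assert zlib.decompress(smallest) == data  # lossless: every candidate round-trips
print(f"{len(data)} -> {len(smallest)} bytes")
```

Because every candidate decompresses to the same bytes, picking the shortest one is always safe - which is exactly why this style of optimization is “free”.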


Second the motion on ImageOptim, an excellent tool. For your default avatar images, you can also pngquant them to save more, because you’ve basically got two shades plus the intermediates for anti-aliasing. You can go to 16 colors with no problems, and nobody is going to notice if you go to 8 – you can barely see the differences A/B comparing zoomed versions.

The pngquant 8 + ImageOptim version of your A avatar goes from 1542 to 1174 bytes.

One nice thing about pngquant is that it has a quality=100 setting that won’t change the image unless it can reduce the color space losslessly. So you can insert it into the chain and occasionally get a nice win.

For example, eva2000’s avatar goes from 2958 to 2277 using ImageOptim (PNGOUT+Zopfli) but 1610 with pngquant quality=100 done first.
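As a concrete sketch of that chain on the command line (flag spellings from memory - verify against `pngquant --help` and `zopflipng --help` on your install; filenames are placeholders):

```shell
# 1. Lossless-only palette reduction: with a quality floor of 100,
#    pngquant only rewrites the file if it can do so without visible loss.
pngquant --quality=100-100 --speed 1 --force --output reduced.png avatar.png

# 2. Then let Zopfli squeeze the resulting deflate stream.
zopflipng -y reduced.png avatar-final.png
```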


@MadOverlord that reminds me - I also use pngnq-s9 on my Nginx HTTP/2 World Flags Demo site.

pngnq-s9 is a modified version of pngnq, the neural network colour quantizer for png images.

Like pngnq, pngnq-s9 takes a full 32 bit RGBA png image, selects a palette of up to 256 colours, and then redraws the image in 8 bit indexed mode. The resulting image can be up to 70% smaller than the original.

pngnq-s9 adds several new options to pngnq including the ability to augment a user-supplied palette, the ability to quantize in the YUV colour space, and the ability to give more or less weight to specific colour components when quantizing. The program also includes a few bug fixes relative to the most recent version of pngnq.
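pngnq’s neural-network palette selection is beyond a forum post, but the “redraw in 8 bit indexed mode” step described above boils down to mapping each pixel to its nearest palette entry - a toy pure-Python sketch with a hand-picked (not learned) palette:

```python
def nearest_index(pixel, palette):
    """Index of the palette colour closest to pixel (squared RGB distance)."""
    return min(range(len(palette)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(pixel, palette[i])))

def quantize(pixels, palette):
    """Redraw a truecolour image as palette indices - i.e. indexed mode."""
    return [nearest_index(p, palette) for p in pixels]

palette = [(0, 0, 0), (255, 255, 255), (128, 192, 128)]
pixels = [(10, 10, 10), (250, 250, 250), (130, 190, 130)]
print(quantize(pixels, palette))  # -> [0, 1, 2]: each pixel snaps to its nearest entry
```

The size win comes from storing one byte (a palette index) per pixel instead of four; pngnq’s contribution is choosing a much better palette than a hand-picked one.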


I’m no CSS wizard, but this seems to work just fine:

<!doctype html>
<title>Discourse avatars</title>

<link href='https://fonts.googleapis.com/css?family=Roboto' rel='stylesheet' type='text/css'>

<style>
    div.avatar {
        width: 128px;
        height: 128px;
        background: #80c080;
        color: #d0ffd0;
        text-align: center;
        border-radius: 50%;
    }
    div.avatar span {
        font-family: 'Roboto', sans-serif;
        font-size: 96px;
        line-height: 128px;
    }
</style>

<div class="avatar"><span>S</span></div>

Chances are, the font will already be cached in the client. The CSS and markup aspect of the above code amount to a little over 300 bytes.

The image is currently sent at 240x240px, but scaled down to 128x128 in the client. If there are scenarios in which the full 240px version is rendered, there’s an advantage to having only a single size per resource, but sending it larger than anyone will ever see it is a waste of bytes.


The problem with HTML/CSS and SVG avatars is that they completely fail in email and a bunch of other places where a PNG image works perfectly… they are also a hellscape of crazy tweaky font alignment issues per browser. You can see the discussion at

As for further reducing color depth of the avatars, in my testing with ImageMagick reducing color depth, 128 colors worked best:

3,929 200-second-attempt-256.png
1,764 200-second-attempt-128.png
1,764 200-second-attempt-64.png
1,698 200-second-attempt-32.png
1,623 200-second-attempt-16.png
1,122 200-second-attempt-8.png
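A sweep like the listing above would presumably be generated with ImageMagick’s `-colors` option - something like the following (the exact input filename here is a placeholder):

```shell
# Reduce the avatar to each colour count in turn, one output file per setting
for n in 256 128 64 32 16 8; do
    convert 200-second-attempt.png -colors "$n" "200-second-attempt-$n.png"
done
ls -l 200-second-attempt-*.png   # compare the resulting file sizes
```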

There was (almost) no difference in file size for 16, 32, and 64 colors. But even 64 colors isn’t enough gradations: it won’t cause dithering, per se, yet it produces a worse avatar letter (with virtually no file size savings), since the edge gradations are strongly affected.

Reducing to 8 colors does bump the file size down a fair bit, but that’s an extreme. You would absolutely notice only six gradations in color between the letter (one color) and the background (another color).

That’s why 128 colors was the sweet spot – big file size savings, with zero impact on image quality.

We do generate multiple different resolutions for any given avatar, @Chris_JL, although given the presence of retina and higher resolution devices, sometimes it’s better to use the higher resolution image.


You might also be interested in the new one, brotli:

Although it is designed specifically for plain text, it produces better compression ratios than Zopfli, with compression time close to zlib deflate. It’s already implemented in FF Nightly.


@dentuzhik indeed - coming later this month, Firefox 44 should have Brotli support.

Nginx folks can use the ngx_brotli module https://github.com/google/ngx_brotli. I am using it with the beta version of my Centmin Mod LEMP stack on my forums right now for testing.

WebPageTest shows the page text renders quicker, at least visually.


IE11 looks pretty unhappy in that timeline!


Yeah, IE11 doesn’t support HTTP/2 unless it’s on Windows 10 hehe http://caniuse.com/#search=http2


Do you have stats about IE/Edge on Windows 10?


You mean WebPageTest results? I don’t believe any of webpagetest.org’s test locations have Edge/Win10 setups, as they are all Linux test locations IIRC.


For some more fun, try my tool Precomp. Using it without additional parameters compresses the PBF image to 533,052 bytes. No, the resulting file is not a viewable PNG file, but using “precomp -r” you get the original file back! Not only is the image content compressed losslessly, it also stores the information needed to restore the original file - now that’s lossless :slightly_smiling: .

Essentially, the resulting file is a bZip2-compressed version of the image content together with additional information to restore the original compression. You can also use “precomp -cn” for a decompressed-only version you can feed to your favorite compression program, e.g. if you prefer 7-Zip. With compressors from the PAQ family like ZPAQ, the PBF image can be compressed down to 440 KB.

This works for many other filetypes that contain deflate streams like PDF and ZIP or even some Linux distribution images, and also handles GIF and JPG files using specialized routines.

It has the same catch you mentioned in your post - slow compression, fast decompression. Decompression speed is not the same as for the original PNG, but still very fast. But as a browser add-on or built-in, it could save even more bandwidth than Zopfli does.

There hasn’t been much progress in the project since 2012, but I’ll make it open source soon, and there is even an open source alternative on GitHub called antiz.


A good reminder is that DEFLATE streams tend to crop up all over the place - a common one is the venerable .zip file. Take the SysinternalsSuite.zip as an example: 15,160,701 bytes served directly from Microsoft. Taking about 3 minutes to recompress it with the excellent advzip utility reduces it to 14,597,826 bytes - more than 500K in savings!

Also worth noting: Zopfli also lets you specify the “strength”, i.e. the number of iterations used in searching. Time tends to scale pretty linearly with iterations, which isn’t exactly a good thing when you are already talking minutes for basic compression. For many files it also doesn’t actually increase compression, or does so by very few bytes. Still something to consider for the hard-core nerd who doesn’t mind letting a CPU burn all night long for the ultimate in broadly compatible compression.
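Both tools expose that iteration knob on the command line - flag spellings here are from memory, so verify against `advzip --help` and `zopfli -h`:

```shell
# advzip -4 selects the Zopfli compressor; -i raises the iteration count
advzip --recompress -4 --iter 100 SysinternalsSuite.zip

# the standalone zopfli gzipper exposes the same knob as --i<N>
zopfli --i1000 somefile.txt    # writes somefile.txt.gz
```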


Just out of curiosity - have you tried JPEG2000 compression? I know, a PNG -> JPEG2000 step is kind of a canonically inelegant way of doing things, but the results might be interesting. For one thing, it would give a benchmark on how well you can do with image compression on your test image. And lossy JPEG2000 compression works very well too.


No, lossy compression is a very different animal.


I know that. As a matter of fact, for JPEG2000, lossy compression is a floating-point algorithm and lossless compression is an integer algorithm, so they are fundamentally different. I’m just saying that lossy compression in JPEG2000 works well - it generally avoids those nasty artifacts.


The point is that with lossless compression there are zero artifacts.

It is true that reducing color depth with PNG is a very brute force method of lossy compression, however in the only cases I recommend color depth reduction the image can be accurately represented with only (n) colors, as shown in the monochrome avatars example in my blog post.