Lossless optimization of PNGs can achieve reasonable savings, but substantial compression gains can be made with very slight lossy processing (VSLP). Increasingly, PNG images are produced from vector artwork or touched up in photo editors, which introduces common artefacts that bloat PNG files while adding minimal visual information. Principally I'm talking about excessive palette data, and in this context I use that as shorthand for all four channels: RGB and (most significantly) alpha. Minimizing this obviously reduces the palette size and may allow small images to be saved with just a 256-colour palette at only 8 bits per pixel (vs. 32bpp). But reducing the palette also greatly improves the compressibility of the file, since larger areas of equal colour and alpha values give the pre-filters and deflate something nice to crunch on. There are many programs out there for producing lossy PNGs, but they tend to discard excessive information and the results are visually obvious. To clarify, I'm talking about removing redundant data that will not be visible in the vast majority of situations (hence VSLP).
Removing this data by hand is pretty time-consuming, so I hope the process can be automated using ImageMagick or some custom program (any volunteers?). I've not had much success with existing software, so the following is my list of tweaks for VSLP which someone might want to script. Note, the first assumption (and a significant saving in itself) is that we will only deal with PNGs using 8 bits per channel; all 16bpc images will be downsampled. The following convention is used: (suggested variable name and [default value]) ...
1. Tweak alpha transparency data.
It's the decent alpha capabilities that we all love about PNGs, but it's here that the easiest gains with the least visual impact can be made. The human eye is not good at judging transparency precisely, and yet photo editors preserve the alpha channel at high fidelity when scaling and re-touching images.
I think reducing alpha data should be the first target, as it offers the highest yield versus visual impact.
1.1. Transparent threshold (alpha_threshold_trans [2%])
Below a certain level of transparency the human eye cannot pick up on the subtle effect on the background colour, so we can set that alpha value to fully transparent. This is clearly dependent on the contrast between the background and foreground colours being blended, but I have found that 2% (i.e. A <= 5, for 0-255) represents a reasonable threshold. At or below this threshold we set A = 0; above it we leave A = A. Example: scaling an image with full alpha from 128 to 32 pixels will produce many pixels with A = 1, which is neither visible nor desirable but may significantly hurt compression.
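For anyone wanting to script this, here is a minimal sketch of tweak 1.1 in Python using Pillow and NumPy (my choice of tools, not anything prescribed above; the file names are placeholders, and only alpha_threshold_trans comes from the naming convention):

```python
from PIL import Image
import numpy as np

alpha_threshold_trans = 5  # ~2% of the 0-255 range (my assumed default)

img = Image.open("input.png").convert("RGBA")
px = np.array(img)                           # (height, width, 4) uint8 array
alpha = px[..., 3]                           # view onto the alpha channel
alpha[alpha <= alpha_threshold_trans] = 0    # snap near-transparent pixels to A = 0
Image.fromarray(px).save("output.png")
```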
1.2. Opaque threshold (alpha_threshold_opaq [98%])
An identical case can be made for the threshold at which pixels appear effectively opaque. Again from experience, within about 2% of fully opaque (i.e. A >= 250, for 0-255) there is little point preserving alpha data. At or above this threshold we set A = 255; below it we leave A = A. Example as above.
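Since 1.1 and 1.2 are mirror images, a script would naturally apply both in one pass. A sketch under the same Pillow/NumPy assumptions as above:

```python
from PIL import Image
import numpy as np

def threshold_alpha(path_in, path_out,
                    alpha_threshold_trans=5,    # ~2% of 255
                    alpha_threshold_opaq=250):  # ~98% of 255
    """Tweaks 1.1 and 1.2: snap near-transparent alpha to 0 and
    near-opaque alpha to 255; values in between are left alone."""
    img = Image.open(path_in).convert("RGBA")
    px = np.array(img)
    alpha = px[..., 3]
    alpha[alpha <= alpha_threshold_trans] = 0
    alpha[alpha >= alpha_threshold_opaq] = 255
    Image.fromarray(px).save(path_out)

threshold_alpha("input.png", "output.png")
```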
1.3. Transparent colour (alpha_color [1])
Thresholding alpha values may well reduce the palette and transparency tables, but it yields an additional benefit. Images often contain redundant colour data in fully (or highly) transparent regions, which may be useful during editing but is not required for the finished image. This can be removed by setting all fully transparent pixels (A = 0 after thresholding) to a common background colour. Sensible colour options are: [0] leave colours as they are; [1] set to black (RGB = 0,0,0), which is highly efficient for most images; [2] pick the most-used colour in the image (often white or black).
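And a sketch of tweak 1.3, continuing with the same assumed Python tooling; for option [2] the "most-used colour" is computed naively over the whole image here, which is one reasonable interpretation:

```python
from PIL import Image
import numpy as np

alpha_color = 1  # [0] leave as-is, [1] black, [2] most-used colour

img = Image.open("input.png").convert("RGBA")
px = np.array(img)
mask = px[..., 3] == 0                   # fully transparent (after thresholding)

if alpha_color == 1:
    px[mask, :3] = 0                     # option [1]: flatten RGB to black
elif alpha_color == 2:
    rgb = px[..., :3].reshape(-1, 3)
    colours, counts = np.unique(rgb, axis=0, return_counts=True)
    px[mask, :3] = colours[counts.argmax()]  # option [2]: most-used colour

Image.fromarray(px).save("output.png")
```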
I have numerous other VSLP tweaks that I routinely apply, and will write them up in time. Cheers.
Saturday, June 23, 2007