I did not write this. This is a cached copy, since Posterous has been taken down; the original was by @trojankitten. The only version I can find live is the archived one, and I didn't feel comfortable linking to that.

The problem

I was browsing the blog of Louie Mantia (a really talented designer) and noticed a tutorial about using Photoshop's Spatter filter to remove gradient banding. Of course, it works somewhat by accident: the Spatter filter acts as a simple diffusion displacement filter that shuffles the gradient's pixels enough for the banding to disappear. The problem is that this is a destructive, inaccurate operation which only works on large, relatively flat gradients, where the spatial error it introduces is less noticeable than the banding. Any more detailed graphic would be destroyed.

The solution

Photoshop has a far better solution for accurate image work: 16 bit* color mode (Image > Mode > 16 Bits/channel). While in 8 bit mode each RGB channel has 256 possible shades, in 16 bit it has 65536. Perfect for gradients; perfect for heavy editing (some operations reduce the color resolution of your image, e.g. if you reduce contrast and then increase it again); perfect for experimenting with extreme blending mode effects (say, Hard Mix) without encountering as many artifacts as in 8 bit.
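
If you want to see that claim in raw numbers rather than in Photoshop, here is a quick NumPy (Python) sketch. It is not Photoshop's actual math, just an approximation of the contrast round trip mentioned in the parentheses above, performed once at 8 bit precision and once at 16 bit:

    import numpy as np

    # A subtle dark ramp, defined in floating point (0.0 - 1.0) before any quantization.
    ramp = np.linspace(0x23 / 255, 0x27 / 255, 2000)

    def contrast_round_trip(values, levels):
        # Store at the given bit depth, halve the contrast around mid-gray,
        # store again, then double the contrast back.
        q = np.round(values * (levels - 1)) / (levels - 1)
        low = np.round(((q - 0.5) * 0.5 + 0.5) * (levels - 1)) / (levels - 1)
        back = np.clip((low - 0.5) * 2.0 + 0.5, 0.0, 1.0)
        return np.round(back * (levels - 1)) / (levels - 1)

    for bits in (8, 16):
        shades = len(np.unique(contrast_round_trip(ramp, 2 ** bits)))
        print(f"{bits}-bit: {shades} distinct shades survive the round trip")

In 8 bit the ramp comes back with fewer, coarser shades than it started with (worse banding); in 16 bit it still has hundreds of intermediate shades, far more than the final 8 bit output will ever need.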

There is one drawback to 16 bit mode: some of Photoshop's fancier filters and many third-party filters don't work. That said, all the important ones do (Blur, Noise, Sharpen, Levels etc.), and as we know the rest are quite specific/gimmicky and thus rarely used for serious work. If you need to apply an 8 bit filter, you can always do so in a separate image or within an 8 bit smart object.

* By the way, you'll notice a 32 bit image mode is also available, but more is not better in this case: 32 bit mode has higher resolution than most images need, even fewer filters are supported, files get even bigger, and it's primarily intended for high dynamic range (HDR) photos.

The problem with the solution

So, I told Mr. Mantia: try 16 bit mode. The answer: "I've tried this method before, and [banding] still happens." Ok, what's the problem here? The problem is Adobe did half the job with their 16 bit mode. Although your 16 bit image has all the gradient data to avoid any possible banding, that's not what you see on your screen. You see an 8 bit version of the underlying 16 bit data, crudely truncated (I suppose for speed) and showing banding that isn't there internally. The correct solution? Dithering. Just like when converting an RGB image to Indexed color, dithering the 16 bit data down to 8 bits provides a smooth 8 bit representation of the 16 bit shades and avoids the banding.
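
To make the "truncated" part concrete, here is a small NumPy sketch. The straight byte-drop is my guess at what such a fast preview does; the point is the behavior, not Photoshop's exact code:

    import numpy as np

    # A perfectly smooth 16 bit ramp spanning the same range as five 8 bit shades.
    hi = np.linspace(0x23 * 257, 0x27 * 257, 4000).astype(np.uint16)

    # Naive preview: keep the high byte, drop the low one.
    preview = (hi >> 8).astype(np.uint8)
    print(np.unique(preview))    # [35 36 37 38 39] -- five hard bands on screen,
                                 # even though the 16 bit data underneath is smooth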

Adobe naturally thought of that, so they added a checkbox in the Edit > Color Settings dialog, badly named Use Dither (8-bit/channel images). If you don't see this checkbox, click the More Options button in the same dialog first. This option is enabled by default, and it works in some places while not working in others. For example:

Original list of examples lost

And naturally, if you disable this option, it will never dither in any of those scenarios.

The solution for the problem with the solution

So here's what we'll do. We want a banding-free preview, so we can see how our final image will actually look. We need real-time dithering while we edit our 16 bit document. As a bonus, by doing this we gain some additional control over how exactly the image is dithered.

The ultimate source of all that is true and holy, Wikipedia, describes dithering like this:

"Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as banding in images."

In other words, dithering is exactly the right amount of error introduced into a higher resolution input, which allows us to perceive the lost information "probabilistically" within a lower resolution output - as a subtle noise pattern that emulates a shade of a color we don't really have in 8 bits. There are different algorithms for producing that intentional error, with different benefits, but the simplest one is to make it random.

Our source space has 65536 shades (or steps) of color per channel; our target space has 256. So we need to introduce uniform noise only as strong as one target shade to "nudge" the 16 bit data into rendering as a dithered 8 bit image, without introducing any additional, unneeded noise.
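
In code terms (a NumPy sketch of the idea, not Photoshop's conversion routine), "one shade" of uniform noise looks like this:

    import numpy as np

    rng = np.random.default_rng(0)

    # Smooth 16 bit data, expressed on the 8 bit scale so fractional shades are visible.
    hi = np.linspace(0x23, 0x27, 4000)                     # 35.0 .. 39.0, no banding yet

    plain    = np.round(hi)                                # straight rounding -> banding
    noise    = rng.uniform(-0.5, 0.5, hi.size)             # exactly one shade of uniform noise
    dithered = np.round(hi + noise)                        # dithered rounding

    # Look at a stretch whose true shade is 36.75 -- a value 8 bits cannot store.
    window = slice(1650, 1850)
    print("true shade:     ", hi[window].mean())           # ~36.75
    print("plain rounding: ", plain[window].mean())        # 37.0 -- the quarter shade is gone
    print("dithered:       ", dithered[window].mean())     # ~36.75 -- recovered as a mix of 36s and 37s

The dithered version uses only legal 8 bit shades, yet any small neighborhood averages back to the original 16 bit value. That local averaging is exactly what your eye does.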

Confused? Ok here's a simpler version: we'll put a layer with noise on top of our image, and make that layer almost transparent. (Sudden silence in the audience). This... will be equivalent to Photoshop's own 16-to-8 bit dithering. Really. No, I'm not kidding. It won't just add noise and ruin the image. It'll work, trust me. 

Practice

Important: Gradients in Photoshop have an option which is on by default, called Dither (select the Gradient tool and look at the options bar at the top, right next to Reverse and Transparency). This is useful in 8 bit mode, but we don't want it interfering with our 16 bit tests, so turn it off before proceeding.

Also important: Creating a gradient in 8 bits and "upconverting it" to 16 bits doesn't work. The extra information simply isn't there. For any of this to work, the graphics must be created and stay in 16 bit space until the very final step of converting to 8 bits for output.

The example 16 bit image is a very subtle dark gray diagonal gradient. In 8 bit space this gradient has only 5 steps (from #232323 to #272727). I realize it's a subtle and extreme case, but it's also very suitable to demonstrate the dithering process. In 16 bit mode, this same gradient will have 5 * 256 = 1280 steps. Plenty, isn't it? 
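
If you want to double-check the arithmetic (counting 256 sub-shades per 8 bit shade, which is how I'm counting here):

    # 8 bit shades from #232323 to #272727, inclusive:
    print(0x27 - 0x23 + 1)             # 5
    # The same range in 16 bits, at 256 sub-shades per 8 bit shade:
    print((0x27 - 0x23 + 1) * 256)     # 1280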

So let's see what Photoshop's dithering does, as a reference. Enable Use Dither in Edit > Color Settings if you need to, and convert this image to 8 bits (Image > Mode > 8 Bits/channel). The results are below (click to zoom... god damn it, Posterous):

Original file lost

On a properly calibrated monitor you can count 5 diagonal bands of dark gray in the first image and see the apparently smoother shade once it's dithered to 8 bits in the second one. But since you may not have a properly calibrated monitor, plus I want to make it easy for you feeble humans, the second pair of images on the right blows up the contrast on the same two images, so you can see better what really happens.

Notice this dithering algorithm isn't like any of the ones Photoshop offers in the Indexed mode conversion dialog (its closest equivalent is Noise, but it's not identical). One reason is that Photoshop in general must have at least 3 completely different implementations of everything it implements (it is the law), but another is that the difference between adjacent 8 bit shades is subtle enough that the particular method doesn't matter, as long as it avoids the banding. What you see is a completely random distribution of the quantization error, which we can easily emulate with a layer full of white noise. This method also survives scaling/rotation better than others (like Pattern), which is a bonus, given how easy it is to forget we have any dithering in the image at all (it's subtle).

Ok, so that's that with Photoshop's dithering. Now turn it off (Edit > Color Settings, uncheck Use Dither), because we're going to make our own dithering "engine." Here's the recipe for the magic dithering layer, which you should create in your 16 bit image:

  1. Create a new layer (should be topmost).
  2. Fill it with solid 50% gray.
  3. Make some noise: Filter > Noise > Add Noise. The settings are Amount = 25%; Distribution = Uniform; Monochromatic = checked. 
  4. Now hit Command+F (Ctrl+F on Windows) to repeat the noise filter on the same layer (if you're curious why, watch the layer's histogram as you perform the noise filter the first and second time).
  5. Set the layer's blending mode to Linear Light, a great blending mode for this purpose: the darker half of the shades acts as linear subtraction (like Linear Burn), and the lighter half as linear addition (like Linear Dodge).
  6. In the Layers palette, set the magic values for the layer Opacity to 1% and the Fill opacity to 19%. You could (but don't, really) read the small note below about all the gory details*.
  7. We're done. Now lock the layer (the padlock icon in the Layers palette) so it won't react to mouse input. You can forget it exists.

* We want just enough noise to expose the hidden 16 bit details, but not a bit more. One shade. Photoshop, unfortunately, doesn't provide very fine opacity control; however, it has two opacity controls which act together (they multiply each other): Layer Opacity (Opacity in the Layers palette) and Fill Opacity (Fill in the Layers palette). Both affect the layer fill, and with both we can approximate the desired fraction, which is 1/256. Linear Light, however, is a mixed blending mode which normalizes channel values 0-127 and 128-255 to 0-255 and applies a different blending mode to each group. In other words, due to this normalization it's "twice as strong," so our target must be twice as small to compensate, or 1/512. The combined opacity values: 1% * 19.531% = ~1/512.
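
If you'd like to check those magic values yourself, here's the footnote as a few lines of Python, using the usual normalized Linear Light formula result = base + 2*blend - 1 (how exactly Photoshop combines Fill and Opacity internally is my simplified assumption):

    # Combined opacity of the noise layer:
    opacity = 0.01 * 0.19531           # Layer Opacity x Fill Opacity
    print(opacity, 1 / 512)            # ~0.00195 in both cases

    # With the layer faded to `opacity`, the deviation it adds to the image is
    #   opacity * (2*blend - 1)
    # The twice-noised layer spans roughly the full 0..1 range, so (2*blend - 1)
    # spans -1..+1 and the deviation spans about +/- 1/512 of full scale:
    print(opacity * 255)               # ~0.5 -- half an 8 bit shade either way,
                                       # i.e. one full shade of dither in total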

Here's the result for the same 5 step gradient (click to zoom):

Original file lost

Notice that the image in 16 bit and the downconverted 8 bit version look identical. In other words, we now have real time what-you-see-is-what-you-get dithering in 16 bits, which smooths the scary banding.

Important: You need to flatten the image (including the noise layer) while in 16 bit mode and only then convert to 8 bit mode for export. If you do it the other way around, you'll lose all the 16 bit information first, and then the noise layer will do precisely nothing in 8 bits. Also, mind Photoshop's Use Dither setting. You can use either the layer or the native dithering method, but don't use both during conversion; that gives you twice the noise you need.
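
If the order of operations seems arbitrary, this sketch (the same NumPy toy model as before, standing in for flattening and converting) shows why it matters:

    import numpy as np

    rng = np.random.default_rng(1)

    hi    = np.full(10000, 36.75)                    # a 16 bit shade that 8 bits cannot store
    noise = rng.uniform(-0.5, 0.5, hi.size)          # the one-shade dither layer

    # Right order: flatten the noise into the 16 bit data, then convert to 8 bits.
    right = np.round(hi + noise)

    # Wrong order: convert to 8 bits first (the quarter shade is already gone),
    # then flatten the noise onto the 8 bit result.
    wrong = np.round(np.round(hi) + noise)

    print("true shade :", hi.mean())                 # 36.75
    print("right order:", right.mean())              # ~36.75 -- the dither preserved the detail
    print("wrong order:", wrong.mean())              # 37.0  -- noise added, nothing recovered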

Well, now that we have the basics clear (we... do have the basics clear, I hope...) we can play a bit. Since we control the contents of the noise layer, which in turn controls the dithering process, we can do plenty of interesting things with it (click to zoom):

Original file lost

Control the dithering amount: if you tweak the Fill Opacity of the noise layer, you control the dithering amount, just like the Amount option when converting to Indexed mode. Don't set it too high though, it'll just add useless noise to your image. On the other hand, less dithering means more flat areas, which means smaller GIF and PNG images on export.

RGB dithering: If you generate your noise with the Monochromatic option off, the result is RGB dithering. If you compare with the original, you'll notice that dithering every channel independently produces somewhat smoother results, but adds some false-color pixels to the image (which average out to neutral, however). Since 16-to-8 dithering is already barely visible, this probably has no practical use. But I did it. Because I can.

Scanline and pattern dithering: A 2-step or 3-step scanline (black, 50% gray and white lines in the dithering layer) and the 4-step pattern, as demonstrated above, still multiply the apparent color resolution of your image about 3 and 4 times, respectively, but without adding so much fine-grained noise. Such patterns are actually interesting in practice, because while they work well enough to remove most of the worst banding (thus producing smooth-looking gradients), they produce much smaller images compared to regular dithering. Now you no longer have to use a JPEG or a huge noise-dithered PNG for your large site gradient backdrop. Just try pattern or scanline dithering. Oh, and if your gradient is vertical, your scanline needs to be vertical too for the trick to work best.
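
For the curious, here's roughly what a 3-step scanline boils down to once it has gone through the Linear Light layer described above. This is a NumPy sketch of the net effect, not of Photoshop's compositing, and the exact offset values are an approximation:

    import numpy as np

    # Every row of this toy image is the same smooth ramp spanning five 8 bit shades.
    rows, cols = 9, 2000
    ramp = np.tile(np.linspace(35.0, 39.0, cols), (rows, 1))

    # Black / 50% gray / white lines in the dithering layer work out to per-row
    # offsets of roughly -1/2, 0 and +1/2 of a shade, repeating every three rows.
    offsets = np.array([-0.498, 0.0, +0.498])[np.arange(rows) % 3]
    scanline = np.round(ramp + offsets[:, None])

    print(len(np.unique(scanline)), "stored shades")                # 5
    print(len(np.unique(scanline.mean(axis=0))), "apparent levels") # 13 -- about 3 per step

Because neighboring rows blend in your eye, the five stored shades read as roughly three times as many levels, and since the pattern is regular instead of random, the result compresses far better than noise dithering.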

Good luck.