How Is Image Dithering Computed?

peter.ell
After reading about how dithering creates the sense of a larger range of colors from a small color palette, I understand the general idea, but how in the world do computers figure out how to dither an image so that it looks correct to us?

Is the fact that blue and red combined create the sense of purple simply programmed in, or how does it come about that an image captured with no dithering gets dithered to correctly give the impression of certain colors?

Thank you so much!
 
Colors have their own "space", so given a limited palette you can find the closest color just like you'd find the closest point in any other space.
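As a sketch of that nearest-color lookup (the palette and pixel values here are just illustrative):

```python
# Nearest palette color by squared Euclidean distance in RGB space.
def nearest_color(pixel, palette):
    """Return the palette entry closest to pixel, treating RGB as a 3D space."""
    return min(palette, key=lambda c: sum((p - q) ** 2 for p, q in zip(pixel, c)))

palette = [(255, 0, 0), (0, 0, 255)]         # red and blue only
print(nearest_color((200, 0, 60), palette))  # -> (255, 0, 0): closer to red
```

(Perceptually better matches come from measuring distance in a space like CIELAB, but plain RGB distance is the simplest version of the idea.)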

What dithering does is diffuse the quantization error (how far off it is from the original) of a pixel to the neighboring pixels so that the average over an area of the image remains close to the original.

So for example Floyd–Steinberg diffuses the error to neighboring pixels like this, where * marks the pixel currently being processed:
[0, 0, 0]
[0, *, a]
[b, c, d]

Where a = 7/16, b = 3/16, c = 5/16, d = 1/16.

You'll also notice that it only diffuses the error to the right and to the row below, so already-quantized pixels are left alone when you process the image left-to-right and top-to-bottom.
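A minimal sketch of that scan in Python, on a grayscale image held as a list of rows of floats (the two-level 0/255 palette is an assumption to keep it short):

```python
def floyd_steinberg(img):
    """Dither a grayscale image (rows of floats, 0-255) to pure black/white,
    diffusing quantization error with the 7/16, 3/16, 5/16, 1/16 weights."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # work on a copy
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = 255.0 if old >= 128 else 0.0  # nearest of the two levels
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                out[y][x + 1] += err * 7 / 16          # right
            if y + 1 < h:
                if x > 0:
                    out[y + 1][x - 1] += err * 3 / 16  # below-left
                out[y + 1][x] += err * 5 / 16          # below
                if x + 1 < w:
                    out[y + 1][x + 1] += err * 1 / 16  # below-right
    return out
```

Feeding it a uniform mid-gray block gives a mix of black and white pixels whose average stays near the original gray, which is exactly the "average over an area" property described above.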

Take the example of purple being dithered to a palette containing only Red and Blue:
Purple Value: (255,0,255)

By definition purple is a mixture of red and blue. Our palette doesn't have (255,0,255), so a single pixel cannot represent it exactly; instead we scatter Red (255,0,0) and Blue (0,0,255) pixels across the area so that on average it comes out looking purple.
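That red/blue example can be sketched directly — an illustrative per-channel Floyd–Steinberg with a nearest-color lookup (the function names are mine, not from any library):

```python
def dither_to_palette(img, palette):
    """Floyd–Steinberg over an RGB image (rows of [r, g, b] floats),
    snapping each pixel to its nearest palette color and diffusing
    the per-channel error to not-yet-processed neighbors."""
    h, w = len(img), len(img[0])
    out = [[list(px) for px in row] for row in img]

    def nearest(px):
        return min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(px, c)))

    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = nearest(old)
            out[y][x] = list(new)
            err = [o - n for o, n in zip(old, new)]
            for dx, dy, wgt in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                (0, 1, 5 / 16), (1, 1, 1 / 16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    for c in range(3):
                        out[ny][nx][c] += err[c] * wgt
    return out

purple = [[(255.0, 0.0, 255.0)] * 8 for _ in range(8)]
result = dither_to_palette(purple, [(255, 0, 0), (0, 0, 255)])
# Every output pixel is pure red or pure blue, in a roughly alternating
# pattern, so the block averages out near (127, 0, 127) and reads as
# purple from a distance.
```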
 
DavidSnider said:
Colors have their own "space", so given a limited palette you can find the closest color just like you'd find the closest point in any other space. …

Thank you. I appreciate your answer, it was very helpful.

All the best!
 