Exploiting the Floyd-Steinberg Algorithm for Dithering Images in R

justinkraaijenbrink · Published in Analytics Vidhya
8 min read · Jan 30, 2021

Image retrieved from rawpixel.com

(statistics == programming)  ## TRUE

Back in 2019, when I started my Master’s degree in Statistical Science at Leiden University, I had no idea that the above expression would ever possibly evaluate to TRUE. With a background in Educational Sciences, I always felt like a real programmer when editing some code in the syntax window of SPSS. Well… nope. In my first semester I enrolled in the course ‘Statistical Computing with R’, in which I learned the fundamentals of statistical programming. The classes turned out to be among the most useful ones I ever attended, and I truly developed some love for R as a (statistical) programming language. Now, barely one and a half years later, I am writing my first article on Medium (not entirely accidentally) about the Floyd-Steinberg algorithm for dithering images: a tribute to my first real programming assignment!

Image dithering: a short explanation

Alright, image dithering. Chances are that you have never heard of it. Although it is not the most catchy subject for a first post on Medium, the name is quite cool and so is the algorithm behind it. It also rests on some fairly nice statistical concepts, which I will discuss in part II of this blogpost. So, let’s dive into the colorful world of image dithering!

Dithering is used to create the illusion of ‘color depth’ in images with a limited color palette — a technique also known as color quantization ~ Wikipedia

In other words: we can use dithering to make a picture still look wonderful despite using fewer colors. This is done by fooling our brains, in some sense. To unravel the dazzling trick, it is insightful to have some basic knowledge of how pictures are stored and how the standard RGB color palette works. Fun fact: dithered images can often be recognized by their small dots or points. Just like Van Gogh’s astonishing Straw Hat!

RGB-color palette

An image is made up of pixels and each pixel can be represented by a vector with three different values: one value for the intensity of Red, one value for the intensity of Green and one value for the intensity of Blue. Say we have a picture of 480×480 = 230,400 pixels and imagine that the upper left pixel is represented by the vector x₁,₁ = {0, 0, 0}. Now, this single pixel will be black, since all colors are set to lowest intensity. Another pixel, for example the upper right one, might be represented by x₁,₄₈₀ = {1, 0, 0}, resulting in a pixel as red as Rudolph’s nose.
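In R, such a pixel is nothing more than a numeric vector of length three (on the 0–1 scale that readPNG uses):

```r
# Two example pixels on the 0-1 intensity scale
black <- c(0, 0, 0)  # all channels at lowest intensity
red   <- c(1, 0, 0)  # only the red channel at full intensity
length(black)        # 3: one intensity per color channel
```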
Using the full color palette gives us 256³ = 16,777,216 different colors: that sounds like overkill, far more than the human eye can discriminate. Furthermore, it makes our image unnecessarily large! Here is such an image (1.2 MB):

Such a cutie! A smaller color palette should also do fine to show this little boy, but what if we use only a three-bit color system? Three bit means 2³ = 8 colors, or in other words: each R-, G- or B-value in the pixel vector is either turned off (0) or on (255). The result:

Hmm, seems like we lost a tremendous amount of information here. Luckily, by now we at least know that there exists something like dithering, which should help Quokka Quinten look good even with a drastically smaller color palette! Thanks to Robert W. Floyd and Louis Steinberg we have a magical algorithm that does the job.
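Incidentally, the crude 3-bit reduction above needs no algorithm at all. Assuming the image is stored as an array of 0–255 intensities (a hypothetical stand-in below), each channel can simply be snapped to its nearest extreme:

```r
set.seed(1)
# Stand-in for an RGB image: a 4 x 4 array of intensities on the 0-255 scale
img <- array(sample(0:255, 4 * 4 * 3, replace = TRUE), dim = c(4, 4, 3))

# 3-bit reduction: snap every channel to off (0) or on (255)
img_3bit <- round(img / 255) * 255
all(img_3bit %in% c(0, 255))  # TRUE: only the 8 corner colors remain
```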

Floyd-Steinberg algorithm

Floyd and Steinberg presented their ideas back in 1976¹; in essence, they found a way to fool the human mind by making use of a principle called error diffusion. For a detailed explanation I refer to the Wikipedia page or to this great explanatory video by The Coding Train.

In short, the algorithm works as follows:
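In pseudocode (following the standard formulation, with the error split over the right, lower-left, lower and lower-right neighbours):

```
for each y from top to bottom:
    for each x from left to right:
        oldpixel := pixels[x][y]
        newpixel := find_closest_palette_color(oldpixel)
        pixels[x][y] := newpixel
        quant_error := oldpixel - newpixel
        pixels[x + 1][y    ] := pixels[x + 1][y    ] + quant_error * 7 / 16
        pixels[x - 1][y + 1] := pixels[x - 1][y + 1] + quant_error * 3 / 16
        pixels[x    ][y + 1] := pixels[x    ][y + 1] + quant_error * 5 / 16
        pixels[x + 1][y + 1] := pixels[x + 1][y + 1] + quant_error * 1 / 16
```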

As can be seen from the pseudocode, the error is the discrepancy between the pixel’s values under the full color palette and its values under the smaller palette. Subsequently, this error is distributed — or diffused — over the neighbouring pixels. This truly helps, because we human beings perceive this diffusion as a spotless blend of the available colors in the smaller palette: we observe more colors than there actually are!

Dithering an image in R

Now that we have at least a basic understanding of how image dithering works, it is time to exploit the Floyd-Steinberg algorithm and make Quokka Quinten look way more refined! We start by loading the required packages and reading in the image. Note that the image is 670×946 pixels, so it is represented as an array of matrices (one each for red, green and blue, possibly plus an alpha channel) that are ‘stacked on top of each other’. Since we only need the color information, we select the first three channels.

library("tidyverse")
library("png")

# Read the image and keep only the R, G and B channels
quokka <- png::readPNG(source = "0_img/quokka.png")
quokka <- quokka[, , 1:3]

The next step is to program a function that creates a smaller color palette. I like to be able to create color palettes of different sizes, such as eight colors (3-bit), 512 colors (9-bit) or even 32,768 colors (15-bit). The function therefore depends on K, which is either the number of colors we want to have in our palette, or the bit-size. I have built in some simple checks, but it is no crime if you omit them.

CreatePalette <- function(K) {
  if (K %in% c(8, 64, 512, 4096, 32768) == FALSE &&
      K %in% c(3, 6, 9, 12, 15) == FALSE) {
    stop("Input K must be one of these numbers:
         (8, 64, 512, 4096, 32768) or (3, 6, 9, 12, 15)")
  }

  if (K %in% c(3, 6, 9, 12, 15) == FALSE && log2(K) %% 3 != 0) {
    stop("Your input must satisfy the following condition:
         log2(K) %% 3 == 0")
  }

  # Convert K to the number of intensity values a single channel can take
  if (K %in% c(8, 64, 512, 4096, 32768)) {
    K <- round(K^(1/3))
  }
  if (K %in% c(3, 6, 9, 12, 15)) {
    K <- 2^(K / 3)
  }

  # All combinations of K equally spaced intensities per channel
  color_combs <- expand.grid(seq(0, 255, 255 / (K - 1)),
                             seq(0, 255, 255 / (K - 1)),
                             seq(0, 255, 255 / (K - 1)))
  colnames(color_combs) <- c("Red", "Green", "Blue")

  # Convert every RGB combination to its hexadecimal color code
  all_colors <- apply(color_combs, 1, function(x) {
    rgb(x[1], x[2], x[3], maxColorValue = 255)
  })

  out <- list(cols = all_colors,
              dat = color_combs)

  return(out)
}

The first two if statements check whether we provided valid input. The next two if statements convert K to the number of values a single entry of the pixel vector can take. Say we want to create a 3-bit color palette. This means that we have eight different RGB combinations:
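Concretely, those eight combinations are what expand.grid spits out — a minimal stand-alone sketch of what CreatePalette builds internally:

```r
# The 8 corner colors of RGB space: each channel is either off (0) or on (255)
combs <- expand.grid(Red = c(0, 255), Green = c(0, 255), Blue = c(0, 255))
nrow(combs)  # 8 combinations, from black (0, 0, 0) to white (255, 255, 255)
```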

Note that we use R’s built-in rgb function, which converts pixel intensities to an actual color in hexadecimal code. Don’t panic, this is just another way to represent colors, like the RGB triplets were!
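For example, rgb maps the channel extremes to familiar hex codes:

```r
# rgb() expects maxColorValue to match the intensity scale in use
rgb(255, 0, 0, maxColorValue = 255)  # "#FF0000": pure red
rgb(0, 0, 0, maxColorValue = 255)    # "#000000": black
```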

The last step before we can actually apply the Floyd-Steinberg algorithm is to write a function that finds, for each pixel, the ‘closest’ color in the smaller palette, where closeness is defined as:
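ĉ(xᵢⱼ) = argminₖ ‖xᵢⱼ − cₖ‖²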

where cₖ = {Redₖ, Greenₖ, Blueₖ} for k ∈ {1,…, K} is a particular color from the reduced color palette and xᵢⱼ is the vector with intensities for the pixel we consider. In less technical terms: the closest color is the color with smallest squared Euclidean distance! With the function FindClosestPaletteColor, we can replace each pixel with its closest color in the smaller palette:

FindClosestPaletteColor <- function(picture, n_bit, maxValue = 255) {
  if (maxValue != 1 && maxValue != 255) {
    stop("maxValue should be either 1 or 255")
  }

  # readPNG() returns intensities in [0, 1]; rescale to [0, 255]
  if (maxValue == 1) {
    picture <- picture * 255
  }

  col_palette <- CreatePalette(n_bit)$dat
  closestpixel <- array(dim = dim(picture))

  for (i in 1:nrow(picture)) {
    for (j in 1:ncol(picture)) {
      pixel <- picture[i, j, ]
      # Index of the palette color with the smallest squared distance
      ind <- which.min(rowSums(t(pixel - t(col_palette))^2))
      closestpixel[i, j, ] <- as.numeric(col_palette[ind, ])
    }
  }

  return(closestpixel)
}
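The which.min line does the heavy lifting. In isolation, with a hypothetical 3-bit palette, it behaves like this:

```r
# Hypothetical 3-bit palette: each channel is either off (0) or on (255)
col_palette <- expand.grid(Red = c(0, 255), Green = c(0, 255), Blue = c(0, 255))

pixel <- c(200, 30, 60)  # a reddish pixel on the 0-255 scale
# Squared Euclidean distance from the pixel to every palette color
ind <- which.min(rowSums(t(pixel - t(col_palette))^2))
as.numeric(col_palette[ind, ])  # 255 0 0: pure red is closest
```

The transpose trick subtracts the pixel from every palette row at once, so no explicit loop over the K colors is needed.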

So far, we have prepared all the ingredients needed to turn the ideas of Floyd and Steinberg into a working algorithm. Bon appétit!

DitherImage <- function(picture, n_bit) {

  pixels <- picture
  quant_error <- array(dim = dim(picture))
  # Put the palette on the same [0, 1] scale as the readPNG() intensities
  col_palette <- CreatePalette(n_bit)$dat / 255

  for (i in 1:(nrow(pixels) - 1)) {
    for (j in 1:(ncol(pixels) - 1)) {
      oldpixel <- pixels[i, j, ]
      newpixel <- Closest(oldpixel, col_palette)
      pixels[i, j, ] <- newpixel
      quant_error[i, j, ] <- oldpixel - newpixel

      # Diffuse the quantization error over the neighbouring pixels
      # (for j = 1 the index j - 1 = 0 selects nothing, so that share is dropped)
      pixels[i, j + 1, ]     <- pixels[i, j + 1, ]     + quant_error[i, j, ] * 7 / 16
      pixels[i + 1, j + 1, ] <- pixels[i + 1, j + 1, ] + quant_error[i, j, ] * 1 / 16
      pixels[i + 1, j, ]     <- pixels[i + 1, j, ]     + quant_error[i, j, ] * 5 / 16
      pixels[i + 1, j - 1, ] <- pixels[i + 1, j - 1, ] + quant_error[i, j, ] * 3 / 16
    }
  }

  # The loop skipped the last column and the last row:
  # snap them to their closest palette colors, with no further diffusion
  col_oldpixels <- pixels[, ncol(pixels), ]
  col_newpixels <- matrix(nrow = nrow(col_oldpixels),
                          ncol = ncol(col_oldpixels))
  for (i in 1:nrow(col_oldpixels)) {
    col_newpixels[i, ] <- Closest(col_oldpixels[i, ], col_palette)
  }
  pixels[, ncol(pixels), ] <- col_newpixels
  quant_error[, ncol(pixels), ] <- col_oldpixels - col_newpixels

  row_oldpixels <- pixels[nrow(pixels), , ]
  row_newpixels <- matrix(nrow = nrow(row_oldpixels),
                          ncol = ncol(row_oldpixels))
  for (i in 1:nrow(row_oldpixels)) {
    row_newpixels[i, ] <- Closest(row_oldpixels[i, ], col_palette)
  }
  pixels[nrow(pixels), , ] <- row_newpixels
  quant_error[nrow(pixels), , ] <- row_oldpixels - row_newpixels

  out <- list(New_Colors = pixels,
              Errors = 255 * quant_error,
              Loss = sum((255 * quant_error)^2))

  return(out)
}

There are a few bits and pieces that deserve some additional remarks:

  1. The double for loop is where most of the dithering takes place. Since the algorithm cannot correctly handle the first and last columns and the last row (their neighbours fall partly outside the image), we need some additional code to also dither the image edges. That is what the successive code takes care of.
  2. To find the intensities of the new pixels in the smaller palette, I use the function Closest, a faster, stripped-down version of FindClosestPaletteColor that handles one pixel at a time. Here is the code:
Closest <- function(pixel, col_palette) {
  # Index of the palette color with the smallest squared Euclidean distance
  ind <- which.min(rowSums(t(pixel - t(col_palette))^2))
  newpixel <- as.numeric(col_palette[ind, ])

  return(newpixel)
}

It’s a wrap! That was some outstanding haute cuisine: we successfully implemented the Floyd-Steinberg algorithm to dither Quokka Quinten. A convenient way to save the result is to run the following lines of code:

dithered_03 <- DitherImage(quokka, n_bit = 3)
writePNG(dithered_03$New_Colors, "0_img/quokka_dithered_03.png")

And look at him! Quinten looks so much better now, despite the fact that we have used only 8 (!) colors. In part II we will dive a bit deeper into the statistical processes behind the algorithm, and we will try to come up with a sensible way to determine the optimal number of colors. Hope you enjoyed!

Dithered 3-bit Quokka Quinten (0.28MB)

[1] R.W. Floyd and L. Steinberg, “An Adaptive Algorithm for Spatial Greyscale,” Proceedings of the Society for Information Display, Vol. 17, No. 2, 1976, pp. 75–77.

Full source code: https://github.com/justinkraaijenbrink/imagedithering
