Unlike the other images on this website, which are original photographs,
the images in this section are carefully chosen and constructed to
illustrate the theory.

Every image held in a computer potentially has a Fourier transform (technically
a 2D DFT, a two-dimensional discrete Fourier transform), which is really
just another way of seeing the image.

Fig 1: Woods on Exmoor. | Fig 2: The amplitude of the FFT of Fig 1. |

Unshake compensates for this effect, in which the low-spatial-frequency
components have higher amplitudes than those with a higher spatial frequency:
it amplifies the higher spatial frequencies away from the centre, and
smooths the result in a particular way, to give this picture;

Fig 3: Compensated amplitude of Fig 1. |
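The exact weighting and smoothing Unshake uses are not given here, but the general idea of such compensation can be sketched in a few lines of Python; the radial weighting below is an illustrative assumption, not Unshake's actual recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a photograph

# Shift the zero-frequency component to the centre, as in Fig 2.
F = np.fft.fftshift(np.fft.fft2(image))

# Distance of each k-space pixel from the centre, i.e. its spatial frequency.
h, w = F.shape
y, x = np.indices((h, w))
r = np.hypot(y - h / 2, x - w / 2)

# Boost amplitudes in proportion to spatial frequency, so the high
# frequencies away from the centre are no longer swamped by the low ones.
# (An illustrative weighting; Unshake also smooths the result.)
compensated = np.abs(F) * r
```

The centre pixel (zero frequency) gets weight zero, and the weight grows linearly towards the edges of k-space.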

One of the special things about Fourier transforms is that using a special trick, the Cooley-Tukey Fast Fourier Transform (or FFT), we can calculate the Fourier transform (Fig 2) of an image (Fig 1) reasonably quickly, and most importantly, given both the amplitude and phase of the Fourier transform of an image, we can just as quickly find the original image from its Fourier transform. It seems that the Fourier transform is just as real a way of seeing the world as the original image is.
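The round trip is easy to demonstrate with, for example, NumPy's FFT routines: given both the amplitude and phase of the transform, the inverse transform reproduces the image exactly (up to floating-point rounding).

```python
import numpy as np

# A small "image": any 2D array of pixel intensities will do.
image = np.arange(64, dtype=float).reshape(8, 8)

# Forward 2D DFT: the complex values encode amplitude and phase together.
F = np.fft.fft2(image)
amplitude = np.abs(F)
phase = np.angle(F)

# Given amplitude and phase, the inverse transform recovers the original.
recovered = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
```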

We say that the original image is in "configuration" space, or c-space,
and the Fourier transform is in "k-space". It just happens that some things
are more obvious in c-space and some are more obvious in k-space. If one
object is next to another, that is visible in c-space but very obscure
in k-space. On the other hand, the blurring of a photo is much clearer in k-space.

Fig 4: Fig 1, convolved with horizontal bar 5 pixels long | Fig 5: Compensated amplitude of Fourier transform of Fig 4. |

This is an important result: Blurring the image sideways makes the compensated
Fourier transform far fainter at the sides. Also, there are faint bars
left on either side of the central bar (we call these "side-lobes"). You
can also see fainter bars extending to the edges on either side. Compare
this with the amplitude of the Fourier transform of a horizontal bar,
5 pixels long, Fig 6;

Fig 6: Amplitude of Fourier transform of horizontal bar 5 pixels long. |

There are two important phenomena happening here: First, large scale structures in Fourier transforms represent small scale structures in the image, and vice versa. Thus, after we have smoothed out the amplitude of the Fourier transform, what remains is information about the image's small-scale structure, which is mostly its blur.
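This large-scale/small-scale duality is easy to check numerically. In this one-dimensional NumPy sketch, a 1-pixel feature produces a perfectly flat spectrum, while a 9-pixel bar concentrates its amplitude near zero frequency:

```python
import numpy as np

# A 1-pixel feature and a 9-pixel feature on the same 32-sample row.
narrow = np.zeros(32)
narrow[0] = 1.0
wide = np.zeros(32)
wide[:9] = 1.0 / 9.0

# The small-scale feature spreads evenly across all frequencies;
# the large-scale feature piles up near zero frequency.
narrow_spectrum = np.abs(np.fft.fft(narrow))
wide_spectrum = np.abs(np.fft.fft(wide))
```

The narrow feature's spectrum is 1 at every frequency, while the wide bar's spectrum starts at 1 and falls close to zero well before the high frequencies.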

Secondly, when we blurred the image, every point in it was replaced by a horizontal bar, and all of the resulting bars were added together. That is what is meant by blurring an entire image in a particular way. The so-called "convolution theorem" tells us that if every point of an image is replaced by another shape (such as a bar, as with shake, or a circle, as with poor focus), then the Fourier transform of the result is proportional to the result of multiplying the Fourier transform of the image by that of the blurring shape. So every pixel in Fig 5 should be (proportional to) the product of the corresponding pixels of Fig 3 and Fig 6.
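The convolution theorem itself can be verified numerically. In this NumPy sketch (using circular convolution, which is what the DFT assumes), blurring with a 5-pixel horizontal bar in c-space gives exactly the product of the two transforms in k-space:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((16, 16))

# A horizontal 5-pixel bar, embedded in a same-size array.
kernel = np.zeros((16, 16))
kernel[0, :5] = 1.0 / 5.0

# Circular convolution computed directly in c-space: every point of the
# image is replaced by a shifted copy of the bar, and the copies are summed.
blurred = np.zeros_like(image)
for i in range(16):
    for j in range(16):
        blurred += image[i, j] * np.roll(kernel, (i, j), axis=(0, 1))

# The convolution theorem: the transform of the blurred image equals
# the pixel-by-pixel product of the two individual transforms.
lhs = np.fft.fft2(blurred)
rhs = np.fft.fft2(image) * np.fft.fft2(kernel)
```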

Now this raises a fascinating possibility: if, by the convolution theorem, we can get Fig 4 by multiplying the Fourier transform of Fig 1 by Fig 6, can we reverse the process, and recover Fig 1 from Fig 4 by dividing its Fourier transform by Fig 6? The answer is a qualified yes, and this is the key to deconvolution. It has fascinated me since I first learned about Fourier transforms as an undergraduate, but it took 20 years before I got round to playing with it.

There is a catch; some of the values in Fig 6 are so small that they
might as well be zero, and if you divide by (nearly) zero, you run
into trouble. (Technically, noise becomes amplified greatly.) So we use
a trick known as a "Wiener filter" to avoid dividing by very small numbers.
After that, the result of dividing the Fourier transform of Fig 4 by Fig
6, and reversing the transform, is Fig 7;

Fig 7: Deconvolution of Fig 4. |
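Here is a minimal NumPy sketch of this kind of Wiener-filtered deconvolution; the constant k below is an assumed noise-to-signal estimate, chosen by hand for the example rather than by Unshake's method.

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((32, 32))

# Blur with a horizontal 5-pixel bar, via the convolution theorem.
kernel = np.zeros((32, 32))
kernel[0, :5] = 1.0 / 5.0
K = np.fft.fft2(kernel)
blurred_F = np.fft.fft2(image) * K

# Naive division blows up wherever K is nearly zero. The Wiener filter
# divides by K but damps the result where |K| is small; k is an assumed
# estimate of the noise level relative to the signal.
k = 1e-6
wiener = np.conj(K) / (np.abs(K) ** 2 + k)
restored = np.fft.ifft2(blurred_F * wiener).real
```

With no added noise and a small k, the restoration is very close to the original; with a noisy photograph, a larger k trades sharpness for stability.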

But what if we do not have a "guide star", which will tell us how our image has been blurred? How can we deduce the blur so that we can undo it? This is the difficult art of "blind deconvolution".

One way of approaching the problem is simply to look at Fig 5. Someone who has studied Fourier transforms would immediately recognise the "sinc" function as the characteristic signature of sideways shake, could guess how to deconvolve the image, and could start to program such a process; this is in fact how Unshake started out. But I eventually realised that it was a terribly difficult path if the real scene was not as ideal as the example I have chosen here, or if the blur was not as simple (a curved shake combined with poor focus, for instance). A much more adaptable method exists, commonly known as iterative blind deconvolution, but I don't think it is suitable for deconvolving a normal photo on a home computer.

Briefly, Unshake takes Fig 3 as its guess at the amplitude of the Fourier transform of the blur, and recovers the phase as the one that has the smallest effect on the image, a technique which can be shown to work well for simple blurs, and which keeps the errors as small as possible when it does fail.

For the case we have been looking at, this is what version 1.4 of Unshake
makes of Fig 4;

Fig 8: Unshake's default effort. | Fig 9: Unshake's thorough effort. |

©M.D. Cahill 2003-2005 C.E.