Title: Image Enhancement and Filtering Techniques
1. Image Enhancement and Filtering Techniques
- EE4H, M.Sc 0407191
- Computer Vision
- Dr. Mike Spann
- m.spann@bham.ac.uk
- http://www.eee.bham.ac.uk/spannm
2. Introduction
- Images may suffer from the following degradations:
- Poor contrast due to poor illumination or the finite sensitivity of the imaging device
- Electronic sensor noise or atmospheric disturbances leading to broadband noise
- Aliasing effects due to inadequate sampling
- Finite aperture effects or motion leading to spatial blurring
3. Introduction
- We will consider simple algorithms for image enhancement based on lookup tables
- Contrast enhancement
- We will also consider simple linear filtering algorithms
- Noise removal
4. Histogram equalisation
- In an image of low contrast, the grey levels are concentrated in a narrow band
- Define the grey level histogram of an image as h(i), where h(i) = number of pixels with grey level i
- For a low contrast image, the histogram will be concentrated in a narrow band
- The full grey level dynamic range is not used
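As a concrete aside (a minimal NumPy sketch of my own, not taken from the slides), h(i) can be computed directly for an 8-bit greyscale image:

```python
import numpy as np

def grey_level_histogram(image: np.ndarray) -> np.ndarray:
    """Return h, where h[i] = number of pixels with grey level i (8-bit image)."""
    # bincount over the flattened image counts each grey level;
    # minlength=256 keeps unused high grey levels as explicit zeros.
    return np.bincount(image.ravel(), minlength=256)

# Example: a synthetic low-contrast image occupies only a narrow band of grey levels.
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
h = grey_level_histogram(low_contrast)
print(h[90:150])   # non-zero counts are concentrated between grey levels 100 and 139
```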
5. Histogram equalisation
6. Histogram equalisation
- Can use a sigmoid lookup table to map input to output grey levels
- A sigmoid function g(i) controls the mapping from input to output pixel
- Can easily be implemented in hardware for maximum efficiency
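A minimal sketch of such a lookup table, assuming the common parameterisation g(i) = 255 / (1 + exp(-(i - centre) / width)); the exact sigmoid form and parameter names used in the lecture are not reproduced here:

```python
import numpy as np

def sigmoid_lut(centre=128.0, width=25.0) -> np.ndarray:
    """Build a 256-entry lookup table mapping input grey level i to output g(i)."""
    i = np.arange(256, dtype=np.float64)
    g = 255.0 / (1.0 + np.exp(-(i - centre) / width))   # sigmoid stretch around 'centre'
    return np.round(g).astype(np.uint8)

def apply_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Pointwise operation: every pixel is remapped independently through the table."""
    return lut[image]

# Usage on an 8-bit image 'img':
# enhanced = apply_lut(img, sigmoid_lut(centre=120.0, width=20.0))
```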
7. Histogram equalisation
8. Histogram equalisation
- One sigmoid parameter controls the position of maximum slope, the other controls the slope
- Problem: we need to determine the optimum sigmoid parameters for each image
- A better method would be to determine the best mapping function from the image data itself
9. Histogram equalisation
- A general histogram stretching algorithm is defined in terms of a transformation g(i)
- We require a transformation g(i) that maps any input histogram h(i) to the desired (flat) output histogram
10. Histogram equalisation
- Constraints (N × N, 8-bit image):
- No crossover in grey levels after the transformation
11. Histogram equalisation
- An adaptive histogram equalisation algorithm can be defined in terms of the cumulative histogram H(i), the number of pixels with grey level less than or equal to i
12. Histogram equalisation
- Since the required h(i) is flat, the required H(i) is a ramp
[Figure: desired flat histogram h(i) and the corresponding ramp cumulative histogram H(i)]
13. Histogram equalisation
- Let the actual histogram and cumulative histogram be h(i) and H(i)
- Let the desired histogram and desired cumulative histogram be h̃(i) and H̃(i)
- Let the transformation be g(i)
14. Histogram equalisation
- Since g(i) is an ordered transformation, the cumulative counts must match: H̃(g(i)) = H(i), which gives g(i) = round(255 · H(i) / N²) for an 8-bit N × N image (see the sketch below)
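Putting the mapping above into a short sketch (my own reconstruction of the standard algorithm, not the slides' code):

```python
import numpy as np

def equalise(image: np.ndarray) -> np.ndarray:
    """Histogram-equalise an 8-bit greyscale image via its cumulative histogram."""
    h = np.bincount(image.ravel(), minlength=256)        # h(i): grey level histogram
    H = np.cumsum(h)                                      # H(i): cumulative histogram
    n_pixels = image.size                                 # N * N for an N x N image
    g = np.round(255.0 * H / n_pixels).astype(np.uint8)   # ordered mapping g(i)
    return g[image]                                       # apply g(i) as a lookup table

# For the worked example that follows (32 x 32 image, 3-bit grey levels),
# 255 would be replaced by 7 (= 2**3 - 1) and n_pixels = 1024.
```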
15. Histogram equalisation
- Worked example: 32 × 32 pixel image with grey levels quantised to 3 bits
16. Histogram equalisation
17. Histogram equalisation
18. Histogram equalisation
19. Histogram equalisation
20. Histogram equalisation
- ImageJ demonstration
- http://rsb.info.nih.gov/ij/signed-applet
21. Image Filtering
- Simple image operators can be classified as 'pointwise' or 'neighbourhood' (filtering) operators
- Histogram equalisation is a pointwise operation
- More general filtering operations use neighbourhoods of pixels
22. Image Filtering
23. Image Filtering
- The output g(x,y) can be a linear or non-linear function of the set of input pixel grey levels f(x-M, y-M) ... f(x+M, y+M)
24. Image Filtering
25. Linear filtering and convolution
- Example: 3 × 3 arithmetic mean of an input image (ignoring floating point to byte rounding)
26. Linear filtering and convolution
- Convolution involves overlap, multiply and add with the convolution mask
27. Linear filtering and convolution
28. Linear filtering and convolution
- We can define the convolution operator mathematically
- This defines a 2D convolution of an image f(x,y) with a filter h(x,y) (see the sketch below)
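A minimal sketch of direct spatial domain convolution, g(x,y) = ΣΣ h(x',y') f(x-x', y-y'), with the mask flipped before the overlap, multiply and add stage (an illustrative implementation, not taken from the slides):

```python
import numpy as np

def convolve2d(f: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Direct 2D convolution of image f with mask h (same-size output, zero border)."""
    M = h.shape[0] // 2                      # assume an odd, square (2M+1) x (2M+1) mask
    h_flipped = h[::-1, ::-1]                # convolution flips the mask (unlike correlation)
    padded = np.pad(f.astype(np.float64), M)
    g = np.zeros(f.shape, dtype=np.float64)
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            window = padded[y:y + 2 * M + 1, x:x + 2 * M + 1]
            g[y, x] = np.sum(window * h_flipped)   # overlap, multiply and add
    return g

# Example: the 3 x 3 arithmetic mean of the previous slide is convolution with a
# mask of nine 1/9 coefficients.
# smoothed = convolve2d(img, np.full((3, 3), 1.0 / 9.0))
```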
29. Linear filtering and convolution
- Example: convolution with a Gaussian filter kernel
- σ determines the width of the filter and hence the amount of smoothing
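A minimal sketch of constructing a sampled Gaussian mask; the truncation at about 3σ and the normalisation are my assumptions rather than the lecture's exact choices:

```python
import numpy as np

def gaussian_kernel(sigma: float, M=None) -> np.ndarray:
    """Sampled, normalised 2D Gaussian; larger sigma -> wider kernel -> more smoothing."""
    if M is None:
        M = int(np.ceil(3.0 * sigma))       # truncate at roughly 3 sigma (a common choice)
    x = np.arange(-M, M + 1)
    X, Y = np.meshgrid(x, x)
    k = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
    return k / k.sum()                      # normalise so the mean grey level is preserved

# Usage with the direct convolution routine above (or any other implementation):
# smoothed = convolve2d(noisy_img, gaussian_kernel(sigma=1.5))
```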
30. Linear filtering and convolution
[Figure: Gaussian kernel of width σ]
31. Linear filtering and convolution
[Figure: original image, noisy image, and Gaussian-filtered results with σ = 1.5 and σ = 3.0]
32. Linear filtering and convolution
- ImageJ demonstration
- http://rsb.info.nih.gov/ij/signed-applet
33. Linear filtering and convolution
- We can also define convolution as a frequency domain operation
- Based on the discrete Fourier transform (DFT) F(u,v) of the image f(x,y)
34. Linear filtering and convolution
- The inverse DFT is defined by
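For reference, the standard 2D DFT pair for an N × N image is reproduced below; the slides' own normalisation convention may differ (for example, where the 1/N² factor is placed):

```latex
% 2D discrete Fourier transform of an N x N image f(x,y)
F(u,v) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\, e^{-j 2\pi (ux + vy)/N}

% Inverse DFT
f(x,y) = \frac{1}{N^{2}} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} F(u,v)\, e^{+j 2\pi (ux + vy)/N}
```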
35. Linear filtering and convolution
36. Linear filtering and convolution
37. Linear filtering and convolution
- F(u,v) is the frequency content of the image at spatial frequency position (u,v)
- Smooth regions of the image contribute low frequency components to F(u,v)
- Abrupt transitions in grey level (lines and edges) contribute high frequency components to F(u,v)
38. Linear filtering and convolution
- We can compute the DFT directly using the formula
- An N × N point DFT would require N² floating point multiplications per output point
- Since there are N² output points, the computational complexity of the DFT is N⁴
- N⁴ ≈ 4 × 10⁹ for N = 256
- Bad news! Many hours on a workstation
39. Linear filtering and convolution
- The FFT algorithm was developed in the 1960s for seismic exploration
- It reduced the DFT complexity to 2N² log₂ N
- 2N² log₂ N ≈ 10⁶ for N = 256
- A few seconds on a workstation
40. Linear filtering and convolution
- The filtering interpretation of convolution can be understood in terms of the convolution theorem
- The convolution of an image f(x,y) with a filter h(x,y) is defined as
41. Linear filtering and convolution
42. Linear filtering and convolution
- Note that the filter mask is shifted and inverted prior to the overlap, multiply and add stage of the convolution
- Define the DFTs of f(x,y), h(x,y) and g(x,y) as F(u,v), H(u,v) and G(u,v)
- The convolution theorem states simply that G(u,v) = F(u,v) H(u,v)
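A minimal numerical sketch of the theorem using NumPy's FFT routines (my own illustration; zero padding each transform to N + M - 1 points per axis avoids the wrap-around discussed on slide 45):

```python
import numpy as np

def fft_convolve(f: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Linear convolution of f and h computed as G(u,v) = F(u,v) H(u,v)."""
    # Pad both transforms to the full linear-convolution size (N + M - 1) per axis;
    # without padding, the product of DFTs gives circular (wrap-around) convolution.
    out_shape = (f.shape[0] + h.shape[0] - 1, f.shape[1] + h.shape[1] - 1)
    F = np.fft.fft2(f, s=out_shape)
    H = np.fft.fft2(h, s=out_shape)
    return np.real(np.fft.ifft2(F * H))

# Quick check on a small image and a 3 x 3 averaging mask:
rng = np.random.default_rng(1)
f = rng.random((8, 8))
h = np.full((3, 3), 1.0 / 9.0)
g = fft_convolve(f, h)
print(g.shape)        # (10, 10) = (N + M - 1, N + M - 1)
```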
43. Linear filtering and convolution
- As an example, suppose h(x,y) corresponds to a linear filter with frequency response defined as follows
- This removes the low frequency components of the image
44. Linear filtering and convolution
[Figure: frequency domain filtering pipeline: DFT, multiply by the filter response, IDFT]
45. Linear filtering and convolution
- Frequency domain implementation of convolution
- Image f(x,y): N × N pixels
- Filter h(x,y): M × M filter mask points
- Usually M << N
- In this case the filter mask is 'zero-padded' out to N × N
- The output image g(x,y) is of size (N + M - 1) × (N + M - 1) pixels; the filter mask wraps around, truncating g(x,y) to an N × N image
46. Linear filtering and convolution
47. Linear filtering and convolution
48. Linear filtering and convolution
- We can evaluate the computational complexity of implementing convolution in the spatial and spatial frequency domains
- An N × N image is to be convolved with an M × M filter
- Spatial domain convolution requires M² floating point multiplications per output point, or N²M² in total
- Frequency domain implementation requires 3 × (2N² log₂ N) + N² floating point multiplications (2 DFTs + 1 IDFT + N² multiplications of the DFTs)
49. Linear filtering and convolution
- Example 1: N = 512, M = 7
- Spatial domain implementation requires 1.3 × 10⁷ floating point multiplications
- Frequency domain implementation requires 1.4 × 10⁷ floating point multiplications
- Example 2: N = 512, M = 32
- Spatial domain implementation requires 2.7 × 10⁸ floating point multiplications
- Frequency domain implementation requires 1.4 × 10⁷ floating point multiplications
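A quick sketch that reproduces the operation counts above from the slide 48 formulas (the two-significant-figure rounding is mine):

```python
import numpy as np

def spatial_cost(N, M):
    return N**2 * M**2                        # M^2 multiplications per output point

def frequency_cost(N):
    # 2 DFTs + 1 IDFT + N^2 products of the DFTs; the mask is zero-padded to N x N,
    # so M does not enter the count.
    return 3 * (2 * N**2 * np.log2(N)) + N**2

for N, M in [(512, 7), (512, 32)]:
    print(N, M, f"spatial {spatial_cost(N, M):.1e}", f"frequency {frequency_cost(N):.1e}")
# Roughly 1.3e7 vs 1.4e7 for M = 7, and 2.7e8 vs 1.4e7 for M = 32.
```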
50. Linear filtering and convolution
- For smaller mask sizes, spatial and frequency domain implementations have about the same computational complexity
- However, we can speed up the frequency domain implementation by tessellating the image into sub-blocks and filtering these independently
- It is not quite that simple: we need to overlap the filtered sub-blocks to remove blocking artefacts
- Overlap and add algorithm
51. Linear filtering and convolution
- We can look at some examples of linear filters commonly used in image processing and their frequency responses
- In particular we will look at a smoothing filter and a filter to perform edge detection
52. Linear filtering and convolution
- Smoothing (low pass) filter
- Simple arithmetic averaging
- Useful for smoothing images corrupted by additive broadband noise
53. Linear filtering and convolution
[Figure: averaging filter in the spatial domain and its spatial frequency domain response]
54. Linear filtering and convolution
- Edge detection filter
- Simple differencing filter used for enhancing edges
- Has a bandpass frequency response
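A minimal sketch of such a differencing filter, assuming the 1D mask h = [1, 0, -1] applied along image rows (the slides' exact 2D mask is not reproduced here):

```python
import numpy as np

def horizontal_difference(image: np.ndarray) -> np.ndarray:
    """Convolve each row with h = [1, 0, -1] to respond to horizontal grey level changes."""
    f = image.astype(np.float64)
    g = np.zeros_like(f)
    # Convolution with h(-1) = 1, h(1) = -1 gives g(x) = f(x + 1) - f(x - 1)
    g[:, 1:-1] = f[:, 2:] - f[:, :-2]
    return g

# Edges (abrupt grey level transitions) give large |g|; smooth regions give values near zero.
```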
55. Linear filtering and convolution
- ImageJ demonstration
- http://rsb.info.nih.gov/ij/signed-applet
56. Linear filtering and convolution
57. Linear filtering and convolution
- We can evaluate the (1D) frequency response of the filter h(x) = {1, 0, -1} from the DFT definition
58. Linear filtering and convolution
- The magnitude of the response is therefore |H(u)| = 2|sin(2πu/N)| (derivation sketched below)
- This has a bandpass characteristic
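A sketch of the algebra, assuming the mask samples are h(-1) = 1, h(0) = 0, h(1) = -1 in an N-point DFT; a different indexing only changes the phase, not the magnitude:

```latex
H(u) = \sum_{x} h(x)\, e^{-j 2\pi u x / N}
     = e^{+j 2\pi u / N} - e^{-j 2\pi u / N}
     = 2j \sin\!\left(\frac{2\pi u}{N}\right)

|H(u)| = 2 \left| \sin\!\left(\frac{2\pi u}{N}\right) \right|
```

The magnitude is zero at u = 0 and u = N/2 and peaks at u = N/4, which gives the bandpass characteristic noted above.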
59. Linear filtering and convolution
60. Conclusion
- We have looked at basic (low level) image processing operations
- Enhancement
- Filtering
- These are usually important pre-processing steps carried out in computer vision systems (often in hardware)