Edge detection

Title: Edge detection
Provided by: cpscUalrE
Slides: 53

1
Edge detection
2
Edge detection
Edges are places in the image with strong
intensity contrast. Since edges often occur at
image locations representing object boundaries,
edge detection is used extensively in image
segmentation, where we want to divide the image
into areas corresponding to different objects.
Representing an image by its edges has the
further advantage that the amount of data is
reduced significantly while most of the image
information is retained. Since edges consist
mainly of high frequencies, we could, in theory,
detect edges by applying a high-pass filter in
the Fourier domain or by convolving the image
with an appropriate kernel in the spatial domain.
In practice, edge detection is performed in the
spatial domain, because it is computationally
less expensive and often yields better results.
3
Typically, there are three steps to perform edge
detection:
1. Noise reduction, where we try to suppress as
much noise as possible without smoothing away
the meaningful edges.
2. Edge enhancement, where we apply a filter that
responds strongly at edges and weakly elsewhere,
so that the edges may be identified as local
maxima in the filter's output. One suggestion is
to use some kind of high-pass filter.
3. Edge localization, where we decide which of
the local maxima output by the filter are
meaningful edges and which are caused by noise.
4
Derivative filters
An image can be represented as a surface, with
height corresponding to gray-level value. The
brightness function depends on two variables, the
co-ordinates in the image plane (gray-level value
z = f(x, y)). As averaging of the pixels over a
region is analogous to integration,
differentiation can be expected to have the
opposite effect and thus sharpen an image. Edges
are pixels where the brightness function changes
abruptly. We can describe changes of continuous
functions using derivatives. Since the brightness
function depends on the two image-plane
co-ordinates, operators describing edges are
expressed using partial derivatives.
5
Derivative filters What is the smallest possible
window we can choose?
The difference of the gray values of two adjacent
pixels is an estimate of the first derivative of
the intensity function (image brightness
function) with respect to the spatial variable
along the direction in which we take the
difference. This is because first derivatives are
approximated by first differences in the discrete
case:

gx(x, y) = f(x + 1, y) - f(x, y)    (1)

Calculating gx at each pixel position is
equivalent to convolving the image with a mask
(filter) of the form [-1 1] in the x direction,
and calculating gy is equivalent to convolving
the image with the same mask transposed, in the
y direction.
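As a quick sketch (using a small hypothetical array rather than an image from the slides), the first-difference masks can be applied with plain NumPy slicing:

```python
import numpy as np

# Hypothetical 3 x 4 gray-level image with a vertical step edge.
f = np.array([[10, 10, 50, 50],
              [10, 10, 50, 50],
              [10, 10, 50, 50]], dtype=float)

# First differences approximate the partial derivatives:
# gx(x, y) = f(x+1, y) - f(x, y)  -- the mask [-1 1] along x (columns)
# gy(x, y) = f(x, y+1) - f(x, y)  -- the same mask along y (rows)
gx = f[:, 1:] - f[:, :-1]
gy = f[1:, :] - f[:-1, :]
```

The vertical step shows up as a strong response in gx and no response in gy.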
6
Identifying the local maxima as candidate edge
pixels
We can also express the gradient calculation as a
pair of convolutions:

gx = hx * f,  gy = hy * f    (2)

In the first output, produced by convolution with
mask hx, any pixel whose absolute value is larger
than the values of its left and right neighbors
is a candidate vertical-edge pixel. In the second
output, produced by convolution with mask hy, any
pixel whose absolute value is larger than the
values of its top and bottom neighbors is a
candidate horizontal-edge pixel. The process of
identifying the local maxima as candidate edge
pixels is called non-maxima suppression.
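A minimal 1-D sketch of this non-maxima suppression (the response values are hypothetical):

```python
import numpy as np

# Absolute responses |gx| along one image row (hypothetical values).
row = np.array([0, 1, 4, 9, 4, 1, 0], dtype=float)

# A pixel is a candidate vertical-edge pixel if its absolute value
# exceeds both its left and right neighbours.
candidates = [i for i in range(1, len(row) - 1)
              if row[i] > row[i - 1] and row[i] > row[i + 1]]
```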
7
The Prewitt operator
Consider the 3 x 3 arrangement of pixels about
the pixel (x, y):

gx = hx * f,  gy = hy * f    (3)

where the kernels are

hx = [-1 0 1; -1 0 1; -1 0 1]
hy = [-1 -1 -1; 0 0 0; 1 1 1]

Clearly, the kernel hx is sensitive to changes
in the x direction, i.e. edges that run
vertically or have a vertical component.
Similarly, the kernel hy is sensitive to changes
in the y direction, i.e. edges that run
horizontally or have a horizontal component.
8
The derivatives of the image
Figure 1: 1st and 2nd derivatives of an edge,
illustrated in one dimension.
9
Calculating the derivatives of the image
We can see that the position of the edge can be
estimated with the maximum of the 1st derivative
or with the zero-crossing of the 2nd derivative.
Therefore we want to find a technique to
calculate the derivative of a two-dimensional
image. For a discrete one-dimensional function
f(i), the first derivative can be approximated by

f'(i) = f(i + 1) - f(i)

Calculating this formula is equivalent to
convolving the function with the mask [-1 1].
Similarly, the 2nd derivative can be estimated by
convolving f(i) with the mask [1 -2 1].
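These two approximations can be checked with np.convolve (note that convolution flips the kernel, so [1, -1] is passed to realize the correlation mask [-1 1]); the signal here is a hypothetical ramp:

```python
import numpy as np

# Hypothetical 1-D signal: flat, then a ramp, then flat.
f = np.array([0, 0, 1, 2, 3, 3, 3], dtype=float)

# First derivative: forward difference f(i+1) - f(i).
d1 = np.convolve(f, [1, -1], mode='valid')
# Second derivative: f(i-1) - 2 f(i) + f(i+1), i.e. the mask [1 -2 1].
d2 = np.convolve(f, [1, -2, 1], mode='valid')
```

The first derivative is maximal along the ramp; the second derivative changes sign (a zero-crossing) where the ramp begins and ends.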
10
There are two common approaches to estimate the
1st derivative in a two-dimensional image,
Prewitt compass edge detection and gradient edge
detection.
11
Gradient edge detection
Gradient edge detection is the second and more
widely used technique. Here, the image is
convolved with only two kernels, one estimating
the gradient in the x-direction, gx, the other
the gradient in the y-direction, gy.
Changes of the image function are detected based
on an estimate of the gray-level gradient at a
pixel. The gradient is the two-dimensional
equivalent of the first derivative and is defined
as the gradient vector

grad f(x, y) = (gx, gy) = (df/dx, df/dy)    (4)

The two gradients (in the x and y directions)
computed at each pixel by Equation (1) or Eq. (3)
can be regarded as the x and y components of a
gradient vector (4).
12
The magnitude of the gray level gradient
There are two important properties associated
with the gradient: the gradient magnitude and the
gradient direction.
The outputs of the two convolutions are squared,
added and square-rooted to produce the gradient
magnitude

|grad f| = sqrt(gx^2 + gy^2)    (5)

The magnitude of the gradient vector is the
length of the hypotenuse of the right triangle
having sides gx and gy, and this reflects the
strength of the edge, or edge response, at any
given pixel.
It is common practice to approximate the gradient
magnitude by absolute values:

|grad f| = |gx| + |gy|    (6)
13
The direction of the gray level gradient
From vector analysis, the direction of the
gradient is

theta = arctan(gy / gx)    (7)

This vector is oriented along the direction of
the maximum rate of increase of the gray-level
function f(x, y).
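Equations (5)-(7) at a single pixel, with hypothetical gradient components:

```python
import numpy as np

# Hypothetical gradient components at one pixel.
gx, gy = 3.0, 4.0

magnitude = np.sqrt(gx**2 + gy**2)   # Eq. (5): length of the gradient vector
approx    = abs(gx) + abs(gy)        # Eq. (6): common cheap approximation
direction = np.arctan2(gy, gx)       # Eq. (7): gradient direction in radians
```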
14
In the localization step, we must identify the
meaningful edges.
Whichever operator is used to compute the
gradient, the resulting vector contains
information about how strong the edge is at that
pixel and what its direction is. The edge
magnitude is a real number. Any pixel whose
gradient magnitude exceeds a specified threshold
value T is said to be an edge pixel, and the
others are not:

|grad f(x, y)| > T    (8)
An alternative technique is to look for local
maxima in the gradient image, thus producing
one-pixel-wide edges. A more sophisticated
technique is used by the Canny edge detector: it
first applies a gradient edge detector to the
image and then finds the edge pixels using
non-maximal suppression and hysteresis tracking.
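The threshold test of Eq. (8) is a single comparison per pixel; here it is sketched on a hypothetical gradient-magnitude image, with the threshold 150 used in the later Sobel examples:

```python
import numpy as np

# Hypothetical gradient-magnitude image.
mag = np.array([[ 12, 160, 155],
                [  8, 149,  20]], dtype=float)

T = 150            # threshold value
edges = mag > T    # Eq. (8): a pixel is an edge pixel if |grad f| > T
```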
15
The most common kernels used for the gradient
edge detector are the Sobel, Roberts Cross and
Prewitt operators.
16
Sobel Edge Detector
The Sobel operator performs a 2-D spatial
gradient measurement on an image and so
emphasizes regions of high spatial frequency that
correspond to edges. Typically it is used to find
the approximate absolute gradient magnitude at
each point in an input grayscale image.
In theory at least, the operator consists of a
pair of 3 x 3 convolution kernels, as shown in
the Figure. One kernel is simply the other
rotated by 90 degrees.
This operator places an emphasis on pixels that
are closer to the center of the mask. The Sobel
operator is one of the most commonly used edge
detectors.
17
Sobel Edge Detector
Often, this absolute magnitude is the only output
the user sees. The two components of the gradient
are conveniently computed and added in a single
pass over the input image using the
pseudo-convolution operator shown in the Figure:

|G| = |Gx| + |Gy|    (9)
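A sketch of the full Sobel computation on a hypothetical image with one vertical step edge (the kernels are the standard Sobel masks; the small correlation helper merely stands in for a convolution routine):

```python
import numpy as np

# Standard Sobel kernels; hy is hx rotated by 90 degrees.
hx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
hy = hx.T

# Hypothetical image with a vertical step edge.
img = np.array([[10, 10, 90, 90],
                [10, 10, 90, 90],
                [10, 10, 90, 90],
                [10, 10, 90, 90]], dtype=float)

def correlate3(f, k):
    """Valid-mode 3x3 correlation written out with slicing."""
    h, w = f.shape[0] - 2, f.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * f[i:i + h, j:j + w]
    return out

gx = correlate3(img, hx)
gy = correlate3(img, hy)
mag = np.abs(gx) + np.abs(gy)   # Eq. (9): approximate gradient magnitude
```

The vertical edge produces a strong gx response and zero gy everywhere.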
18
Sobel Edge Detector - Guidelines for Use
The Sobel operator is slower to compute than the
Roberts Cross operator, but its larger
convolution kernel smoothes the input image to a
greater extent and so makes the operator less
sensitive to noise. The operator also generally
produces considerably higher output values for
similar edges, compared with the Roberts Cross.
19
Compare the results of applying the Sobel
operator with the equivalent Roberts Cross output
Sobel operator
Roberts Cross output

20
Applying the Sobel operator- Examples
All edges in the image have been detected and can
be nicely separated from the background using a
threshold of 150, as can be seen in the Figure

21
Although the Sobel operator is not as sensitive
to noise as the Roberts Cross operator, it still
amplifies high frequencies. The following image
is the result of adding Gaussian noise with a
standard deviation of 15 to the original image.
Applying the Sobel operator yields
and thresholding the result at a value of 150
produces
We can see that the noise has been amplified
during the edge detection.
22
There are three problems with the
threshold-on-gradient-magnitude approach:
  • First, the boundaries between scene elements are
    not always sharp. (See Figure 7.20, p. 167 of
    the textbook.)
  • Second, the noise in an image can sometimes
    produce gradients as high as, or even higher
    than, those resulting from meaningful edges.
    (See Figure 7.21, p. 168 of the textbook.)
  • Third, with a simple threshold approach, the
    local maximum in gray-level gradient associated
    with an edge lies at the summit of a ridge.
    Thresholding detects a portion of this ridge,
    rather than the single point of maximum gradient.
    The ridge can be rather broad in the case of
    diffuse edges, resulting in a thick band of
    pixels in the edge map. (See Figure 7.19, p. 166
    of the textbook.)

23
How we can choose the weights of a 3 x 3 mask for
edge detection ?
Suppose we are going to use one such mask to
calculate gx and another to calculate gy. Such
masks must obey the following conditions:
1. The mask that calculates gx must be produced
from the mask that calculates gy by rotation by
90 degrees.
2. We do not want to give any extra weight to the
left or the right neighbors of the central pixel,
so we must have identical weights in the left and
right columns.
24
How we can choose the weights of a 3 x 3 mask for
edge detection ?
Let us say that we want to subtract the signal
in front of the central pixel from the signal
behind it, in order to find local differences,
and we want these two subtracted signals to have
equal weights.
If the image is absolutely smooth, we want zero
response, so the sum of all the weights must be
zero. Therefore, A22 = -2A21.
25
How we can choose the weights of a 3 x 3 mask for
edge detection ?
In the case of a smooth signal, and as we
differentiate in the direction of columns, we
expect each column to produce 0 output.
Therefore, A21 = 0.
We can divide these weights throughout by A11, so
that finally the mask depends only on one
parameter.
If we choose K = 2, we have the Sobel masks for
differentiating an image along the two directions.
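A small sketch of this one-parameter family of masks (the function name and indexing are my own; K = 2 gives the Sobel mask and K = 1 the Prewitt mask):

```python
import numpy as np

def mask_x(K):
    """3x3 x-derivative mask from a one-parameter family: identical left
    and right columns, zero middle column, and a centre row weighted by K.
    K = 2 gives the Sobel mask, K = 1 the Prewitt mask."""
    return np.array([[-1, 0, 1],
                     [-K, 0, K],
                     [-1, 0, 1]], dtype=float)
```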
26
Prewitt compass edge detection (not to be
confused with the Prewitt operator for
differential gradient edge detection)
Compass Edge Detection is an alternative approach
to the differential gradient edge detection (see
the Roberts Cross and Sobel operators). The
operation usually outputs two images, one
estimating the local edge gradient magnitude and
one estimating the edge orientation of the input
image.
When using compass edge detection, the image is
convolved with a set of (in general 8)
convolution kernels, each of which is sensitive
to edges in a different orientation. For each
pixel, the local edge gradient magnitude is
estimated as the maximum response of all 8
kernels at this pixel location:

|G| = max(|Gi|, i = 1, ..., n)    (10)

where Gi is the response of kernel i at the
particular pixel position and n is the number of
convolution kernels. The local edge orientation
is estimated as the orientation of the kernel
that yields the maximum response.
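A sketch of Eq. (10) using the standard Prewitt compass base template; the circular shift of the border coefficients below is one of the two possible rotation directions (conventions differ):

```python
import numpy as np

# Prewitt compass base template, sensitive to edges at 0 degrees.
k0 = np.array([[-1, 1, 1],
               [-1, -2, 1],
               [-1, 1, 1]], dtype=float)

def rotate45(k):
    """Shift the 8 border coefficients one step circularly (45 degrees)."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    out = k.copy()
    vals = [k[i] for i in idx]
    shifted = vals[-1:] + vals[:-1]
    for p, v in zip(idx, shifted):
        out[p] = v
    return out

kernels = [k0]
for _ in range(7):
    kernels.append(rotate45(kernels[-1]))

def compass_response(patch):
    """Eq. (10): edge magnitude is the maximum kernel response;
    the winning kernel index encodes the edge orientation."""
    responses = [float(np.sum(k * patch)) for k in kernels]
    i = int(np.argmax(np.abs(responses)))
    return abs(responses[i]), i
```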
27
Prewitt compass edge detection
Various kernels can be used for this operation;
for the following discussion we will use the
Prewitt kernel. Two templates out of the set of 8
are shown in the Figure.
Figure: Prewitt compass edge-detecting templates
sensitive to edges at 0 and 45 degrees.
28
Prewitt compass edge detection
The whole set of 8 kernels is produced by taking
one of the kernels and rotating its coefficients
circularly. Each of the resulting kernels is
sensitive to an edge orientation ranging from 0
to 315 degrees in steps of 45 degrees, where 0
degrees corresponds to a vertical edge.
The maximum response G for each pixel is the
value of the corresponding pixel in the output
magnitude image. The values for the output
orientation image lie between 1 and 8, depending
on which of the 8 kernels produced the maximum
response. This edge detection method is also
called edge template matching, because a set of
edge templates is matched to the image, each
representing an edge in a certain orientation.
The edge magnitude and orientation of a pixel are
then determined by the template that best matches
the local area of the pixel.
29
Compass edge detection
The compass edge detector is an appropriate way
to estimate the magnitude and orientation of an
edge. Although differential gradient edge
detection needs a rather time-consuming
calculation to estimate the orientation from the
magnitudes in the x- and y-directions, the
compass edge detection obtains the orientation
directly from the kernel with the maximum
response. The compass operator is limited to
(here) 8 possible orientations; however,
experience shows that most direct orientation
estimates are not much more accurate.
other hand, the compass operator needs (here) 8
convolutions for each pixel, whereas the gradient
operator needs only 2, one kernel being sensitive
to edges in the vertical direction and one to the
horizontal direction.
30
The result for the edge magnitude image is very
similar with both methods to estimate the 1st
derivative in a two-dimensional image
If we apply the Prewitt compass operator to the
input image, we get two output images.
The first shows the local edge magnitude for each
pixel. We can't see much in this image, because
the response of the Prewitt kernel is too small.
Applying histogram equalization to this image
yields a result similar to the image that was
processed with the Sobel differential gradient
edge detector and histogram-equalized.
31
If we apply the Prewitt Compass Operator to the
input image we get two output images
The first image is the gray-level orientation
image, contrast-stretched for better display.
That means the image contains 8 gray-level values
between 0 and 255, each of them corresponding to
an edge orientation. The orientation image as a
color-labeled image (containing 8 colors, each
corresponding to one edge orientation) is shown
in the second image.
32
Compass edge detection - Examples
Another image suitable for edge detection is
The corresponding outputs of the compass edge
detector, for the magnitude and orientation
respectively, are
This image contains little noise and most of the
resulting edges correspond to boundaries of
objects. Again, we can see that most of the
roughly vertical books were assigned the same
orientation label, although the orientation
varies by some amount.
33
The Laplacian Operator
The Laplacian is a 2-D isotropic measure of the
2nd spatial derivative of an image. The Laplacian
of an image highlights regions of rapid intensity
change and is therefore often used for edge
detection. These second-order derivatives can be
used for edge localization. The Laplacian is
often applied to an image that has first been
smoothed with something approximating a Gaussian
smoothing filter in order to reduce its
sensitivity to noise, and hence the two variants
will be described together here. The operator
normally takes a single graylevel image as input
and produces another graylevel image as output.
The Laplacian operator combines the second-order
derivatives as follows:

L(x, y) = d2f/dx2 + d2f/dy2    (11)
34
The Laplacian Operator
Since the input image is represented as a set of
discrete pixels, we have to find a discrete
convolution kernel that can approximate the
second derivatives in the definition of the
Laplacian. Three commonly used small kernels are
shown in Figure
Because these kernels are approximating a second
derivative measurement on the image, they are
very sensitive to noise. To counter this, the
image is often Gaussian smoothed before applying
the Laplacian filter. This pre-processing step
reduces the high frequency noise components prior
to the differentiation step.
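Two of the commonly used discrete Laplacian kernels, applied here to tiny hypothetical patches:

```python
import numpy as np

# Two common 3x3 Laplacian kernels (4- and 8-neighbour versions).
lap4 = np.array([[0,  1, 0],
                 [1, -4, 1],
                 [0,  1, 0]], dtype=float)
lap8 = np.array([[1,  1, 1],
                 [1, -8, 1],
                 [1,  1, 1]], dtype=float)

flat = np.full((3, 3), 7.0)   # constant-intensity patch
bump = flat.copy()
bump[1, 1] = 10.0             # one bright centre pixel

r_flat = float((lap4 * flat).sum())   # zero response on constant regions
r_bump = float((lap4 * bump).sum())   # strong negative response at the bump
```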
35
Laplacian of Gaussian
1. The smoothing filter is a Gaussian.
2. The enhancement step is the second derivative
(the Laplacian in two dimensions).
3. The detection criterion is the presence of a
zero crossing in the second derivative with a
corresponding large peak in the first derivative.
In this approach, an image should first be
convolved with a Gaussian filter:

G(x, y) = exp(-(x^2 + y^2) / (2 sigma^2))    (12)

The order of performing differentiation and
convolution can be interchanged because of the
linearity of the operators involved:

L(G * f) = (L G) * f    (13)
36
Laplacian of Gaussian
The 2-D LoG function centered on zero and with
Gaussian standard deviation sigma has the form

LoG(x, y) = -(1 / (pi sigma^4))
            (1 - (x^2 + y^2) / (2 sigma^2))
            exp(-(x^2 + y^2) / (2 sigma^2))    (14)

Figure 2: The 2-D Laplacian of Gaussian (LoG)
function. The x and y axes are marked in standard
deviations sigma.
37
Laplacian of Gaussian - Kernel
A discrete kernel that approximates this function
(for a Gaussian sigma = 1.4) is shown in Figure 3.
Note that as the Gaussian is made increasingly
narrow, the LoG kernel becomes the same as the
simple Laplacian kernels shown earlier. This is
because smoothing with a very narrow Gaussian
(sigma < 0.5 pixels) on a discrete grid has no
effect. Hence on a discrete grid, the simple
Laplacian can be seen as a limiting case of the
LoG for narrow Gaussians.
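Eq. (14) can be sampled directly to build such a kernel; the zero-mean correction at the end is a practical touch (an assumption of this sketch, not from the slides) that forces an exactly zero response on constant regions:

```python
import numpy as np

def log_kernel(size, sigma):
    """Sample the 2-D LoG function of Eq. (14) on a size x size grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r2 = x**2 + y**2
    k = (-(1.0 / (np.pi * sigma**4))
         * (1.0 - r2 / (2.0 * sigma**2))
         * np.exp(-r2 / (2.0 * sigma**2)))
    return k - k.mean()   # zero mean: no response on constant intensity

k = log_kernel(9, 1.4)
```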
38
Laplacian of Gaussian - Guidelines for Use
  • The LoG operator calculates the second spatial
    derivative of an image. This means that in areas
    where the image has a constant intensity (i.e.
    where the intensity gradient is zero), the LoG
    response will be zero. In the vicinity of a
    change in intensity, however, the LoG response
    will be positive on the darker side, and negative
    on the lighter side. This means that at a
    reasonably sharp edge between two regions of
    uniform but different intensities, the LoG
    response will be
  • zero at a long distance from the edge,
  • positive just to one side of the edge,
  • negative just to the other side of the edge,
  • zero at some point in between, on the edge
    itself.

39
Laplacian of Gaussian - Guidelines for Use
Figure 4: Response of a 1-D LoG filter to a step
edge. The left-hand graph shows a 1-D image, 200
pixels long, containing a step edge. The
right-hand graph shows the response of a 1-D LoG
filter with Gaussian sigma = 3 pixels.
40
Laplacian of Gaussian - Example
The image is the effect of applying an LoG filter
with Gaussian sigma = 1.0, again using a 7 x 7
kernel.
41
Laplacian of Gaussian (zero crossing detector)
The starting point for the zero crossing detector
is an image which has been filtered using the
Laplacian of Gaussian filter. However, zero
crossings also occur at any place where the image
intensity gradient starts increasing or starts
decreasing, and this may happen at places that
are not obviously edges. Often zero crossings are
found in regions of very low gradient where the
intensity gradient wobbles up and down around
zero.
Once the image has been LoG filtered, it only
remains to detect the zero crossings. This can be
done in several ways. The simplest is to simply
threshold the LoG output at zero, to produce a
binary image where the boundaries between
foreground and background regions represent the
locations of zero-crossing points. These
boundaries can then be easily detected and marked
in a single pass, e.g. using some morphological
operator. For instance, to locate all boundary
points, we simply have to mark each foreground
point that has at least one background neighbor.
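The threshold-at-zero scheme can be sketched in a few lines on a hypothetical 1-D LoG profile:

```python
import numpy as np

# Hypothetical LoG-filtered profile crossing zero at an edge.
log_out = np.array([[0, 2, 5, -5, -2, 0, 0]], dtype=float)

# Threshold at zero: foreground = strictly positive response.
fg = log_out > 0

# Mark each foreground pixel with at least one background neighbour
# (horizontally, for this 1-D sketch) as a boundary point.
boundary = np.zeros_like(fg)
boundary[:, 1:-1] = fg[:, 1:-1] & (~fg[:, :-2] | ~fg[:, 2:])
```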
42
Laplacian of Gaussian (zero crossing detector)
The problem with this technique is that it will
tend to bias the location of the zero-crossing
edge to either the light side or the dark side of
the edge, depending on whether we decide to look
for the edges of foreground regions or for the
edges of background regions.
A better technique is to consider points on both
sides of the threshold boundary, and choose the
one with the lowest absolute magnitude of the
Laplacian, which will hopefully be closest to the
zero crossing. Since the zero crossings
generally fall in between two pixels in the LoG
filtered image, an alternative output
representation is an image grid which is
spatially shifted half a pixel across and half a
pixel down, relative to the original image. Such
a representation is known as a dual lattice.
This does not actually localize the zero crossing
any more accurately, of course. A more accurate
approach is to perform some kind of interpolation
to estimate the position of the zero crossing to
sub-pixel precision.
43
Laplacian of Gaussian - Applications
The behavior of the LoG zero crossing edge
detector is largely governed by the standard
deviation of the Gaussian used in the LoG filter.
The higher this value is set, the more small
features will be smoothed out of existence, and
hence fewer zero crossings will be produced.
Hence, this parameter can be set to remove
unwanted detail or noise as desired. The idea
that at different smoothing levels differently
sized features become prominent is referred to as
'scale'.
The image is the result of applying a LoG filter
with Gaussian standard deviation 1.0.
This image contains detail at a number of
different scales.
44
Laplacian of Gaussian - Applications
Note that in this and in the following LoG output
images, the true output contains negative pixel
values. For display purposes the graylevels have
been offset so that displayed graylevel 128
corresponds to an actual value of zero, and
rescaled to make the image variation clearer.
Since we are only interested in zero crossings
this rescaling is unimportant.
The image shows the zero crossings from this
image. Note the large number of minor features
detected, which are mostly due to noise or very
faint detail. This smoothing corresponds to a
fine 'scale'.
45
Marr (1982) has suggested that human visual
systems use zero crossing detectors based on LoG
filters at several different scales (Gaussian
widths).
46
Difference of Gaussians (DoG) Operator
It is possible to approximate the LoG filter with
a filter that is just the difference of two
differently sized Gaussians. Such a filter is
known as a DoG filter (short for 'Difference of
Gaussians').
A) For retinal ganglion cells and LGN neurons,
the receptive field (RF) has a roughly circular,
centre-surround organisation. Two configurations
are observed: one in which the RF centre is
responsive to bright stimuli (ON-centre) and the
surround is responsive to dark stimuli, and the
other (OFF-centre) in which the respective
polarities are reversed.
47
Difference of Gaussians ( DoG) Operator
The common feature of the two types of receptive
fields is that the centre and surround regions
are antagonistic. The DoG operator uses simple
linear differences to model this centre-surround
mechanism and the response R(x,t) is analogous to
a measure of contrast of events happening between
centre and surround regions of the operator. (see
Figure 5)
48
Difference of Gaussians ( DoG) Operator
The difference of Gaussians (DoG) model is
composed of the difference of two response
functions that model the centre and surround
mechanisms of retinal cells. Mathematically, the
DoG operator is defined as

DoG(x) = alpha_c Gc(x) - alpha_s Gs(x)    (15)

where G is a two-dimensional Gaussian operator at
x. Parameters sigma_c and sigma_s are the
standard deviations of the centre and surround
Gaussian functions respectively. These Gaussian
functions are weighted by the integrated
sensitivities alpha_c (for the centre) and
alpha_s (for the surround). The response R(x, t)
of the DoG filter to an input signal s(x, t) at x
during time t is given by

R(x, t) = (DoG * s)(x, t)    (16)
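A 1-D sketch of Eqs. (15)-(16), ignoring the time dependence (the parameter values here are hypothetical):

```python
import numpy as np

def gaussian(x, sigma):
    """Normalized 1-D Gaussian, standing in for the centre/surround
    response functions (the slides use 2-D Gaussians)."""
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def dog(x, sigma_c=1.0, sigma_s=3.0, alpha_c=1.0, alpha_s=1.0):
    """Eq. (15): weighted centre Gaussian minus weighted surround Gaussian."""
    return alpha_c * gaussian(x, sigma_c) - alpha_s * gaussian(x, sigma_s)

x = np.linspace(-10.0, 10.0, 201)
profile = dog(x)   # ON-centre profile: positive centre, negative surround
```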
49
Extending the DoG operator to detect changes in a
sequence of images
(Diagram: a multiscale DoG architecture. A scale
S represents a pair of values for sigma_c and
sigma_s. ON-centre events are detected between
time t and time t+1 where
Nc(x, t) - Nc(x, t-1) > theta_1. ON and OFF
events are extracted in parallel in several
multiscale maps within a temporal window of size
T2; after normalization, a multi-agent
reinforcement system with evaluative feedback
runs a competition among the ON maps and among
the OFF maps to select the location and scale of
the most significant ON and OFF events.)
50
Parametric edge models
Parametric models are based on the idea that the
discrete image intensity function can be
considered a sampled and noisy approximation of
an underlying continuous or piecewise-continuous
image intensity function. Therefore we try to
model the image as a simple piecewise analytical
function, and the task becomes the reconstruction
of the individual piecewise functions. We try to
find simple functions which best approximate the
intensity values only in the local neighborhood
of each pixel. This approximation is called the
facet model (Haralick and Shapiro, 1992). These
functions, and not the pixel values, are used to
locate edges in the image. To provide an
edge-detection example, consider a bi-cubic
polynomial facet model.
51
Parametric edge models
The first derivative in the direction theta is
given by
The second directional derivative in the
direction theta is given by
We consider points only on the line in the
direction theta, with x0 = rho cos(theta) and
y0 = rho sin(theta).
52
Parametric edge models
  • There are many possibilities:
  • A = 0, f' = 2B rho + C, f'' = 2B,
    rho_l = -C/B, rho_s = 0
  • if B > 0 and f'' > 0: valley
  • if B < 0 and f'' < 0: ridge
  • if B = 0 and f'' = 0: plane