HIGH LEVEL PROCESSING
Transcript and Presenter's Notes

Title: HIGH LEVEL PROCESSING


1
  • HIGH LEVEL PROCESSING
  • After you have improved the quality of an image,
    and identified object boundaries, you are ready
    to embark on your ultimate objective of object
    recognition.
  • This is not a one-step process: you still have
    to carry out some further processing steps --
    known as high level processing -- before
    considering object identification.
  • One simple way to identify the shape of an object
    is to generate information about edges. Another
    way which can be very effective in some
    applications is to work with silhouettes.
  • This is not fundamentally different from an edge
    description - here an assumption is made that
    overall shape

2
  • HIGH LEVEL PROCESSING
characteristics - rather than fine detail or
    subtle shade variations - are of principal
    importance.
  • Typical characteristic could be surface area or
    perimeter length.
  • This method assumes that an object is defined by
    a roughly uniform surface rather than by
    object/background discontinuities at edges or
    other surface locations.
  • This type of segmentation can be easily
    accomplished by applying a thresholding
    procedure to the raw gray scale image.
  • This works well for simple images, which are
    characterized by uniform lighting and little or
    no shading.

3
  • HIGH LEVEL PROCESSING
  • Contour Tracing ( for edge information inside and
    on the boundary)
  • There may be situations where there is a need to
    generate edge information (not just the boundary)
    that can be processed by computer - edge
    descriptions such as length, direction, shape and
    contour - and sometimes holes in an image.
  • We have to develop a procedure that will generate
    edge description from a silhouette or region
    based description.
  • The principle of this approach is to find any
    point on the edge of an object and then, by
    utilizing local information, track around
    successive points on the edge (marking each point
    traversed) until the starting point is reached.
  • ? ? ? ? ? ?
  • ? ?
  • ? ? ? ? ? ?
  • The procedure is simple.

4
(figure: silhouette with a boundary point p marked)
5
(figures: silhouette of an image, and the outline
obtained by applying the contour tracing algorithm)
6
  • HIGH LEVEL PROCESSING
  • Contour Tracing

Start: locate a point Pi on the object boundary.
Turn through 90 deg counter-clockwise and check the
new pixel in the new direction.
If the new point is inside the boundary, mark it and
turn through 90 deg counter-clockwise again;
otherwise turn through 90 deg clockwise.
Check the new pixel in the new direction and repeat
until Pi is regained, then STOP.
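The flowchart describes the classic square-tracing procedure. A minimal Python sketch is given below; the binary-image representation, the row-major search for the starting point, and the stopping test on a repeated position/heading state are assumptions not fixed by the slides, and square tracing is reliable for 4-connected objects:

```python
def square_trace(img):
    """Trace the boundary of a binary image (1 = object, 0 = background)
    by turning 90 deg counter-clockwise when inside the boundary and
    90 deg clockwise when outside, marking each boundary pixel visited."""
    rows, cols = len(img), len(img[0])

    def inside(p):
        r, c = p
        return 0 <= r < rows and 0 <= c < cols and img[r][c] == 1

    # Locate a starting point Pi on the object boundary (row-major scan).
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if img[r][c] == 1)
    contour = [start]
    seen = {start}
    # The start pixel is inside, so turn counter-clockwise and step.
    d = (1, 0)
    p = (start[0] + d[0], start[1] + d[1])
    p0, d0 = p, d          # stop when this (position, heading) state recurs
    while True:
        if inside(p):
            if p not in seen:        # mark each boundary pixel traversed
                seen.add(p)
                contour.append(p)
            d = (-d[1], d[0])        # inside: turn 90 deg counter-clockwise
        else:
            d = (d[1], -d[0])        # outside: turn 90 deg clockwise
        p = (p[0] + d[0], p[1] + d[1])
        if (p, d) == (p0, d0):
            return contour

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(square_trace(img))   # [(1, 1), (2, 1), (2, 2), (1, 2)]
```

The repeated-state stopping test is slightly stronger than "Is Pi regained?": it prevents an early stop on shapes where the trace passes the start point before the whole boundary has been covered.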
7
  • HIGH LEVEL PROCESSING
  • Region Oriented Segmentation
  • The objective of the segmentation is to partition
    an image into regions based on different
    intensities.
  • The problem is approached by finding boundaries
    between regions based on intensity
    discontinuities.
  • We perform segmentation of an image into distinct
    regions (within each region, pixel points are
    assumed to be connected in some way).
  • Let R represent the entire region. The
    segmentation process partitions the region R into
    R1, R2, ..., Rn, such that each region Ri is
    connected and uniform in terms of intensity
    values.
  • By this process, we are able to identify
    individual objects by merging the regions having
    similar intensity values.
  • The simple process can be described as
  • A first pass scans the raw gray scale image and
    identifies primary regions by connecting together
    groups of pixel points of constant gray scale
    intensity value.

8
(No Transcript)
9
  • HIGH LEVEL PROCESSING
  • Region Segmentation
  • A second pass through the image now works with
    these primary regions and examines boundaries
    between these regions already identified.
  • Depending on differences in intensity values
    between primary regions, these regions may be
    merged to give wider ranging regions or may be
    left as isolated pixel groups. ( see figure next
    page )
  • In this case the segmentation had identified what
    might easily be interpreted as two objects - a
    large object with a smaller object by its side.
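The two passes can be sketched as follows. This is a minimal illustration assuming 4-connectivity and plain lists of lists for images; the `tol` and `diff` thresholds are hypothetical parameters, and the flood fill stands in for whatever scan order the slides intend:

```python
from collections import deque

def label_regions(img, tol):
    """First pass: label 4-connected groups of pixels whose intensity
    stays within `tol` of the group's seed value."""
    rows, cols = len(img), len(img[0])
    labels = [[-1] * cols for _ in range(rows)]
    seeds = []                       # seed intensity per label
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] != -1:
                continue
            seed, lab = img[r][c], len(seeds)
            seeds.append(seed)
            labels[r][c] = lab
            q = deque([(r, c)])
            while q:                 # flood fill the primary region
                y, x = q.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] == -1
                            and abs(img[ny][nx] - seed) <= tol):
                        labels[ny][nx] = lab
                        q.append((ny, nx))
    return labels, seeds

def merge_regions(labels, seeds, diff):
    """Second pass: merge neighbouring primary regions whose intensities
    differ by less than `diff` (union-find over the labels)."""
    parent = list(range(len(seeds)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    rows, cols = len(labels), len(labels[0])
    for r in range(rows):
        for c in range(cols):
            for nr, nc in ((r + 1, c), (r, c + 1)):
                if nr < rows and nc < cols:
                    a, b = find(labels[r][c]), find(labels[nr][nc])
                    if a != b and abs(seeds[a] - seeds[b]) < diff:
                        parent[b] = a
    return [[find(l) for l in row] for row in labels]
```

With `diff` chosen suitably, two adjacent regions of slightly different intensity merge into one object while remaining distinct from the background.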

10
(No Transcript)
11
A practical image using region segmentation
approach is shown below
12
  • HIGH LEVEL PROCESSING
  • Object Representation and Coding
  • Since an object of interest in an NxN image may
    occupy only a small number of pixels ( typically
    32x32 or 64x64 pixels ), it would be really
    wasteful to process the entire image.
  • Successive manipulations on an image require
    time consuming and complex computations.
    Therefore, it is useful to consider more
    appropriate means of encoding the image data.
  • There exists a very useful technique for object
    representation - chain coding - which gives a
    compact description of the object of interest.
  • Chain encoding employs the principle of finding
    an arbitrary initial point on the image boundary.
    This point is given spatial coordinates. This
    starting point is useful for subsequent image
    encoding.

13
  • HIGH LEVEL PROCESSING
  • Object Representation and Coding
  • Subsequently, only the directional changes are
    recorded and stored. This procedure is continued
    along the edges until the original point
    (starting point) is regained.
  • To record directional changes, the following
    directions are defined.

14
  • HIGH LEVEL PROCESSING
  • Object Representation and Coding
  • See figure on the next transparency - with the
    starting point marked.
  • The chain code is defined as follows
  • 1 1 0 0 0 0 6 0 6 6 6 4 6 4 4 4 4 3 3 2 ( derived
    from the table on the next page).
  • It is very clear that such coding scheme can
    significantly reduce the storage requirements for
    many images of practical interest.
  • The chain code technique has an advantage beyond
    these storage considerations. For example, it may
    be necessary to obtain measurements of
    identifying features that describe some physical
    characteristics of an object.
  • For example, we may wish to measure the area or
    perimeter of an object. Chain coding allows
    computation of some basic physical properties
    directly, without the necessity to reconvert the
    code into the NxN pixel format.

15
(No Transcript)
16
(No Transcript)
17
  • HIGH LEVEL PROCESSING
  • Object Representation and Coding
  • Some of the physical parameters are defined as
    perimeter length, width and area.
  • The perimeter length of codes 0,2,4,6 is 1 and
    that of codes 1,3,5,7 is sqrt(2).
  • The total perimeter of an object is
    P = SE + sqrt(2) x SO, where SE and SO are the
    numbers of even and odd codes. Here
    P = 16 + sqrt(2) x 4 = 16 + 5.66 = 21.66 units.
  • Area Measurement object area is obtained by
    direct computation from the chain code,
    evaluating the elemental contributions of several
    vertical strips bounded by the current
    directional transition path between two
    successive edge pixels and the x-axis.
  • Increments are additive, subtractive or of zero
    contributions according to following rules.

18
Additive directions: 0, 1, 7 (positive x component)
Subtractive directions: 3, 4, 5 (negative x
component)
Neutral directions: 2, 6 (vertical steps, zero
contribution)
19
(No Transcript)
20
  • HIGH LEVEL PROCESSING
  • Object Representation and Coding

        3   2   1
        4   .   0
        5   6   7
(the eight chain code directions: 0 = east, 2 =
north, 4 = west, 6 = south; the odd codes are the
diagonals)
21
  • HIGH LEVEL PROCESSING
  • Object Representation and Coding
  • A step in direction 0 adds 1 x y units of area.
  • A step in direction 5 is subtractive,
    contributing -1 x (y - 0.5).
  • A step in direction 2, which follows a vertical
    path, contributes a zero component to the area
    calculation.
  • Applying these principles to the complete set of
    possible directions generates the table of
    contributions.
  • The total area of the figure shown will be
  • A = 5.5 + 6.5 + 7 + 7 + 7 + 7 + 0 + 6 + 0 + 0 +
    0 - 3 + 0 - 2 - 2 - 2 - 2 - 2.5 - 3.5 + 0 = 29
    square units.
  • This can be checked by the reference diagram.
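Both the perimeter and the area can be computed directly from the chain code, as described above. In this minimal sketch, the starting height `y0 = 5` is an assumption chosen so the vertical-strip contributions match the slide's worked example:

```python
import math

# (dx, dy) for the eight chain code directions, 0 = east, counter-clockwise.
STEPS = {0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (-1, 1),
         4: (-1, 0), 5: (-1, -1), 6: (0, -1), 7: (1, -1)}

def perimeter(chain):
    """Even codes contribute 1 unit, odd (diagonal) codes sqrt(2)."""
    return sum(1 if c % 2 == 0 else math.sqrt(2) for c in chain)

def area(chain, y0):
    """Signed area from vertical strips: each step contributes
    dx * (y + dy / 2), where y is the height before the step."""
    total, y = 0.0, y0
    for c in chain:
        dx, dy = STEPS[c]
        total += dx * (y + dy / 2)
        y += dy
    return total

code = [1, 1, 0, 0, 0, 0, 6, 0, 6, 6, 6, 4, 6, 4, 4, 4, 4, 3, 3, 2]
print(round(perimeter(code), 2))   # 21.66
print(area(code, y0=5))            # 29.0
```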

22
(No Transcript)
23
  • HIGH LEVEL PROCESSING
  • Object Width
  • The maximum width of an object is measured along
    the horizontal direction, and can be obtained by
    a similar principle, by counting incremental
    steps in the x direction (+ve or -ve only).
  • For the given object the width is positive and
    is 7 pixels.
  • Similarly vertical shift or height can also be
    determined.
  • Differential Chain code
  • It is generated using the chain code by taking
    the difference in the direction ( counter
    clockwise ) of two subsequent chain codes. This
    is useful in getting rid of the rotational
    effects in an image.
  • Example get the differential chain code for the
    4-direction chain code 1 0 3 3 2 1 0. Verify the
    DCC as 1 3 3 0 3 3 3 (differences taken modulo 4,
    treating the code as circular).
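A minimal sketch of the differential chain code follows; the slide's example uses a 4-direction code, so the difference is taken modulo 4 (for the 8-direction scheme pass `directions=8`):

```python
def differential_chain_code(chain, directions=4):
    """Counter-clockwise difference of successive codes, modulo the
    number of directions; the code is treated as circular, so the
    first entry is differenced against the last."""
    return [(chain[i] - chain[i - 1]) % directions
            for i in range(len(chain))]

print(differential_chain_code([1, 0, 3, 3, 2, 1, 0]))
# [1, 3, 3, 0, 3, 3, 3]
```

Because rotating the object shifts every chain code by the same constant, the differences are unchanged, which is how the DCC removes rotational effects.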

24
(No Transcript)
25
  • HIGH LEVEL PROCESSING
  • Noise Reduction Algorithm

26
Start: get the Pth pixel and let N = 1.
If the value of P is 0, move to the next pixel.
Otherwise test the Nth neighbour: if its value is
not 0, keep P and move to the next pixel; if it is
0, set N = N + 1.
If N = 8 (all eight neighbours are 0), change P to
0, then move to the next pixel.
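The flowchart amounts to deleting isolated pixels. A minimal sketch, assuming a binary image stored as a list of lists (border pixels are tested against only the neighbours that exist, a detail the flowchart leaves open):

```python
def remove_isolated_pixels(img):
    """Set a non-zero pixel P to 0 when all 8 of its neighbours are 0
    (the isolated-pixel noise test from the flowchart)."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]      # don't modify pixels mid-scan
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 0:
                continue               # "Is value of P = 0?" -> next pixel
            neighbours = [img[nr][nc]
                          for nr in (r - 1, r, r + 1)
                          for nc in (c - 1, c, c + 1)
                          if (nr, nc) != (r, c)
                          and 0 <= nr < rows and 0 <= nc < cols]
            if all(v == 0 for v in neighbours):
                out[r][c] = 0          # all neighbours zero: delete P
    return out
```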
27
  • Feature Selection For Object Recognition
  • For the recognition of an object, you would
    require certain features from the image under
    test and try to match them with the reference
    image features.
  • These features must be simple and not very
    computationally intensive.
  • Following are a few of the possible features.
    This list of features is by no means complete.
    These are a few of the common features.
  • Texture Based Features
  • A high pass filter shows high frequency
    components. A lot of high frequency components
    implies a coarse texture. If the image does not
    contain high frequencies, then the texture is
    smooth.
  • A Fourier transform operation can also be used,
    which directly

28
  • gives high frequency components.
  • Sometimes it is possible to look at the image
    brightness variations and get an idea about high
    frequency components.
  • If there are high frequency components, the
    image contains rapid brightness variations.
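As one illustration of such a texture feature (this particular measure is an assumption, not taken from the slides), the mean squared difference between adjacent pixels acts as a crude high-pass energy estimate:

```python
def high_frequency_energy(img):
    """Crude high-pass texture measure: mean squared difference between
    horizontally and vertically adjacent pixels. Large values suggest a
    coarse (busy) texture, values near zero a smooth one."""
    rows, cols = len(img), len(img[0])
    total, count = 0.0, 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:                         # horizontal neighbour
                total += (img[r][c + 1] - img[r][c]) ** 2
                count += 1
            if r + 1 < rows:                         # vertical neighbour
                total += (img[r + 1][c] - img[r][c]) ** 2
                count += 1
    return total / count

smooth = [[10, 10], [10, 10]]
busy = [[0, 255], [255, 0]]
print(high_frequency_energy(smooth))   # 0.0
print(high_frequency_energy(busy))     # 65025.0
```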
  • Brightness Value and Color Features
  • For the brightness value in an image, the
    histogram is a good place to start. Histogram
    shows brightness distribution of an image.
  • For color objects, RGB component pixel values of
    an object can be converted to HSI color space.
    Then, examination of the histogram of the hue
    component of the image will show predominant
    color values of the object.
  • For color sorting based applications, this
    feature alone will be enough to classify an
    object correctly.
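A hue-histogram sketch using the standard library's `colorsys` (which provides HSV rather than HSI; for finding the predominant hue the two are close, and the bin count and saturation/value cut-offs are illustrative assumptions):

```python
import colorsys
from collections import Counter

def hue_histogram(pixels, bins=12):
    """Histogram of hue for a list of (R, G, B) pixels (0-255 each).
    Near-grey and very dark pixels carry little hue information and
    are skipped."""
    hist = Counter()
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if s > 0.1 and v > 0.1:
            hist[int(h * bins) % bins] += 1
    return hist

# Mostly-red object: the dominant bin is the one containing hue 0.
pixels = [(200, 30, 30)] * 8 + [(30, 200, 30)] * 2
print(hue_histogram(pixels).most_common(1))   # [(0, 8)]
```

For a colour-sorting application, comparing the dominant bin against reference hues may be all the classification that is needed, as the slide suggests.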

29
  • Statistical values of the brightness of an
    object can also be very useful feature measures.
    The mean brightness together with the standard
    deviation can give good features for an object.
  • Mode brightness - the most frequently occurring
    brightness value - is another common feature.
  • A further widely used feature is the sum of all
    image pixel brightnesses, which corresponds to
    the energy content of an object. This feature is
    called the zero-order spatial moment.
  • These features are very useful in object
    recognition problems. However, it should be noted
    that a single feature is never enough for
    reliable object identification. As a rule of
    thumb, a combination of at least 2-3 features
    is suggested.
  • Shape Measure
  • Shape measurements reflect physical dimensional
    measures that can characterize the appearance of
    an object. As such the list of shape measures can
    be very long.

30
  • Any particular image analysis application will
    make use of a few of the shape features (
    descriptors).
  • Some of these measures are
  • Perimeter The number of pixels around the
    circumference of the object. You have to use the
    distance concept to measure the perimeter.
    Euclidean distance is normally used to calculate
    this feature.
  • Area Represents the pixel area of the interior
    of the object. The area is computed as the total
    number of pixels inside and including the object
    boundary. The result is a measure of the size of
    the object.
  • This measure normally excludes the holes inside
    the object boundary.
  • Area to Perimeter Ratio A ratio based on the
    area and the perimeter squared. It is an
    excellent measure. The result is a measure of
    object roundness or compactness.

31
  • This value is between 0 and 1. The greater the
    ratio, the rounder the object.
  • R = 4 x pi x area / perimeter^2
  • Major Axis The (x,y) end points of the longest
    line that can be drawn inside ( through ) the
    object.
  • The major axis endpoints (x1,y1) and (x2,y2) are
    found by computing the pixel distance between
    every combination of border pixel points and
    finding the pair with longest distance.
  • Major Axis Length This is the pixel distance
    between the major axis endpoints. It is defined
    as
  • length = SQRT( (x2-x1)^2 + (y2-y1)^2 )
  • Major Axis Angle It is the angle between the
    major axis and the horizontal axis. This can be
    from 0 to 360 degrees.
  • angle = tan^-1( (y2-y1)/(x2-x1) )
  • where (x1,y1) and (x2,y2) are major axis endpoints

(figure: major axis from (x1,y1) to (x2,y2), at an
angle to the horizontal)
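The brute-force endpoint search and the compactness ratio above can be sketched as follows (the function names and the sample border are illustrative):

```python
import math
from itertools import combinations

def major_axis(border):
    """Endpoints, length and angle of the major axis: the farthest pair
    among all border points, found by brute-force pairwise distances."""
    (x1, y1), (x2, y2) = max(
        combinations(border, 2),
        key=lambda p: (p[0][0] - p[1][0]) ** 2 + (p[0][1] - p[1][1]) ** 2)
    length = math.hypot(x2 - x1, y2 - y1)
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360
    return (x1, y1), (x2, y2), length, angle

def compactness(area, perimeter):
    """R = 4 * pi * area / perimeter**2: 1 for a circle,
    smaller for elongated shapes."""
    return 4 * math.pi * area / perimeter ** 2

border = [(0, 0), (4, 0), (4, 3), (0, 3)]
p1, p2, length, angle = major_axis(border)
print(length)                            # 5.0 (the 3-4-5 diagonal)
print(round(compactness(12, 14), 3))     # 0.769
```

The pairwise search is O(n^2) in the number of border points, which is acceptable for the small objects (32x32 or 64x64) mentioned earlier.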
32
  • Minor Axis The minor axis is perpendicular to
    the major axis. The minor axis endpoints (x3,y3)
    and (x4,y4) are found by computing the pixel
    distance between border pixel pairs along this
    perpendicular direction.
  • If there is a choice, the largest distance must
    be selected.
  • (x3,y3) and (x4,y4) are the end points on the
    boundary of the image.
  • Minor Axis Width It is the distance between
    minor axis endpoints. The result is a measure of
    object width.
  • minor axis width = SQRT( (x4-x3)^2 + (y4-y3)^2 )
  • Ratio of Minor Axis Width to Major Axis Length
    This ratio is

(figure: minor axis from (x3,y3) to (x4,y4),
perpendicular to the major axis)
33
  • computed as the minor axis width distance
    divided by the major axis length. The result is a
    measure of elongation. The value of this ratio is
    between 0 and 1.
  • Bounding Box Area The smallest rectangular area
    that would surround the object. It is defined by
    a rectangle formed from the minor axis width and
    the major axis length.
  • BBA = major axis length x minor axis width
  • Other features could include
  • Number of holes inside the object
  • Total number of connected objects
  • Center of gravity
  • Chain codes ( commonly used) and DCCs
  • Fourier descriptors ( very commonly used)
  • Object Signature ( very commonly used )

34
  • It is also suggested to use relative values of
    the angles and normalized distances. This makes
    these features invariant to scaling and
    rotation ( ratios are ideal feature measures).