Title: Chamfer Distance
1. Lecture 15
- Chamfer Distance
- Intro to Object Recognition
CSE 4392/6367 Computer Vision, Spring 2009
Vassilis Athitsos, University of Texas at Arlington
2. Contours
- A contour is a curve/line (typically not
straight) that delineates the boundary of a
region, or between regions.
3. Shapes Without Texture
- Letters/numbers.
- Contours.
- Edge templates.
4. Detecting Shapes Without Texture
- Normalized correlation does not work well.
- Slight misalignments have a great impact on the
correlation score.
Images: star1, star3, and the two combined.
5. Chamfer Distance
- For each edge pixel in star1:
- How far is it from the nearest edge pixel in star3?
- The average of all those answers is the directed chamfer distance from star1 to star3.
9. Chamfer Distance
- For each edge pixel in star3:
- How far is it from the nearest edge pixel in star1?
- The average of all those answers is the directed chamfer distance from star3 to star1.
10. Directed Chamfer Distance
- Input: two sets of points, red and green.
- c(red, green): average distance from each red point to the nearest green point.
11. Directed Chamfer Distance
- Input: two sets of points, red and green.
- c(red, green): average distance from each red point to the nearest green point.
- c(green, red): average distance from each green point to the nearest red point.
12. Chamfer Distance
- Input: two sets of points, red and green.
- c(red, green): average distance from each red point to the nearest green point.
- c(green, red): average distance from each green point to the nearest red point.
- Chamfer distance: C(red, green) = c(red, green) + c(green, red).
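A minimal MATLAB sketch of this definition, assuming e1 and e2 are binary edge images of the same size (the helper name chamfer_distance is illustrative, not from the lecture):
function C = chamfer_distance(e1, e2)
% Directed distance c(e1, e2): average, over the edge pixels of e1,
% of the distance to the nearest edge pixel of e2.
d2 = bwdist(e2);                    % distance to nearest edge pixel of e2
c12 = sum(d2(e1 > 0)) / nnz(e1);    % c(e1, e2)
d1 = bwdist(e1);                    % distance to nearest edge pixel of e1
c21 = sum(d1(e2 > 0)) / nnz(e2);    % c(e2, e1)
C = c12 + c21;                      % undirected chamfer distance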
13. Chamfer Distance
- On the two stars: 31 pixels are nonzero in both images.
- On the star and the crescent: 33 pixels are nonzero in both images.
- Correlation scores can be misleading.
14. Chamfer Distance
- Chamfer distance is much smaller between the two
stars than between the star and the crescent.
15. Detecting Hands
Images: template, input image.
- Problem: hands are highly deformable.
- Normalized correlation does not work as well.
- Alternative: use edges.
16. Detecting Hands
Images: template, window.
- Compute the chamfer distance, at all windows and all scales, with the template.
- Which version? Directed or undirected?
- We want a small distance for the correct window, and a large distance for incorrect windows.
17. Direction Matters
Image: window.
- Chamfer distance from window to template: problems?
18. Direction Matters
Image: window.
- Chamfer distance from window to template: problems?
- Clutter (edges not belonging to the hand) causes the distance to be high.
19. Direction Matters
Image: window.
- Chamfer distance from template to window: problems?
20. Direction Matters
Image: window.
- Chamfer distance from template to window: problems?
- What happens when comparing to a window with lots of edges?
21. Direction Matters
Image: window.
- Chamfer distance from template to window: problems?
- What happens when comparing to a window with lots of edges? The score is low.
22. Choice of Direction
Image: window.
- For detection, we compute the chamfer distance from template to window.
- Being robust to clutter is a big plus; it ensures the correct results will be included.
- Incorrect detections can be discarded with additional checks.
23. Computing the Chamfer Distance
- Compute the chamfer distance, at all windows and all scales, with the template.
- This can be very time consuming.
24. Distance Transform
Images: edge image e1, distance transform d1.
- For every pixel, compute the distance to the nearest edge pixel:
d1 = bwdist(e1)
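A tiny worked example of bwdist (Image Processing Toolbox): with a single edge pixel in the center of a 3x3 image, every other pixel gets its Euclidean distance to that pixel.
e1 = [0 0 0; 0 1 0; 0 0 0];
d1 = bwdist(e1)
% d1 =
%     1.4142    1.0000    1.4142
%     1.0000         0    1.0000
%     1.4142    1.0000    1.4142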
25. Distance Transform
Images: template t1, edge image e1, distance transform d1.
- If template t1 is of size (r, c):
- The chamfer distance with a window (i:(i+r-1), j:(j+c-1)) of e1 can be written as:
26. Distance Transform
Images: template t1, edge image e1, distance transform d1.
- If template t1 is of size (r, c), the chamfer distance with a window (i:(i+r-1), j:(j+c-1)) of e1 can be written as:
window = d1(i:(i+r-1), j:(j+c-1))
sum(sum(t1 .* window))
- Computing an image of chamfer scores for one scale:
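One way to spell this out is a sketch with an explicit loop over window positions (note that sum(sum(t1 .* window)) is the total distance over the template's edge pixels; dividing by the number of edge pixels of t1 gives the average, and since that count is constant for a fixed template it does not change which window scores best):
[r, c] = size(t1);
[rows, cols] = size(d1);
scores = zeros(rows - r + 1, cols - c + 1);
for i = 1:(rows - r + 1)
    for j = 1:(cols - c + 1)
        window = d1(i:(i+r-1), j:(j+c-1));
        scores(i, j) = sum(sum(t1 .* window));    % total distance for this window
    end
end
scores = scores / nnz(t1);    % directed chamfer distance (template to window)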
28. Distance Transform
Images: template t1, edge image e1, distance transform d1.
- Computing an image of chamfer scores for one scale s:
resized = imresize(image, s, 'bilinear')
resized_edges = canny(resized, 7)
resized_dt = bwdist(resized_edges)
chamfer_scores = imfilter(resized_dt, t1, 'symmetric')
figure(3)
imshow(chamfer_scores, [])
- How long does that take? Can it be more efficient?
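A sketch of wrapping this in a loop over scales to find the best detection across the whole search (the scale list and variable names such as input_image are illustrative; canny here is the course's edge detector, not a MATLAB built-in):
best_score = inf;
for s = [0.5 0.75 1.0 1.25 1.5]                  % example scales
    resized = imresize(input_image, s, 'bilinear');
    resized_edges = canny(resized, 7);
    resized_dt = bwdist(resized_edges);
    chamfer_scores = imfilter(resized_dt, t1, 'symmetric');
    [score, index] = min(chamfer_scores(:));     % best window at this scale
    if score < best_score
        best_score = score;
        [best_row, best_col] = ind2sub(size(chamfer_scores), index);
        best_scale = s;
    end
end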
29. Improving Efficiency
Images: template t1, edge image e1, distance transform d1.
- Which parts of the template contribute to the score of each window?
30. Improving Efficiency
Images: template t1, edge image e1, distance transform d1.
- Which parts of the template contribute to the score of each window?
- Just the nonzero parts.
- How can we use that?
32. Improving Efficiency
Images: template t1, edge image e1, distance transform d1.
- Which parts of the template contribute to the score of each window? Just the nonzero parts.
- How can we use that?
- Compute a list of the nonzero pixels in the template.
- Consider only those pixels when computing the sum for each window, as in the sketch below.
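A minimal sketch of this idea, assuming t1 is the binary edge template and d1 is the distance transform of the edge image (variable names are illustrative):
[ti, tj] = find(t1);            % offsets of the template's edge pixels, computed once
n = numel(ti);
[rows, cols] = size(d1);
[r, c] = size(t1);
scores = zeros(rows - r + 1, cols - c + 1);
for i = 1:(rows - r + 1)
    for j = 1:(cols - c + 1)
        total = 0;
        for k = 1:n                 % visit only the template's edge pixels
            total = total + d1(i + ti(k) - 1, j + tj(k) - 1);
        end
        scores(i, j) = total / n;   % directed chamfer distance for this window
    end
end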
33. Results for Single Scale Search
- What is causing the false result?
34. Results for Single Scale Search
- What is causing the false result?
- Window with lots of edges.
- How can we refine these results?
35. Results for Single Scale Search
- What is causing the false result?
- Window with lots of edges.
- How can we refine these results?
- Skin color, or background subtraction.
36. Object Recognition
- Typically, recognition is applied after objects have been detected.
- Detection: where is the object?
- Recognition: what is the object?
- Note that, to detect an object, we already need to know what type of object it is.
- Recognition further refines the type.
37. Examples
- Faces:
- Detection: where is the face?
- Recognition: what person is it?
38. Examples
- Hands:
- Where is the hand?
- What is the shape and orientation of the hand?
39. Examples
- Letters and numbers:
- Detection: where is the number?
- Recognition: what number is it?
40. Recognition Steps
- Training phase: build models of each class.
- Test phase: find the model that best matches the image.
41. Nearest Neighbor Classification
- We are given a database of training examples, whose class labels are known.
- Given a test image, we find its nearest neighbor in the database and assign that neighbor's class label.
- We need to choose a distance measure.
43. Choosing a Distance Measure
- Based on material we have covered so far, what
distance measures can we use?
44. Choosing a Distance Measure
- Based on material we have covered so far, what distance measures can we use?
- Euclidean distance.
- Chamfer distance (directed or undirected?)
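A minimal sketch of nearest neighbor classification with the chamfer distance as the measure, assuming train_edges is a cell array of training edge images with labels train_labels, test_edges is the edge image of the test example, and chamfer_distance is the illustrative helper sketched earlier:
best_distance = inf;
predicted_label = [];
for k = 1:numel(train_edges)
    d = chamfer_distance(test_edges, train_edges{k});    % distance to training example k
    if d < best_distance
        best_distance = d;
        predicted_label = train_labels{k};               % label of the nearest neighbor so far
    end
end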