Title: Shape Classification Based on Skeleton Path Similarity
1. Shape Classification Based on Skeleton Path Similarity
- Xingwei Yang, Xiang Bai, Deguang Yu, and Longin Jan Latecki
2. Shape similarity
- The goal of shape similarity is to use shape information to recognize objects the way human perception does.
3. Two main kinds of methods
- Contour
  - Advantage: easy to implement and close to human perception.
  - Disadvantage: runs into problems on articulated shapes.
- Skeleton
  - Advantage: less sensitive to the distortions of articulated shapes than the contour.
  - Disadvantage: sensitive to contour noise.
4. The main content of this talk
- 1. Obtaining a skeleton that is insensitive to contour noise.
- 2. Introducing a skeleton representation.
- 3. Integrating the skeleton representation into a Bayesian classifier to recognize shapes.
- 4. Showing results and discussion.
5. Skeleton Pruning by Contour Partitioning with Discrete Curve Evolution
- 1.1 The need for skeleton pruning
6. Skeleton Pruning by Contour Partitioning with Discrete Curve Evolution
- 1.2 Using Discrete Curve Evolution (DCE) to simplify the contour. DCE repeatedly removes the vertex with the lowest relevance measure
  K(s1, s2) = β(s1, s2) · l(s1) · l(s2) / (l(s1) + l(s2)),
  where line segments s1, s2 are the polygon sides incident to a common vertex v, β(s1, s2) is the turn angle at the common vertex v, and l is the length function.
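The relevance measure above can be sketched in a few lines of Python. This is an illustration, not the authors' code: the polygon is treated as a closed list of (x, y) tuples, and vertices are deleted one at a time until a target size is reached.

```python
import math

def relevance(v_prev, v, v_next):
    """Relevance K(s1, s2) of vertex v for Discrete Curve Evolution.

    s1 = (v_prev, v) and s2 = (v, v_next) are the polygon sides
    incident to v; beta is the turn angle at v, l the side length:
    K(s1, s2) = beta(s1, s2) * l(s1) * l(s2) / (l(s1) + l(s2)).
    """
    l1 = math.dist(v_prev, v)
    l2 = math.dist(v, v_next)
    a1 = math.atan2(v[1] - v_prev[1], v[0] - v_prev[0])
    a2 = math.atan2(v_next[1] - v[1], v_next[0] - v[0])
    # wrap the direction change into (-pi, pi] and take its magnitude
    beta = abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
    return beta * l1 * l2 / (l1 + l2)

def dce_simplify(polygon, n_keep):
    """Repeatedly delete the vertex with the smallest relevance."""
    pts = list(polygon)
    while len(pts) > n_keep:
        n = len(pts)
        scores = [relevance(pts[i - 1], pts[i], pts[(i + 1) % n])
                  for i in range(n)]
        pts.pop(scores.index(min(scores)))
    return pts
```

Nearly collinear vertices have a small turn angle and therefore low relevance, so noise bumps on the contour are removed before true corners.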
7. Skeleton Pruning by Contour Partitioning with Discrete Curve Evolution
8. Skeleton Pruning by Contour Partitioning with Discrete Curve Evolution
9. Skeleton representation
- An endpoint of the skeleton graph is called an end node, and a junction point of the skeleton graph is called a junction node. The shortest path between a pair of end nodes on a skeleton graph is called a skeleton path. We show a few example skeleton paths.
10. Skeleton representation
11. Skeleton representation
- The shortest paths between every pair of skeleton
endpoints are represented as sequences of radii
of the maximal disks at corresponding skeleton
points. In our paper, the skeleton path is
sampled to 50 points.
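A minimal way to produce that fixed-length radius sequence, assuming the path is given as ordered skeleton points together with their maximal-disk radii (a hypothetical helper, not the authors' code), is to resample by arc length:

```python
import numpy as np

def path_radii(points, radii, n_samples=50):
    """Resample a skeleton path to a fixed-length radius sequence.

    points : (k, 2) array of skeleton-point coordinates along the path
    radii  : (k,) array of maximal-disk radii at those points
    Returns n_samples radii, linearly interpolated at equally spaced
    arc-length positions along the path.
    """
    points = np.asarray(points, dtype=float)
    radii = np.asarray(radii, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative length
    t = np.linspace(0.0, arc[-1], n_samples)       # equally spaced
    return np.interp(t, arc, radii)
```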
12. Implementing the skeleton into the Bayesian classifier
- The shape dissimilarity between two skeleton paths is called a path distance. The path distance pd between sp and sp′ is
  pd(sp, sp′) = Σ_i (ri − r′i)² / (ri + r′i),
  where ri (respectively r′i) is the radius of the i-th point on the skeleton path sp (respectively sp′).
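A sketch of such a path distance, assuming both paths are already resampled to equal-length radius sequences (the chi-square-like form above; the exact published formula may add further terms):

```python
import numpy as np

def path_distance(r, r_prime):
    """Path distance pd between two skeleton paths, each given as the
    sequence of maximal-disk radii at the sampled skeleton points.
    Assumes equal-length sequences (e.g. 50 samples each)."""
    r = np.asarray(r, dtype=float)
    rp = np.asarray(r_prime, dtype=float)
    # small epsilon guards against division by zero at radius 0
    return float(np.sum((r - rp) ** 2 / (r + rp + 1e-12)))
```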
13. Implementing the skeleton into the Bayesian classifier
- Given a shape S′ that should be classified by the Bayesian classifier, we build the skeleton graph G(S′) of S′ and input G(S′) as the query. For a skeleton graph G(S′) with n end nodes, the corresponding number of paths is n(n−1). The Bayesian classifier then computes the posterior probability of every class for each path sp′ ∈ G(S′). By accumulating the posterior probabilities over all of the paths of G(S′), the system automatically yields a ranking of class hypotheses.
14. Implementing the skeleton into the Bayesian classifier
- If two different paths have a small pd value, the probability should be large; otherwise, it should be small. Therefore, we use a Gaussian distribution to compute the probability p:
  p(sp′ | sp) = (1 / (√(2π) σ)) · exp(−pd(sp′, sp)² / (2σ²)).
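The required behavior (small pd maps to large probability and vice versa) can be checked with a one-line Gaussian; sigma is a free smoothing parameter whose value the slides do not state:

```python
import math

def path_probability(pd, sigma=1.0):
    """Turn a path distance into a similarity probability via a
    Gaussian: p is maximal at pd = 0 and decays monotonically as
    pd grows. sigma (an assumed free parameter) sets the fall-off."""
    return (math.exp(-(pd ** 2) / (2.0 * sigma ** 2))
            / (sigma * math.sqrt(2.0 * math.pi)))
```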
15. Implementing the skeleton into the Bayesian classifier
- The class-conditional probability for observing sp′, given that S′ belongs to class ci, is
  p(sp′ | ci) = Σ_{sp ∈ ci} p(sp′ | sp) p(sp | ci).
- We assume that all paths within a class path set are equiprobable, i.e. p(sp | ci) = 1/Ni for the Ni paths of class ci; therefore
  p(sp′ | ci) = (1/Ni) Σ_{sp ∈ ci} p(sp′ | sp),
  where ci is one of the M classes.
16. Implementing the skeleton into the Bayesian classifier
- The posterior probability of a class, given the path sp′ ∈ G(S′), is determined by Bayes' rule:
  p(ci | sp′) = p(sp′ | ci) p(ci) / p(sp′).
17. Implementing the skeleton into the Bayesian classifier
- Similar to the above assumption, p(ci) = 1/M. The probability of sp′ is then
  p(sp′) = Σ_{j=1..M} p(sp′ | cj) p(cj) = (1/M) Σ_{j=1..M} p(sp′ | cj).
18. Implementing the skeleton into the Bayesian classifier
- By summing the posterior probabilities of a class over the set of paths in the input shape, we obtain the probability that the input shape belongs to that class. The class with the largest accumulated probability, cm, is the class the input shape is assigned to:
  cm = argmax_i Σ_{sp′ ∈ G(S′)} p(ci | sp′).
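A toy end-to-end version of this decision rule (hypothetical names, not the authors' code; assumes every path is already a fixed-length radius sequence, uniform class priors p(ci) = 1/M, and equiprobable paths within each class):

```python
import math
from collections import defaultdict

def classify(query_paths, class_paths, sigma=1.0):
    """Rank classes by accumulated posterior over the query's paths.

    query_paths : list of radius sequences from the query skeleton
    class_paths : dict mapping class name -> list of stored radius
                  sequences for that class
    """
    def pd(r, rp):
        # chi-square-like path distance on equal-length sequences
        return sum((a - b) ** 2 / (a + b + 1e-12) for a, b in zip(r, rp))

    def likelihood(sp_q, paths):
        # p(sp'|ci): equiprobable paths, Gaussian similarity on pd
        return sum(math.exp(-pd(sp_q, sp) ** 2 / (2 * sigma ** 2))
                   for sp in paths) / len(paths)

    M = len(class_paths)
    score = defaultdict(float)
    for sp_q in query_paths:
        like = {c: likelihood(sp_q, ps) for c, ps in class_paths.items()}
        evidence = sum(like.values()) / M       # p(sp'), with p(ci) = 1/M
        for c in like:
            # Bayes: p(ci|sp') = p(sp'|ci) p(ci) / p(sp'); accumulate
            score[c] += (like[c] / M) / (evidence + 1e-300)
    # the class with the largest accumulated posterior wins
    return max(score, key=score.get)
```

Note the posterior for each path already sums to one over the classes, so the accumulated scores directly rank the class hypotheses.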
19. Experiments
- The table is composed of 14 rows and 9 columns. The first column gives the class of each row. Each row contains four experimental results belonging to the same class, and each result has two elements: the first is the query shape and the second is the classification result of our system. If the result is correct, it is equal to the first column of the row. The red numbers mark the wrong classes assigned to query objects. Since there is only one error in 56 classification results, the classification accuracy by this measure is 98.2%.
20. Experiments
21. Experiments
- The results of the proposed method
22. Experiments
- The results of Super and Sun's method