Title: Pattern Recognition: Statistical and Neural
Slide 1: Nanjing University of Science and Technology
Pattern Recognition: Statistical and Neural
Lonnie C. Ludeman, Lecture 12, Sept 30, 2005
Slide 2: Lecture 12 Topics
1. M-Class Case and Gaussian Review
2. M-Class Case in Likelihood Ratio Space
3. Example: Vector Observation, M-Class
Slide 3: MPE and MAP Decision Rule, M-Class Case
For observed x, select class Ck if

    p(x | Ck) P(Ck) > p(x | Cj) P(Cj)   for all j = 1, 2, ..., M, j ≠ k

If equality holds, decide x among the boundary classes by random choice.
Review 1
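The MPE/MAP rule above can be sketched in code: score each class by p(x | Ck) P(Ck) and pick the largest. The two 1-D Gaussian class densities and the priors below are hypothetical illustration values, not from the slides.

```python
# Sketch of the MPE/MAP rule: select the class Ck maximizing p(x|Ck) P(Ck).
# Densities and priors are hypothetical; ties are broken by lowest index
# here (the slide prescribes a random choice among boundary classes).
import math

def gauss_pdf(x, mean, var):
    """1-D Gaussian density N(mean, var) evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

densities = [lambda x: gauss_pdf(x, 0.0, 1.0),   # p(x|C1), hypothetical
             lambda x: gauss_pdf(x, 2.0, 1.0)]   # p(x|C2), hypothetical
priors = [0.5, 0.5]                              # P(C1), P(C2)

def map_decide(x):
    """Return the class label k (1-based) with the largest p(x|Ck) P(Ck)."""
    scores = [p(x) * pr for p, pr in zip(densities, priors)]
    return scores.index(max(scores)) + 1
```

With equal priors and equal variances the boundary sits midway between the means, so observations nearer 0 go to C1 and those nearer 2 go to C2.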
Slide 4: Bayes Decision Rule, M-Class Case

    yi(x) = Σ_{j=1}^{M} Cij p(x | Cj) P(Cj)

If yi(x) < yj(x) for all j ≠ i, then decide x is from Ci.
Review 2
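The Bayes rule above minimizes conditional risk: yi(x) sums the costs Cij of deciding Ci weighted by how likely each true class Cj is. A minimal sketch, with hypothetical costs, densities, and priors:

```python
# Sketch of the Bayes (minimum-risk) rule: compute
# y_i(x) = sum_j C_ij p(x|Cj) P(Cj) and decide the class with the smallest.
# All parameter values here are hypothetical illustrations.
import math

def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

densities = [lambda x: gauss_pdf(x, -1.0, 1.0),  # p(x|C1), hypothetical
             lambda x: gauss_pdf(x, 1.0, 1.0)]   # p(x|C2), hypothetical
priors = [0.5, 0.5]
# C[i][j]: cost of deciding Ci when the true class is Cj (zero diagonal)
C = [[0.0, 1.0],
     [2.0, 0.0]]

def bayes_decide(x):
    risks = [sum(C[i][j] * densities[j](x) * priors[j] for j in range(2))
             for i in range(2)]
    return risks.index(min(risks)) + 1
```

Because deciding C2 wrongly costs twice as much here, the boundary shifts toward the C2 mean relative to the equal-cost case.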
Slide 5: Optimum Decision Rule, 2-Class Gaussian
Quadratic processing: decide C1 if

    -(x - M1)^T K1^{-1} (x - M1) + (x - M2)^T K2^{-1} (x - M2) > T1

and C2 if less, where

    T1 = 2 ln T + ln|K1| - ln|K2|

and T is the optimum threshold for the type of performance measure used.
Review 3
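The quadratic rule above can be sketched directly. The means, covariances, and threshold T below are hypothetical; T = 1 corresponds to the MPE threshold with equal priors.

```python
# Sketch of the quadratic 2-class Gaussian rule: decide C1 when
# -(x-M1)^T K1^{-1} (x-M1) + (x-M2)^T K2^{-1} (x-M2) > T1,
# with T1 = 2 ln T + ln|K1| - ln|K2|.  Parameters are hypothetical.
import numpy as np

M1 = np.array([0.0, 0.0])
M2 = np.array([2.0, 2.0])
K1 = np.array([[1.0, 0.0], [0.0, 1.0]])
K2 = np.array([[2.0, 0.0], [0.0, 2.0]])
T = 1.0   # e.g. MPE threshold with equal priors

def quad_decide(x):
    x = np.asarray(x, dtype=float)
    q = (-(x - M1) @ np.linalg.inv(K1) @ (x - M1)
         + (x - M2) @ np.linalg.inv(K2) @ (x - M2))
    T1 = 2 * np.log(T) + np.log(np.linalg.det(K1)) - np.log(np.linalg.det(K2))
    return 1 if q > T1 else 2
```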
Slide 6: 2-Class Gaussian, Special Case 1: K1 = K2 = K
Equal covariance matrices. Linear processing: decide C1 if

    (M1 - M2)^T K^{-1} x > T2

and C2 if less, where

    T2 = ln T + ½ (M1^T K^{-1} M1 - M2^T K^{-1} M2)

and T is the optimum threshold for the type of performance measure used.
Review 4
Slide 7: 2-Class Gaussian, Special Case 2: K1 = K2 = K = s² I
Equal scaled-identity covariance matrices. Linear processing: decide C1 if

    (M1 - M2)^T x > T3

and C2 if less, where

    T3 = s² ln T + ½ (M1^T M1 - M2^T M2)

and T is the optimum threshold for the type of performance measure used.
Review 5
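The scaled-identity case reduces the whole decision to one dot product against a scalar threshold. A minimal sketch with hypothetical means, variance, and threshold:

```python
# Sketch of the K = s^2 I case: compare (M1 - M2)^T x against
# T3 = s^2 ln T + 0.5 (M1^T M1 - M2^T M2).  Parameters are hypothetical.
import numpy as np

M1 = np.array([1.0, 1.0])
M2 = np.array([-1.0, -1.0])
s2 = 1.0   # common variance s^2
T = 1.0    # MPE threshold with equal priors

def linear_decide(x):
    x = np.asarray(x, dtype=float)
    stat = (M1 - M2) @ x                              # dot-product statistic
    T3 = s2 * np.log(T) + 0.5 * (M1 @ M1 - M2 @ M2)   # scalar threshold
    return 1 if stat > T3 else 2
```

With T = 1 and means of equal norm, T3 = 0 and the rule is exactly the minimum-distance (nearest-mean) classifier.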
Slide 8: M-Class General Gaussian (continued)
Equivalent statistic Qj(x) for j = 1, 2, ..., M: select class Cj if Qj(x) is MINIMUM, where

    Qj(x) = (x - Mj)^T Kj^{-1} (x - Mj) - 2 ln P(Cj) + ln|Kj|
          = d²MAH(x, Mj) + Bias

a quadratic operation on the observation vector x.
Review 6
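The minimum-Qj rule above (squared Mahalanobis distance plus a bias from the prior and determinant) can be sketched for three classes. All class parameters below are hypothetical.

```python
# Sketch of the M-class general-Gaussian rule: for each class compute
# Q_j(x) = (x - M_j)^T K_j^{-1} (x - M_j) - 2 ln P(C_j) + ln|K_j|
# and select the class with the minimum.  Parameters are hypothetical.
import numpy as np

means = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, 3.0])]
covs = [np.eye(2), 2 * np.eye(2), np.array([[1.0, 0.5], [0.5, 1.0]])]
priors = [1 / 3, 1 / 3, 1 / 3]

def q_decide(x):
    x = np.asarray(x, dtype=float)
    Q = [(x - M) @ np.linalg.inv(K) @ (x - M)        # squared Mahalanobis
         - 2 * np.log(P) + np.log(np.linalg.det(K))  # bias terms
         for M, K, P in zip(means, covs, priors)]
    return int(np.argmin(Q)) + 1
```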
Slide 9: M-Class Gaussian, Case 1: K1 = K2 = ... = KM = K
Equivalent rule for MPE and MAP: select class Cj if Lj(x) is MAXIMUM, where

    Lj(x) = Mj^T K^{-1} x - ½ Mj^T K^{-1} Mj + ln P(Cj)
            (dot product)   (bias)

a linear operation on the observation vector x.
Review 7
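The equal-covariance M-class rule is the linear counterpart of the previous slide's quadratic one: a dot product plus a bias per class, maximized. A minimal sketch with hypothetical means, covariance, and priors:

```python
# Sketch of the equal-covariance M-class rule: maximize
# L_j(x) = M_j^T K^{-1} x - 0.5 M_j^T K^{-1} M_j + ln P(C_j).
# Parameters are hypothetical.
import numpy as np

means = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 2.0])]
K = np.array([[1.0, 0.0], [0.0, 1.0]])
priors = [0.5, 0.25, 0.25]
Kinv = np.linalg.inv(K)

def l_decide(x):
    x = np.asarray(x, dtype=float)
    L = [M @ Kinv @ x              # dot-product term
         - 0.5 * (M @ Kinv @ M)    # bias: mean energy
         + np.log(P)               # bias: log prior
         for M, P in zip(means, priors)]
    return int(np.argmax(L)) + 1
```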
Slide 10: Bayes Decision Rule in Likelihood Ratio Space, M-Class Case (derivation)
We know that the Bayes decision rule for the M-class case is:
if yi(x) < yj(x) for all j ≠ i, then decide x is from Ci, where

    yi(x) = Σ_{j=1}^{M} Cij p(x | Cj) P(Cj)
Slide 11: Dividing through by p(x | CM) gives sufficient statistics vi(x) as follows:

    vi(x) = Σ_{j=1}^{M} Cij Lj(x) P(Cj),  where Lj(x) = p(x | Cj) / p(x | CM)

and in particular LM(x) = p(x | CM) / p(x | CM) = 1.
Therefore the decision rule becomes: if vi(x) < vj(x) for all j ≠ i, decide x is from Ci.
Slide 12: Bayes Decision Rule in the Likelihood Ratio Space
The dimension of the likelihood ratio space is always one less than the number of classes (M - 1).
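The dimension claim follows because LM(x) = 1 carries no information, leaving M - 1 free ratio coordinates. A small sketch of the mapping, with hypothetical 1-D Gaussian densities and p(x | C3) as the reference:

```python
# Sketch of the likelihood-ratio-space mapping: dividing each p(x|Cj) by
# the reference p(x|CM) yields coordinates (L_1(x), ..., L_{M-1}(x)),
# a space of dimension M - 1.  Densities are hypothetical.
import math

def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

densities = [lambda x: gauss_pdf(x, -1.0, 1.0),  # p(x|C1), hypothetical
             lambda x: gauss_pdf(x, 0.0, 1.0),   # p(x|C2), hypothetical
             lambda x: gauss_pdf(x, 1.0, 1.0)]   # p(x|C3), the reference

def to_ratio_space(x):
    ref = densities[-1](x)
    return [p(x) / ref for p in densities[:-1]]   # (L1(x), L2(x)): 2-D
```

For M = 3 classes every scalar observation maps to a point in a 2-D likelihood ratio space.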
Slide 13: Example, M-Class Case
Given three classes C1, C2, and C3. The noise samples Nk are statistically independent for all classes, with Nk ~ N(0, 1).
Slide 14: This problem is an abstraction of a tri-phase communication system.
Determine:
(a) Find the minimum probability of error (MPE) decision rule.
(b) Illustrate your decision regions in the observation space.
(c) Use your decision rule to classify the observed pattern vector x = [0.4, 0.7]^T.
(d) Calculate the probability of error P(error).
Slide 15: Solution
(a) Find the MPE decision rule.
The problem is Gaussian with equal scaled-identity covariance matrices, so the optimum decision rule is as follows:
Slide 16: Select the class with minimum Li(x); for our example we have:
Slide 17: Dropping the -½ ln(1/3) term, since it appears in all the Li(x), new statistics s1(x), s2(x), and s3(x) can be defined, and an equivalent decision rule becomes:
Slide 18: This decision rule can be rewritten in terms of the observation x as follows, where in the observation space X, Rk is the region where Ck is decided.
Slide 19: Decision Regions in the Observation Space
[Figure: decision regions R1, R2, R3 in the observation space X]
Slide 20: (c) Classify the pattern vector x.

    x = [x1, x2]^T = [0.4, 0.7]^T

x is a member of R1; therefore x is classified as coming from class C1.
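With equal priors and K = I, the MPE rule reduces to nearest-mean classification, so step (c) is a distance comparison. The slide's actual signal means are not in this transcript, so the three tri-phase means below (unit vectors 120° apart, starting at 90°) are an assumed illustration only.

```python
# Sketch of step (c) as nearest-mean classification.  The three means are
# HYPOTHETICAL tri-phase constellation points (not from the slides):
# unit vectors at 90, 210, and 330 degrees.
import math

means = [(0.0, 1.0),
         (-math.sqrt(3) / 2, -0.5),
         (math.sqrt(3) / 2, -0.5)]

def nearest_mean(x):
    """Return the 1-based index of the class mean closest to x."""
    d2 = [(x[0] - m[0]) ** 2 + (x[1] - m[1]) ** 2 for m in means]
    return d2.index(min(d2)) + 1

x = (0.4, 0.7)
print(nearest_mean(x))   # prints 1: consistent with x lying in R1
```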
Slide 21: (d) Determine the probability of error.

    P(error) = 1 - P(correct)
             = 1 - P(correct | C1) P(C1) - P(correct | C2) P(C2) - P(correct | C3) P(C3)
Slide 24: P(error) = 0.42728
Slide 25: Summary
1. M-Class Case and Gaussian Review
2. M-Class Case in Likelihood Ratio Space
3. Example: Vector Observation, M-Class
Slide 26: End of Lecture 12