Title: Facial Recognition as a Pattern Recognition Problem
1. Facial Recognition as a Pattern Recognition Problem
- Hongcheng Wang
- Beckman Institute, UIUC
2. Contents
- Why Face Recognition?
- Face Recognition Approaches
- Eigenface
- FisherFace
- SVM, LLE, IsoMAP, NN, ...
- Experimental Results
- Conclusions
3. Why Face Recognition?
- Non-intrusive.
- Growing interest in biometric authentication.
- Data required is easily obtained and readily
available.
4. Face Recognition Approaches
- Eigenface
  - Finds the principal components of the face image distribution
- Fisherface
  - Uses within-class information to maximise class separation
- Kernel Eigenface vs. Kernel Fisherface
  - Take higher-order correlations into account
- LLE, IsoMAP, Charting, SVM, Neural networks
5. Eigenface (Turk & Pentland, 1991) - 1
- Use Principal Component Analysis (PCA) to determine the principal components of the distribution of face images.
6. Eigenface - 2
- Create an image subspace (face space) which best represents the variation among faces.
- Alike faces occupy nearby points in face space.
- Compare two faces by projecting the images into face space and measuring the Euclidean distance between them.
7. Eigenface - 3
- Similarly, the following 1x2 pixel images are converted into the vectors shown.
- Each image occupies a different point in image space.
- Similar images are near each other in image space.
- Different images are far from each other in image space.
8. Eigenface - 4
- A 256x256 pixel image of a face occupies a single point in 65,536-dimensional image space.
- Images of faces occupy a small region of this large image space.
- Similarly, different faces should occupy different areas of this smaller region.
- We can identify a face by finding the nearest known face in image space.
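This nearest-face idea can be sketched numerically; everything below (the tiny 2x2 "images", the labels) is made up for illustration:

```python
import numpy as np

# Toy gallery: each known face is a tiny 2x2 image flattened into a
# 4-dimensional vector, i.e. a single point in image space.
known_faces = np.array([
    [10.0, 200.0, 30.0, 40.0],   # person A
    [90.0, 80.0, 70.0, 60.0],    # person B
])
labels = ["A", "B"]

# A new image to identify: a slightly different shot of person A.
new_face = np.array([12.0, 198.0, 28.0, 41.0])

# Identify by the nearest known face (Euclidean distance in image space).
dists = np.linalg.norm(known_faces - new_face, axis=1)
print(labels[int(np.argmin(dists))])  # A
```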
9. Eigenface - 5
- However,
  - even tiny changes in lighting, expression or head orientation cause the location in image space to change dramatically;
  - large amounts of storage are required.
- What should we do? Dimensionality reduction!
10. Eigenface - 6
- Training Step
  - Align a set of face images (the training set T1, T2, ..., TM).
  - Compute the average face image: Ψ = (1/M) Σn Tn.
  - Compute the difference image for each image in the training set: Φi = Ti − Ψ.
  - Compute the covariance matrix of this set of difference images: C = (1/M) Σn Φn Φn^T.
  - Compute the eigenvectors of the covariance matrix.
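A minimal NumPy sketch of these training steps, using random arrays as stand-in face images (the sizes M, N1, N2 and the number of kept eigenfaces k are arbitrary here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: M aligned face images of N1 x N2 pixels,
# one flattened image per column.
M, N1, N2 = 8, 16, 16
T = rng.random((N1 * N2, M))

# Average face image: Psi = (1/M) * sum_n T_n
psi = T.mean(axis=1, keepdims=True)

# Difference images: Phi_i = T_i - Psi
Phi = T - psi

# Covariance matrix of the difference images: C = (1/M) * Phi * Phi^T
C = (Phi @ Phi.T) / M

# The eigenvectors of C are the eigenfaces; keep the top k.
# (In practice one diagonalizes the small M x M matrix Phi^T Phi
# instead, since M is far smaller than the number of pixels.)
evals, evecs = np.linalg.eigh(C)          # eigh: C is symmetric
order = np.argsort(evals)[::-1]
k = 4
W = evecs[:, order[:k]]                   # columns are eigenfaces
print(W.shape)  # (256, 4)
```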
11. Eigenface - 7
- Here is an example visualization of the eigenvectors of the covariance matrix (which are called eigenfaces).
- These are the first 4 eigenvectors, from a training set of 23 images.
- Selecting only the top M eigenfaces reduces the dimensionality.
- Too few eigenfaces results in too much information loss, and hence less discrimination between faces.
12. Eigenface - 8
- Recognition Step
  - Project a new face T into the eigenface space: Ω = W^T (T − Ψ)
  - W: the matrix whose columns are the eigenfaces
  - Compare the projections
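A sketch of this recognition step, with randomly generated stand-ins for the eigenfaces W, the average face Ψ, and a small gallery of known faces:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 256, 4

# Stand-ins: orthonormal eigenfaces W, average face psi, 3 known faces.
W = np.linalg.qr(rng.standard_normal((d, k)))[0]
psi = rng.random((d, 1))
gallery = rng.random((d, 3))

# Project every known face into face space: Omega = W^T (T - Psi)
omegas = W.T @ (gallery - psi)

# Project a probe image the same way (a noisy copy of gallery face 1).
probe = gallery[:, [1]] + 0.01 * rng.standard_normal((d, 1))
omega_probe = W.T @ (probe - psi)

# Recognize by the smallest Euclidean distance between projections.
dists = np.linalg.norm(omegas - omega_probe, axis=0)
print(int(np.argmin(dists)))
```

The noisy copy of face 1 projects almost exactly onto face 1's projection, so the smallest distance picks it out.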
13. Eigenface - 9
- Matlab code (training)

    load faces.mat
    mn = mean(c')';                 % compute mean of each class
    for i = 1:P
        msc(:,i) = c(:,i) - mn;
    end
    % find an orthonormal basis using K-L (PCA)
    cov = msc * msc';               % covariance matrix
    [V,D] = eig(cov);  vects = V;
    % order the eigenvectors according to the eigenvalues
    for i = 1:N, evals(i) = D(i,i); end
    [a,b] = sort(evals);
    for i = 1:N
        ind = b(N-i+1);
        if (a(N-i+1) > tol)         % non-zero eigenvalues
            u(:,i) = vects(:,ind);  ev(i) = D(ind,ind);
        end
    end
    proj = u' * msc;

- Matlab code (test)

    test = reshape(double(imread('images/r3.jpg')), N1*N2, 1);
    mst = test - mn;                % subtract the mean image
    projt = u' * mst;               % project into the eigenspace defined by u
    % similarity measure: the L2 norm (Euclidean distance)
    proj_train = u' * (c - [mn mn mn mn]);
    diff = proj_train - [projt projt projt projt];
    L = zeros(1,4);
    for j = 1:4
        for i = 1:3
            L(j) = L(j) + (diff(i,j))^2;
        end
    end
    [a,b] = sort(L);
    fprintf('the image we found is ');
    display(b(1));
14. Eigenface - 10
- Advantages
  - Discovers structure of data lying near a linear subspace of the input space
  - Non-iterative, globally optimal solution
- Disadvantages
  - Not capable of discovering nonlinear degrees of freedom
  - Registration and scaling issues
  - Very sensitive to changes in lighting conditions
  - The scatter being maximised is due not only to the between-class scatter that is useful for classification, but also to the within-class scatter, which is unwanted information for classification purposes. So a PCA projection is optimal for reconstruction from a low-dimensional basis, but may not be optimal for discrimination.
15. Fisherface - 1
- Uses Linear Discriminant Analysis (LDA), also known as Fisher's Linear Discriminant (FLD).
- Eigenfaces attempt to maximise the total scatter of the training images in face space, while Fisherfaces attempt to maximise the between-class scatter while minimising the within-class scatter. In other words, FLD moves images of the same face closer together, while moving images of different faces further apart.
- Reference: "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", Belhumeur, Hespanha and Kriegman, 1997.
16. Fisherface - 2
- (Figure: a poor projection mixes the classes; a good projection separates them.)
17. Fisherface - 3
- N sample images
- c classes
- Average of each class
- Total average
18. Fisherface - 4
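In the standard FLD notation (N sample images x_k falling into c classes X_1, ..., X_c, with class X_i containing N_i samples), the class means, the total mean, and the two scatter matrices are:

```latex
\mu_i = \frac{1}{N_i} \sum_{x_k \in X_i} x_k,
\qquad
\mu = \frac{1}{N} \sum_{k=1}^{N} x_k

S_B = \sum_{i=1}^{c} N_i\, (\mu_i - \mu)(\mu_i - \mu)^T,
\qquad
S_W = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T
```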
19. Fisherface - 5
- After projection: y = W^T x
- Between-class scatter (of the y's)
- Within-class scatter (of the y's)
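After projecting each sample to y_k = W^T x_k, the scatter matrices of the projected samples are related to the original between-class scatter S_B and within-class scatter S_W by:

```latex
\tilde{S}_B = W^T S_B\, W,
\qquad
\tilde{S}_W = W^T S_W\, W
```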
20. Fisherface - 6
21. Fisherface - 7
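Fisherfaces choose the projection that maximises the ratio of projected between-class scatter to projected within-class scatter; in the standard formulation:

```latex
W_{\mathrm{opt}}
= \arg\max_{W} \frac{\left| W^T S_B\, W \right|}{\left| W^T S_W\, W \right|}
= [\, w_1 \; w_2 \; \cdots \; w_m \,],
\qquad
S_B\, w_i = \lambda_i\, S_W\, w_i
```

The columns w_i are generalized eigenvectors of the pair (S_B, S_W); since S_B has rank at most c − 1, there are at most c − 1 nonzero eigenvalues, so m ≤ c − 1.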
22. Fisherface - 8
- The data dimension is much larger than the number of samples.
- Hence the within-class scatter matrix is singular.
23. Fisherface - 9: PCA + FLD
- First project with PCA to a lower-dimensional space.
- Then project with FLD to the final discriminant space.
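The two-stage projection can be sketched in NumPy with synthetic Gaussian classes standing in for face images (in Belhumeur et al.'s formulation, PCA reduces to N − c dimensions so the within-class scatter becomes nonsingular, and FLD then reduces to c − 1):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: c classes of n_per samples each in a p-dimensional
# "pixel" space with p >> N, so Sw in pixel space would be singular.
c, n_per, p = 3, 5, 50
N = c * n_per
X = np.hstack([rng.normal(loc=3.0 * i, size=(p, n_per)) for i in range(c)])
y = np.repeat(np.arange(c), n_per)

# Stage 1: PCA down to N - c dimensions.
mu = X.mean(axis=1, keepdims=True)
A = X - mu
U, _, _ = np.linalg.svd(A, full_matrices=False)
Wpca = U[:, : N - c]
Z = Wpca.T @ A

# Stage 2: FLD in the reduced space, down to c - 1 dimensions.
d = Z.shape[0]
Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
zmu = Z.mean(axis=1, keepdims=True)
for i in range(c):
    Zi = Z[:, y == i]
    mi = Zi.mean(axis=1, keepdims=True)
    Sw += (Zi - mi) @ (Zi - mi).T
    Sb += Zi.shape[1] * (mi - zmu) @ (mi - zmu).T

# Generalized eigenproblem Sb w = lambda Sw w, via Sw^{-1} Sb.
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(evals.real)[::-1]
Wfld = evecs[:, order[: c - 1]].real

# Combined projection: pixel space -> (c - 1)-dimensional space.
Wopt = Wpca @ Wfld
Y = Wopt.T @ (X - mu)
print(Y.shape)  # (2, 15)
```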
24. Fisherface - 10
- As with the eigenfaces, the column vectors of Wopt are of dimension d, and can thus be thought of as images, or Fisherfaces.
25. Fisherface - 11
- Matlab code

    N = 4;                          % input: how many classes there are
    M = 6;                          % input: how many images are in each class
    p = ...;                        % input: how many pixels are in one image
    load face                       % read in the training images
    % calculate the mean for each class
    m = zeros(p,N);
    Sw = zeros(p,p);  Sb = zeros(p,p);
    total_m = mean(X')';            % total mean of all samples
    for i = 1:N
        m(:,i) = mean(X(:,((i-1)*M+1):(i*M))')';
        S = zeros(p,p);
        % calculate the within-class scatter matrix
        for j = ((i-1)*M+1):(i*M)
            S = S + (X(:,j)-m(:,i))*(X(:,j)-m(:,i))';
        end
        Sw = Sw + S;
        % calculate the between-class scatter matrix
        Sb = Sb + (m(:,i)-total_m)*(m(:,i)-total_m)';
    end
    % calculate the generalized eigenvectors and eigenvalues
    [evec,eval] = eig(Sb,Sw);
    e = real(evec);
    % ignore the (near-)zero eigenvalues and sort in descending order
    nz_eval_ind = find(eval > 0.0001);
    nz_eval = eval(nz_eval_ind);
    for i = 1:length(nz_eval_ind)
        nz_evec(:,i) = evec(:,nz_eval_ind(i));
    end
    [seval,Ind] = sort(nz_eval);
    Ind = fliplr(Ind);
    % build the eigenspace
    v = ones(p,N-1);
    for i = 1:N-1
        v(:,i) = nz_evec(:,Ind(i));
        for j = 1:p
            v(j,i) = real(v(j,i));
        end
    end
    % project the images onto the eigenspace
    Y = zeros(N-1,N*M);
    for i = 1:N*M
        Y(:,i) = v'*X(:,i);
    end
26. Experimental Results - 1
Variation in facial expression, eyewear, and lighting
- Input: 160 images of 16 people
- Train: 159 images
- Test: 1 image
- With glasses / without glasses
- 3 lighting conditions
- 5 expressions
27. Experimental Results - 2
28. Conclusion
- All methods perform well when conditions do not change much.
- Fisherfaces gives the best results.
29. The End
Thank You!