Title: Face Recognition using Tensor Analysis
1. Face Recognition using Tensor Analysis
- Presented by
- Prahlad R Enuganti
2. Face Recognition
- Why is it necessary?
- Human Computer Interaction
- Authentication
- Surveillance
- Problems include change in
- Illumination
- Expression
- Pose
- Aging
3. Existing Techniques
Resistance to variations in:
Technique                                       | Pose      | Illumination | Expression
EigenFaces (Turk et al., 1991)                  | Average   | Poor         | Average
Support Vector Machines (Guo et al., 2001)      | Good      | Good         | Good
Multi-resolution analysis (Ekenel et al., 2005) | Good      | Good         | Very Good
TensorFaces (Vasilescu et al., 2004)            | Very Good | Very Good    | Very Good
4. Tensor Algebra (Vasilescu et al., 2002)
- Higher-order generalization of vectors and matrices.
- An Nth-order tensor is written A ∈ R^(I1 × I2 × ... × IN), with each element written a_{i1 i2 ... iN}.
- The mode-n vectors of a tensor are obtained by varying index i_n while keeping the other indices fixed. They are obtained by flattening the tensor A, and the resulting matrix is denoted A(n) (see the code sketch below).
Example of flattening a 3rd order tensor
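Below is a minimal sketch of mode-n flattening in NumPy. The helper name `unfold` and the ordering of the remaining indices into columns are assumptions; unfolding conventions differ between papers.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n flattening A(n): each column of the resulting matrix is a
    mode-n vector (fiber); the matrix has shape (I_n, product of the
    remaining dimensions)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Example: flattening a 3rd-order tensor of shape (2, 3, 4)
A = np.arange(24).reshape(2, 3, 4)
print(unfold(A, 0).shape)  # (2, 12)
print(unfold(A, 1).shape)  # (3, 8)
print(unfold(A, 2).shape)  # (4, 6)
```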
5. Tensor Decomposition
- In the 2-D case, a matrix D can be decomposed using the SVD as D = U1 Σ U2^T, where
- Σ is the diagonal matrix of singular values,
- U1 and U2 are the orthogonal column-space and row-space matrices respectively.
- In terms of mode-n products, this can be rewritten as D = Σ ×1 U1 ×2 U2.
- For a tensor of order N, the N-mode SVD can be expressed as D = Z ×1 U1 ×2 U2 ... ×N UN,
- where Z is known as the core tensor and is analogous to the diagonal singular value matrix in the 2-D SVD (see the sketch below).
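A small sketch, reusing the `unfold` helper above plus an assumed `mode_n_product` helper, that checks numerically that D = U1 Σ U2^T and D = Σ ×1 U1 ×2 U2 give the same matrix.

```python
import numpy as np

def fold(matrix, mode, shape):
    """Inverse of unfold: reshape a mode-n flattening back into a tensor."""
    moved = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(moved), 0, mode)

def mode_n_product(tensor, matrix, mode):
    """Mode-n product: multiply the mode-n flattening by `matrix` on the left."""
    new_shape = list(tensor.shape)
    new_shape[mode] = matrix.shape[0]
    return fold(matrix @ unfold(tensor, mode), mode, new_shape)

rng = np.random.default_rng(0)
D = rng.standard_normal((5, 4))
U1, s, U2t = np.linalg.svd(D, full_matrices=False)
Sigma = np.diag(s)

# D = Sigma x1 U1 x2 U2, with U2 = U2t.T
D_rebuilt = mode_n_product(mode_n_product(Sigma, U1, 0), U2t.T, 1)
print(np.allclose(D, U1 @ Sigma @ U2t), np.allclose(D, D_rebuilt))  # True True
```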
6. N-mode SVD Algorithm
- For n = 1, 2, ..., N, compute the matrix Un by calculating the SVD of the flattened matrix D(n) and setting Un to be the left (orthogonal) matrix of that SVD.
- The core tensor is then obtained as Z = D ×1 U1^T ×2 U2^T ... ×N UN^T (see the sketch below).
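A sketch of the N-mode SVD algorithm described above, reusing the assumed `unfold` and `mode_n_product` helpers from the earlier snippets.

```python
import numpy as np

def n_mode_svd(D):
    """Return (Z, [U1, ..., UN]) such that D = Z x1 U1 x2 U2 ... xN UN."""
    # Step 1: Un = left singular vectors of the mode-n flattening D(n)
    factors = [np.linalg.svd(unfold(D, n), full_matrices=False)[0]
               for n in range(D.ndim)]
    # Step 2: core tensor Z = D x1 U1^T x2 U2^T ... xN UN^T
    Z = D
    for n, Un in enumerate(factors):
        Z = mode_n_product(Z, Un.T, n)
    return Z, factors

# Verify on a random 3rd-order tensor: the core and factors reconstruct D
rng = np.random.default_rng(0)
D = rng.standard_normal((4, 5, 6))
Z, factors = n_mode_svd(D)
D_rebuilt = Z
for n, Un in enumerate(factors):
    D_rebuilt = mode_n_product(D_rebuilt, Un, n)
print(np.allclose(D, D_rebuilt))  # True
```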
7. TensorFaces
- Our data here consist of 5 variables: people, pixels, pose, illumination and expression.
- We therefore perform the N-mode decomposition of the 5th-order data tensor and obtain D = Z ×1 Upeople ×2 Uviews ×3 Uillum ×4 Uexpr ×5 Upixels (see the sketch below).
- The main advantage of tensor analysis is that it maps all images of a person, regardless of the other variables, to the same coefficient vector, giving zero intra-class scatter.
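A hypothetical sketch of the TensorFaces decomposition applied to a synthetic data tensor, reusing the `n_mode_svd` helper above. The mode sizes and random data below are illustrative assumptions, not the dataset used in the original work.

```python
import numpy as np

# Synthetic 5th-order data tensor: people x views (pose) x illuminations
# x expressions x pixels (sizes are made up for illustration)
n_people, n_views, n_illums, n_exprs, n_pixels = 10, 5, 4, 3, 32 * 32
rng = np.random.default_rng(0)
D = rng.standard_normal((n_people, n_views, n_illums, n_exprs, n_pixels))

# D = Z x1 Upeople x2 Uviews x3 Uillum x4 Uexpr x5 Upixels
Z, (U_people, U_views, U_illum, U_expr, U_pixels) = n_mode_svd(D)

# Row p of U_people is the coefficient vector for person p; it is shared by
# every view, illumination and expression of that person.
print(U_people.shape)  # (10, 10)
```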
8. ISOMAP (Isometric Feature Mapping), Tenenbaum et al.
- Finds a meaningful low-dimensional manifold of high-dimensional data by preserving the geodesic distances between points.
- Unlike PCA or MDS, ISOMAP is capable of discovering even the nonlinear degrees of freedom.
- It is guaranteed to converge asymptotically to the true structure.
9. ISOMAP: How does it work?
- Builds a weighted neighborhood graph over the data points using either the ε-neighborhood rule or the k-nearest-neighbor rule.
- Estimates the geodesic distances between all pairs of points on the lower-dimensional manifold by computing shortest-path distances in the graph.
- Applies classical MDS to construct an embedding in the lower-dimensional space that best preserves the manifold's estimated intrinsic geometry (see the sketch below).
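A minimal sketch of the three steps above using the k-nearest-neighbor rule. The parameter values and the swiss-roll usage example are illustrative assumptions, and the sketch assumes the neighborhood graph is connected (otherwise some shortest-path distances are infinite).

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap(X, k=10, n_components=2):
    # 1. Weighted k-nearest-neighbor graph (edge weights = Euclidean distances)
    graph = kneighbors_graph(X, n_neighbors=k, mode="distance")
    # 2. Geodesic distances approximated by shortest paths in the graph
    G = shortest_path(graph, method="D", directed=False)
    # 3. Classical MDS on the geodesic distance matrix
    n = G.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * H @ (G ** 2) @ H              # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, idx] * np.sqrt(eigvals[idx])

# Usage: embed a small synthetic swiss roll into 2-D
from sklearn.datasets import make_swiss_roll
X, _ = make_swiss_roll(n_samples=500, random_state=0)
Y = isomap(X, k=10, n_components=2)
print(Y.shape)  # (500, 2)
```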