SVD (Singular Value Decomposition) and Its Applications - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: SVD (Singular Value Decomposition) and Its Applications


1
SVD (Singular Value Decomposition) and Its Applications
  • Joon Jae Lee
  • 2006.01.10

2
The plan today
  • Singular Value Decomposition
  • Basic intuition
  • Formal definition
  • Applications

3
LCD Defect Detection
  • Defect types of TFT-LCD: Point, Line, Scratch, Region

4
Shading
  • Scanners give us raw point cloud data
  • How to compute normals to shade the surface?

5
Hole filling
6
Eigenfaces
  • The same principal component analysis can be applied to images

7
Video Copy Detection
  • same image content, different source
  • same video source, different image content

(Figure: AVI and MPEG face images and their hue histograms)
8
3D animations
  • Connectivity is usually constant (at least on
    large segments of the animation)
  • The geometry changes in each frame → a vast amount of data, huge file size!

13 seconds, 3000 vertices/frame, 26 MB
9
Geometric analysis of linear transformations
  • We want to know what a linear transformation A
    does
  • Need some simple and comprehensible representation of the matrix A.
  • Let's look at what A does to some vectors
  • Since A(αv) = αA(v), it's enough to look at vectors v of unit length

10
The geometry of linear transformations
  • A linear (non-singular) transform A always takes
    hyper-spheres to hyper-ellipses.

11
The geometry of linear transformations
  • Thus, one good way to understand what A does is
    to find which vectors are mapped to the main
    axes of the ellipsoid.

12
Geometric analysis of linear transformations
  • If we are lucky: A = V Λ Vᵀ, V orthogonal (true if A is symmetric)
  • The eigenvectors of A are the axes of the ellipse

13
Symmetric matrix eigen decomposition
  • In this case A is just a scaling matrix. The
    eigen decomposition of A tells us which
    orthogonal axes it scales, and by how much

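A minimal numpy sketch (added for illustration, not part of the original slides) of this point: for a symmetric A, the eigendecomposition gives an orthogonal V and the scale factors along its columns. The matrix here is just an example.

```python
import numpy as np

# A small symmetric matrix: its eigendecomposition A = V diag(lam) V^T
# has an orthogonal V, so A only scales along the eigenvector axes.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eigh(A)   # eigh is the routine for symmetric matrices

print(np.allclose(V @ V.T, np.eye(2)))          # V is orthogonal
print(np.allclose(V @ np.diag(lam) @ V.T, A))   # A = V diag(lam) V^T
```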
14
General linear transformations SVD
  • In general A will also contain rotations, not
    just scales

(Figure: A maps the unit circle (radius 1) to an ellipse with semi-axes σ1 and σ2)
15
General linear transformations SVD
(Figure: the map A from the unit circle to the ellipse with semi-axes σ1, σ2, decomposed into an orthonormal matrix, a scaling, and another orthonormal matrix)
16
SVD more formally
  • SVD exists for any matrix
  • Formal definition:
  • For square matrices A ∈ ℝ^(n×n), there exist orthogonal matrices U, V ∈ ℝ^(n×n) and a diagonal matrix Σ, such that all the diagonal values σi of Σ are non-negative and A = U Σ Vᵀ


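A short numpy sketch (illustrative, not from the slides) of the definition above: np.linalg.svd returns U, the singular values, and Vᵀ, and U Σ Vᵀ recovers A.

```python
import numpy as np

# Sketch: the SVD of a (random) square matrix, matching the definition above.
A = np.random.randn(4, 4)

U, s, Vt = np.linalg.svd(A)                     # A = U @ diag(s) @ Vt

print(np.allclose(U @ U.T, np.eye(4)))          # U is orthogonal
print(np.allclose(Vt @ Vt.T, np.eye(4)))        # V is orthogonal
print(np.all(s >= 0), np.all(np.diff(s) <= 0))  # sigma_i >= 0, sorted descending
print(np.allclose(U @ np.diag(s) @ Vt, A))      # A = U Sigma V^T
```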
17
SVD more formally
  • The diagonal values of Σ (σ1, …, σn) are called the singular values. It is customary to sort them: σ1 ≥ σ2 ≥ … ≥ σn
  • The columns of U (u1, …, un) are called the left singular vectors. They are the axes of the ellipsoid.
  • The columns of V (v1, …, vn) are called the right singular vectors. They are the preimages of the axes of the ellipsoid.


18
SVD is the workhorse of linear algebra
  • There are numerical algorithms to compute SVD. Once you have it, you have many things:
  • Matrix inverse → can solve square linear systems
  • Numerical rank of a matrix
  • Can solve least-squares systems
  • PCA
  • Many more

19
Matrix inverse and solving linear systems
  • Matrix inverse: A⁻¹ = (U Σ Vᵀ)⁻¹ = V Σ⁻¹ Uᵀ
  • So, to solve Ax = b: x = A⁻¹ b = V Σ⁻¹ Uᵀ b

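A small numpy sketch (illustrative, not from the slides) of these two formulas, assuming A is square and non-singular so that every σi > 0:

```python
import numpy as np

# Invert A and solve Ax = b through the SVD.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

U, s, Vt = np.linalg.svd(A)
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T   # A^-1 = V Sigma^-1 U^T
x = A_inv @ b                           # x = V Sigma^-1 U^T b

print(np.allclose(A @ x, b))            # True
```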
20
Matrix rank
  • The rank of A is the number of non-zero singular
    values

(Figure: an m × n matrix Σ with diagonal entries σ1, σ2, …, σn)

21
Numerical rank
  • If there are very small singular values, then A is close to being singular. We can set a threshold t, so that numeric_rank(A) = #{i : σi > t}
  • If rank(A) < n then A is singular. It maps the entire space ℝⁿ onto some subspace, like a plane (so A is some sort of projection).

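A possible numpy sketch of the numerical rank (the default threshold below is one common heuristic, not necessarily the t meant on the slide):

```python
import numpy as np

def numeric_rank(A, t=None):
    # Numerical rank = number of singular values above the threshold t.
    s = np.linalg.svd(A, compute_uv=False)
    if t is None:
        t = max(A.shape) * np.finfo(A.dtype).eps * s[0]
    return int(np.sum(s > t))

A = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])   # rank-1 by construction
print(numeric_rank(A))                           # 1
```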
22
Solving least-squares systems
  • We tried to solve Ax = b when A was rectangular
  • Seeking solutions in the least-squares sense


23
Solving least-squares systems
  • We proved that when A is full-rank, the least-squares solution satisfies AᵀA x = Aᵀ b (the normal equations).
  • So AᵀA = (U Σ Vᵀ)ᵀ (U Σ Vᵀ) = V ΣᵀΣ Vᵀ = V diag(σ1², …, σn²) Vᵀ
24
Solving least-squares systems
  • Substituting in the normal equations:
    (AᵀA)⁻¹ = V diag(1/σ1², …, 1/σn²) Vᵀ
    x = (AᵀA)⁻¹ Aᵀ b = V diag(1/σ1, …, 1/σn) Uᵀ b
25
Pseudoinverse
  • The matrix we found is called the pseudoinverse
    of A.
  • Definition using the reduced SVD: A⁺ = V Σ⁺ Uᵀ, with Σ⁺ = diag(1/σ1, …, 1/σn)

If some of the σi are zero, put zero instead of 1/σi.
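A sketch of this definition in numpy (the helper name pinv_svd and the tolerance are illustrative choices); numpy's built-in np.linalg.pinv does the same thing:

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    # Pseudoinverse from the reduced SVD, with 1/sigma_i replaced by 0
    # whenever sigma_i is (numerically) zero.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[s > tol] = 1.0 / s[s > tol]
    return Vt.T @ np.diag(s_inv) @ U.T

A = np.random.randn(5, 3)
print(np.allclose(pinv_svd(A), np.linalg.pinv(A)))   # True for this full-rank A
```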
26
Pseudo-inverse
  • The pseudoinverse A⁺ exists for any matrix A.
  • Its properties:
  • If A is m×n then A⁺ is n×m
  • Acts a little like a real inverse: A A⁺ A = A and A⁺ A A⁺ = A⁺

27
Solving least-squares systems
  • When A is not full-rank, AᵀA is singular
  • There are multiple solutions to the normal equations
  • Thus, there are multiple solutions to the least-squares problem.
  • The SVD approach still works! In this case it
    finds the minimal-norm solution

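A small numpy illustration (with a made-up rank-deficient A) of the minimal-norm property: any vector in the null space of A can be added to a solution without changing the residual, and the pseudoinverse solution is the shortest one.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0],
              [3.0, 3.0]])      # rank 1: the two columns are identical
b = np.array([1.0, 2.0, 3.0])

x_min = np.linalg.pinv(A) @ b            # minimal-norm least-squares solution
x_other = x_min + np.array([1.0, -1.0])  # add a null-space vector of A

print(np.allclose(A @ x_min, A @ x_other))              # same residual
print(np.linalg.norm(x_min) < np.linalg.norm(x_other))  # but smaller norm
```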
28
PCA: the general idea
  • PCA finds an orthogonal basis that best represents a given data set.
  • The sum of squared distances from the x axis is minimized.

29
PCA: the general idea

(Figure: the data with its principal axes v1 and v2, and the projection onto v1)
The projected data set approximates the original data set.
This line segment approximates the original data set.
30
PCA: the general idea
  • PCA finds an orthogonal basis that best represents a given data set.
  • PCA finds a best-approximating plane (again, in terms of Σ distances²)

(Figure: a 3D point set in the standard basis x, y, z)
31
For approximation
  • In general dimension d, the eigenvalues are sorted in descending order:
    λ1 ≥ λ2 ≥ … ≥ λd
  • The eigenvectors are sorted accordingly.
  • To get an approximation of dimension d' < d, we take the first d' eigenvectors and look at the subspace they span (d' = 1 is a line, d' = 2 is a plane)

32
For approximation
  • To get an approximating set, we project the original data points onto the chosen subspace:
    xi = m + α1 v1 + α2 v2 + … + αd' vd' + … + αd vd
  • Projection:
    xi' = m + α1 v1 + α2 v2 + … + αd' vd' + 0·vd'+1 + … + 0·vd

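A numpy sketch of this projection on synthetic data (the elongated random cloud and d' = 1 are only for illustration): PCA via the SVD of the centered points, then reconstruction from the first d' coefficients.

```python
import numpy as np

X = np.random.randn(100, 3) @ np.diag([5.0, 1.0, 0.2])  # elongated point cloud
m = X.mean(axis=0)
Xc = X - m                                # centered data

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
d_prime = 1
V_d = Vt[:d_prime].T                      # first d' principal directions v_1..v_d'

alphas = Xc @ V_d                         # coefficients alpha_i of each point
X_approx = m + alphas @ V_d.T             # projections x_i' back in R^3

print(X_approx.shape)                     # (100, 3): points on a line through m
```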
33
Principal components
  • Eigenvectors that correspond to big eigenvalues are the directions in which the data has strong components (= large variance).
  • If the eigenvalues are all more or less the same, there is no preferable direction.
  • Note: the eigenvalues are always non-negative. (Think why!)

34
Principal components
  • Theres no preferable direction
  • S looks like this
  • Any vector is an eigenvector
  • There is a clear preferable direction
  • S looks like this
  • ? is close to zero, much smaller than ?.

35
Application: finding a tight bounding box
  • An axis-aligned bounding box agrees with the axes

(Figure: an axis-aligned bounding box given by minX, maxX, minY, maxY)
36
Application: finding a tight bounding box
  • Oriented bounding box: we find better axes!
37
Application: finding a tight bounding box
  • Oriented bounding box: we find better axes!

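A possible numpy sketch of this idea in 2D (random points, and PCA axes as a heuristic, which does not always give the provably tightest box): express the points in the principal axes, take min/max there, and map the box corners back.

```python
import numpy as np

P = np.random.randn(200, 2) @ np.array([[3.0, 1.0],
                                        [0.5, 1.0]])   # a slanted 2D point set
m = P.mean(axis=0)
_, _, Vt = np.linalg.svd(P - m, full_matrices=False)   # rows of Vt = PCA axes

Q = (P - m) @ Vt.T                        # point coordinates in the PCA axes
lo, hi = Q.min(axis=0), Q.max(axis=0)     # axis-aligned box in that frame

corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                    [hi[0], hi[1]], [lo[0], hi[1]]]) @ Vt + m
print(corners)                            # corners of the oriented bounding box
```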
38
Scanned meshes
39
Point clouds
  • Scanners give us raw point cloud data
  • How to compute normals to shade the surface?

40
Point clouds
  • Local PCA: take the third vector (the direction of least variance) as the normal

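A rough numpy sketch of local PCA normals (the brute-force neighbour search and the synthetic near-planar cloud are only for illustration):

```python
import numpy as np

def estimate_normal(points, idx, k=10):
    # Normal at points[idx]: the principal direction of its k nearest
    # neighbours with the smallest singular value ("the third vector").
    p = points[idx]
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[:k]]
    _, _, Vt = np.linalg.svd(nbrs - nbrs.mean(axis=0))
    return Vt[-1]

pts = np.random.rand(500, 3) * np.array([1.0, 1.0, 0.01])  # near-planar cloud
print(estimate_normal(pts, 0))                             # roughly (0, 0, +-1)
```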
41
Eigenfaces
  • The same principal component analysis can be applied to images

42
Eigenfaces
  • Each image is a vector in ℝ^(250·300)
  • Want to find the principal axes: vectors that best represent the input database of images

43
Reconstruction with a few vectors
  • Represent each image by the first few (n)
    principal components

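A numpy sketch of this reconstruction, with random data standing in for the face database and a much smaller image size than 250×300 so the example stays tiny:

```python
import numpy as np

h, w, n_imgs, n = 25, 30, 40, 10
faces = np.random.rand(n_imgs, h * w)          # rows = flattened images

mean_face = faces.mean(axis=0)
U, s, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
components = Vt[:n]                            # first n principal axes

coeffs = (faces - mean_face) @ components.T    # n coefficients per image
recon = mean_face + coeffs @ components        # reconstructed images

print(np.linalg.norm(faces - recon) / np.linalg.norm(faces))  # relative error
```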
44
Face recognition
  • Given a new image of a face, w ∈ ℝ^(250·300)
  • Represent w using the first n PCA vectors
  • Now find an image in the database whose
    representation in the PCA basis is the closest

(Figure: among the database vectors, the match w' is the one whose angle to w is the smallest)
45
SVD for animation compression
Chicken animation
46
3D animations
  • Each frame is a 3D model (mesh)
  • Connectivity: mesh faces

47
3D animations
  • Each frame is a 3D model (mesh)
  • Connectivity: mesh faces
  • Geometry: 3D coordinates of the vertices

48
3D animations
  • Connectivity is usually constant (at least on
    large segments of the animation)
  • The geometry changes in each frame → a vast amount of data, huge file size!

13 seconds, 3000 vertices/frame, 26 MB
49
Animation compression by dimensionality reduction
  • The geometry of each frame is a vector in ℝ^(3N) (N vertices)

(Figure: the f frames stacked as columns of a 3N × f matrix)
50
Animation compression by dimensionality reduction
  • Find a few vectors of ℝ^(3N) that will best represent our frame vectors!

(Figure: SVD of the 3N × f geometry matrix: U is 3N × f, Σ is f × f, Vᵀ is f × f)
51
Animation compression by dimensionality reduction
  • The first principal components are the important
    ones

(Figure: the first principal components u1, u2, u3)
52
Animation compression by dimensionality reduction
  • Approximate each frame by a linear combination of the first principal components
  • The more components we use, the better the
    approximation
  • Usually, the number of components needed is much
    smaller than f.

(Figure: a frame approximated as α1 u1 + α2 u2 + α3 u3)
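A numpy sketch of the whole pipeline (random numbers stand in for the real vertex data, and k = 20 components echoes the "20 out of 400" example below):

```python
import numpy as np

N, f, k = 3000, 400, 20
A = np.random.randn(3 * N, f)    # geometry matrix: one 3N-vector per frame

U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k = U[:, :k]                   # the chosen principal components u_1..u_k
coeffs = U_k.T @ A               # k coefficients alpha_i per frame

A_approx = U_k @ coeffs          # approximation of every frame
print(U_k.size + coeffs.size, "numbers stored instead of", A.size)
```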
53
Animation compression by dimensionality reduction
  • Compressed representation:
  • The chosen principal component vectors ui
  • Coefficients αi for each frame

Animation with only 2 principal components
Animation with 20 out of 400 principal components
54
LCD Defect Detection
  • Defect types of TFT-LCD: Point, Line, Scratch, Region

55
Pattern Elimination Using SVD
56
Pattern Elimination Using SVD
  • Cutting the test images

57
Pattern Elimination Using SVD
  • Singular value determination for the image reconstruction

58
Pattern Elimination Using SVD
  • Defect detection using SVD