1
Locally Linear Embedding (LLE)
  • L.K. Saul and S.T. Roweis
  • Presented by M. George Haddad
  • Under the supervision of Dr. Barbara Hammer

2
Contents
  • Introduction
  • What is LLE and what can it do?
  • The algorithm
  • Computing neighbors
  • Computing the weight matrix
  • Computing the projections
  • Complexity
  • Some nice examples
  • Final
  • References

3
1.Introduction
  • High dimensional data appears frequently in
    statistical pattern recognition
  • But the further processing often does not require
    all the properties of the data
  • We can therefore reduce the dimension while hardly
    affecting the relevant features of the data
  • Lower dimensionality → less required space and
    less processing time

4
2.What is LLE and what can it do?
  • LLE is a dimensionality reduction algorithm
  • Like PCA and MDS, it is an eigenvector method
  • PCA and MDS model linear variabilities in high
    dimensional data
  • LLE is simple to implement
  • Its optimization does not get stuck in local minima

5
2.What is LLE and what can it do?
  • The LLE projection identifies the underlying
    structure of the manifold
  • PCA and metric MDS can map faraway points on the
    manifold to nearby points in the embedding
  • LLE is capable of generating highly nonlinear
    embeddings
  • It is based on simple geometric intuitions
  • It has been used in audiovisual speech synthesis
    and in visual pattern recognition

6
An introductory example: the original shape
7
An introductory example (cont'd): the sampled data
8
An introductory example (cont'd): the LLE output
9
3.The Algorithm
  • Compute the neighbors of each data point Xi
  • Compute the weights Wij that best reconstruct
    each data point Xi from its neighbors, by
    minimizing the reconstruction error
  • Compute the vectors Yi that are best reconstructed
    by the weights Wij, again by minimizing the error
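The three steps can be tried end to end with an off-the-shelf implementation. Below is a minimal sketch using scikit-learn's LocallyLinearEmbedding; the swiss-roll data set and the parameter values are illustrative choices, not taken from the slides.

    # Minimal end-to-end sketch of the three steps (scikit-learn).
    # The data set and parameter values are illustrative assumptions.
    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import LocallyLinearEmbedding

    X, _ = make_swiss_roll(n_samples=1000)        # N points in D = 3 dimensions
    lle = LocallyLinearEmbedding(n_neighbors=12,  # K neighbors per point (step 1)
                                 n_components=2)  # target dimensionality d
    Y = lle.fit_transform(X)                      # steps 2 and 3: weights, then embedding
    print(Y.shape)                                # (1000, 2)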

10
3.1.Computing the neighbors
  • There are many ways of determining the neighbors
    of a point X, e.g.
  • fixing a number K of neighbors for each point and
    computing the K points nearest to X
  • choosing all the points within a ball of fixed
    radius centered at X
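A brute-force NumPy sketch of the fixed-K variant; the function name knn_indices is an assumption made here, not part of the slides.

    import numpy as np

    def knn_indices(X, k):
        """Brute-force K nearest neighbors of each row of X (N x D)."""
        # Squared Euclidean distances between all pairs of points.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        np.fill_diagonal(d2, np.inf)           # a point is not its own neighbor
        return np.argsort(d2, axis=1)[:, :k]   # indices of the k closest points

This is the O(DN²) worst case mentioned in the complexity slide; a K-D tree would bring it down to O(N log N).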

11
3.2.Computing the weight matrix
  • We minimize the reconstruction error, which is
    given by the equation
  • ε(W) = Σi | Xi − Σj Wij Xj |²
  • where j runs only over the neighbors of Xi

12
3.2.Computing the weight matrix (mathematical
stuff)
  • This is a constrained least squares problem:
    minimize ε(W) subject to Σj Wij = 1 for each i
  • With the local covariance Cjk = (Xi − Xj)·(Xi − Xk)
    over the neighbors of Xi, the solution for each
    point is Wij = Σk (C⁻¹)jk / Σlm (C⁻¹)lm

13
3.2.Computing the weight matrix (cont'd)
  • The sum of the weights for each point should be 1
  • This constraint can be enforced in different ways,
    e.g.
  • constraint satisfaction
  • a neural network
  • a genetic algorithm!
  • eigenvectors (the word comes from the German
    Eigenvektoren)
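A NumPy sketch of this constrained least-squares step; the regularization term and the name lle_weights are assumptions made here (the regularizer keeps the local Gram matrix invertible when K > D).

    import numpy as np

    def lle_weights(X, neighbors, reg=1e-3):
        """For each i, minimize | Xi - sum_j Wij Xj |^2
        subject to sum_j Wij = 1, with j over the K neighbors of i."""
        N, K = neighbors.shape
        W = np.zeros((N, N))
        for i in range(N):
            Z = X[neighbors[i]] - X[i]           # neighbors shifted so Xi is the origin
            C = Z @ Z.T                          # local K x K Gram matrix
            C += reg * np.trace(C) * np.eye(K)   # regularize in case K > D
            w = np.linalg.solve(C, np.ones(K))   # solve C w = 1
            W[i, neighbors[i]] = w / w.sum()     # enforce the sum-to-one constraint
        return W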

14
3.3.Computing the projections
  • Every original data point Xi is mapped to a low
    dimensional vector Yi
  • The Yi are computed using the Wij from the last
    step, by minimizing the following embedding cost
    function
  • Φ(Y) = Σi | Yi − Σj Wij Yj |²

15
3.3.Computing the projections (mathematical stuff)
  • The minimum of Φ(Y) is found from the bottom d+1
    eigenvectors of the sparse matrix M = (I − W)ᵀ(I − W)
  • The bottom eigenvector (constant, with eigenvalue
    0) is discarded; the remaining d eigenvectors give
    the embedding coordinates Yi
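A NumPy sketch of this projection step; a dense eigensolver is used for clarity, though in practice a sparse solver exploits the sparsity of M. The name lle_embed is an assumption made here.

    import numpy as np

    def lle_embed(W, d):
        """Minimize Phi(Y) = sum_i | Yi - sum_j Wij Yj |^2 via the
        bottom eigenvectors of M = (I - W)^T (I - W)."""
        N = W.shape[0]
        I = np.eye(N)
        M = (I - W).T @ (I - W)
        _, vecs = np.linalg.eigh(M)   # eigenvalues in ascending order
        # Discard the bottom (constant) eigenvector with eigenvalue 0;
        # the next d eigenvectors are the embedding coordinates Yi.
        return vecs[:, 1:d + 1]

Chaining the three sketches gives the full pipeline: Y = lle_embed(lle_weights(X, knn_indices(X, 12)), 2).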

16
3.4.Complexity
  • Step 1: worst case O(DN²); with K-D trees
    O(N log N)
  • Step 2: constrained least squares, O(DNK³)
  • Step 3: eigenvector computation, O(dN²)
  • Where
  • D: the dimensionality of the original data
  • N: the number of data points
  • K: the number of neighbors per point
  • d: the dimensionality of the embedded data

17
4.1.Example 1
18
4.2.Example 2
19
4.3.The effect of the number of neighbors
20
4.3.The effect of the number of neighbors (cont'd)
21
5.Final
  • When the distances between the points are the
    important factor in your pattern recognition
    algorithm,
  • and you do not want the algorithm to take hours on
    a slow serial machine,
  • then apply LLE before you start!

22
6.References
  • L.K. Saul and S.T. Roweis: An Introduction to
    Locally Linear Embedding
  • L.K. Saul and S.T. Roweis: Think Globally, Fit
    Locally: Unsupervised Learning of Low Dimensional
    Manifolds. Journal of Machine Learning Research 4
    (2003) 119-155
  • Mathematical background: Lineare Algebra, lecture
    notes (Skript) by Prof. Dr. Vogt, WS 02/03,
    Universität Osnabrück