Title: Template Learning From Atomic Representations
1. Template Learning from Atomic Representations
A Wavelet-based Approach to Pattern Analysis
Clay Scott and Rob Nowak
Electrical and Computer Engineering, Rice University
www.dsp.rice.edu
Supported by ARO, DARPA, NSF, and ONR
2. The Discrete Wavelet Transform
- prediction errors → wavelet coefficients
- most wavelet coefficients are zero → sparse representation
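The sparsity claim above can be seen with a one-level Haar transform, where the detail coefficients are exactly the local "prediction errors." This is a minimal sketch (a toy piecewise-constant signal, not the 2-D image transform used in the talk):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: local averages (approximation) and
    local differences (detail, i.e. prediction errors)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# A piecewise-constant signal: smooth everywhere except one jump.
x = np.concatenate([np.ones(31), 3.0 * np.ones(33)])
approx, detail = haar_dwt(x)

# Only the detail coefficient straddling the jump is nonzero -> sparsity.
n_nonzero = int(np.count_nonzero(np.abs(detail) > 1e-12))
print(n_nonzero)  # → 1
```

Smooth regions predict their neighbors perfectly, so their detail coefficients vanish; only the jump survives, which is why wavelets give sparse representations of piecewise-smooth signals.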
3. Wavelets as Atomic Representations
- Atomic representations attempt to decompose images into fundamental units, or atoms
- Examples: wavelets, curvelets, wedgelets, DCT
- Successes: denoising and compression
- Drawback: not transformation-invariant → poor features for pattern recognition
4. Pattern Recognition
Class 1
Class 2
Class 3
5. Hierarchical Framework
- Noisy observation of transformed pattern
- Random transformation of pattern
- Pattern template in spatial domain
- Realization from wavelet-domain statistical model
6. Wavelet-Domain Statistical Model
- Sparsity → can divide wavelet coefficients into significant and insignificant coefficients
- Model wavelet coefficients as independent Gaussian mixtures, where the state s_k indicates whether coefficient k is significant
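The two-state model above can be sampled directly. This is a hedged sketch: the state map `s`, means `theta`, and the two standard deviations are illustrative values, not the parameters from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                                              # toy number of coefficients
s = np.array([1, 1, 0, 0, 0, 1, 0, 0])             # state map: 1 = significant (assumed notation)
theta = np.array([5.0, -4.0, 0, 0, 0, 3.0, 0, 0])  # template means for significant coefficients
sigma_sig, sigma_insig = 1.0, 0.1                  # per-state standard deviations (assumed)

# Each coefficient is an independent Gaussian whose mean and variance
# depend on its state: significant -> N(theta_k, sigma_sig^2),
# insignificant -> N(0, sigma_insig^2).
w = np.where(s == 1,
             rng.normal(theta, sigma_sig),
             rng.normal(0.0, sigma_insig, size=n))
```

Drawing each coefficient independently from a state-conditional Gaussian is what makes the later maximization steps decouple coefficient by coefficient.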
7. Model Parameters
- A finite set of pre-selected transformations models variability in location and orientation
8. Pattern Synthesis
1. Generate a random template
2. Transform to the spatial domain
3. Apply a random transformation
4. Add observation noise
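The four synthesis steps above can be sketched end to end in one dimension. Everything here is a toy stand-in: a one-level Haar inverse transform for the wavelet-to-spatial step, and a circular shift for the random transformation.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_idwt(approx, detail):
    """Invert one level of the Haar DWT."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

# 1. Generate a random template in the wavelet domain: a few significant
#    detail coefficients, the rest exactly zero (sparse template).
s = np.zeros(16)
s[:3] = 1
detail = np.where(s == 1, rng.normal(4.0, 1.0, 16), 0.0)
approx = np.full(16, 2.0)          # coarse approximation of the pattern

# 2. Transform to the spatial domain.
template = haar_idwt(approx, detail)

# 3. Apply a random transformation (a circular shift stands in for translation).
shift = int(rng.integers(0, len(template)))
transformed = np.roll(template, shift)

# 4. Add observation noise.
observation = transformed + rng.normal(0.0, 0.1, size=len(template))
```

Running the pipeline many times with fresh shifts and noise produces exactly the kind of training set the template-learning step below has to cope with.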
9. Template Learning
Given: independent observations of the same pattern, arising from (unknown) transformations
Goal: find θ, s, and γ that best describe the observations
Approach: penalized maximum likelihood estimation (PMLE)
10. PMLE of θ, s, and γ
- Complexity penalty function proportional to the number of significant coefficients → minimum description length (MDL) criterion
- Complexity regularization → find a low-dimensional template that captures the essential structure of the pattern
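A minimal sketch of an MDL-style penalty of the kind described above; the constant (half a log per retained coefficient, a standard MDL coding cost) is an assumption of this sketch, not the talk's exact penalty:

```python
import numpy as np

def mdl_penalty(s, n):
    """MDL-style complexity penalty: grows with the number of significant
    coefficients |s|, charging (1/2) log n per retained parameter
    (the exact constant is an assumption of this sketch)."""
    k = int(np.sum(s))
    return 0.5 * k * np.log(n)

# Three significant coefficients out of eight.
s = np.array([1, 0, 0, 1, 1, 0, 0, 0])
pen = mdl_penalty(s, len(s))
```

Subtracting such a penalty from the log-likelihood makes each extra significant coefficient pay for itself, which is what drives the estimate toward a low-dimensional template.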
11. TEMPLAR: Template Learning from Atomic Representations
- Simultaneously maximizing F over θ, s, and γ is intractable
- Instead, maximize F with an alternating-maximization algorithm
→ non-decreasing sequence of penalized likelihood values
→ each step is simple, with O(NLT) complexity
→ converges to a fixed point (no cycling)
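The alternating-maximization idea can be illustrated with a stripped-down 1-D version: fix the template and pick each observation's best transformation, then fix the transformations and re-estimate the template. This sketch drops the wavelet model and MDL penalty and uses circular shifts as the transformation set, so it is the spirit of the algorithm, not TEMPLAR itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: L noisy, circularly shifted copies of an unknown bump pattern.
n, L = 64, 20
truth = np.zeros(n)
truth[10:20] = 1.0
obs = [np.roll(truth, int(rng.integers(0, n))) + rng.normal(0.0, 0.1, n)
       for _ in range(L)]

template = obs[0].copy()            # crude initialization
for _ in range(5):
    # Step 1: template fixed -> estimate each transformation by choosing
    # the circular shift that maximizes correlation with the template.
    shifts = [int(np.argmax([np.dot(np.roll(x, -k), template)
                             for k in range(n)]))
              for x in obs]
    # Step 2: transformations fixed -> update the template as the average
    # of the aligned observations (the ML estimate under Gaussian noise).
    template = np.mean([np.roll(x, -k) for x, k in zip(obs, shifts)], axis=0)
```

Each step can only increase the (here unpenalized) likelihood, giving the non-decreasing sequence noted above; the learned template recovers the bump up to an arbitrary circular shift.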
12. Airplane Experiment
Picture of me gathering data
13. Airplane Experiment
- 853 significant coefficients out of 16,384
- 7 iterations
14. Face Experiment
Training data for one subject, plus the sequence of template convergence
15. Why Does TEMPLAR Work?
- The wavelet-domain model for the template is low-dimensional (from the MDL penalty and the inherent sparseness of wavelets)
- A low-dimensional template allows improved pattern matching by giving more weight to distinguishing features
16. Classification
Given: templates for several patterns and an unlabeled observation x
Classify: assign x to the template with the highest likelihood
- Invariant to unknown transformations
- O(NT) complexity
- Sparsity → low-dimensional subspace classifier → robust to background clutter
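A transformation-invariant matcher of the kind described above can be sketched in a few lines. This is a simplified stand-in: matching by maximum correlation over circular shifts, with two hypothetical templates "A" and "B" rather than learned TEMPLAR templates.

```python
import numpy as np

rng = np.random.default_rng(3)

def classify(x, templates):
    """Assign x to the template that matches best over all circular
    shifts, a simplified transformation-invariant classifier."""
    n = len(x)
    scores = {label: max(float(np.dot(np.roll(x, -k), t)) for k in range(n))
              for label, t in templates.items()}
    return max(scores, key=scores.get)

# Two hypothetical class templates (toy 1-D patterns).
n = 32
t_a = np.zeros(n); t_a[:8] = 1.0                   # class "A": one wide bump
t_b = np.zeros(n); t_b[:4] = 1.0; t_b[16:20] = 1.0 # class "B": two narrow bumps
templates = {"A": t_a, "B": t_b}

# An unlabeled observation: class A under an unknown shift, plus noise.
x = np.roll(t_a, 13) + rng.normal(0.0, 0.05, n)
label = classify(x, templates)  # → "A"
```

Because the match score is maximized over the transformation set, the classifier is invariant to the unknown shift; restricting the score to each template's significant coefficients is what yields the low-dimensional subspace classifier and its robustness to clutter.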
17. Face Recognition
Results of the Yale face test
18. Image Registration
If I get results
19. Conclusion
- Wavelet-based framework for representing pattern observations with unknown rotation and translation
- TEMPLAR: a linear-time algorithm for automatically learning low-dimensional templates using MDL
- Low-dimensional subspace classifiers that are invariant to spatial transformations and background clutter