Title: Feature matching and tracking (Class 5)
1 Feature matching and tracking (Class 5)
- Read Section 4.1 of the course notes
  http://www.cs.unc.edu/marc/tutorial/node49.html
- Read Shi and Tomasi's paper on good features to track
  http://www.unc.edu/courses/2004fall/comp/290/089/papers/shi-tomasi-good-features-cvpr1994.pdf
- Read Lowe's paper on SIFT features
  http://www.unc.edu/courses/2004fall/comp/290/089/papers/Lowe_ijcv03.pdf
2 Feature matching vs. tracking
Image-to-image correspondences are key to passive, triangulation-based 3D reconstruction.
- Matching: extract features independently in each image, then match them by comparing descriptors.
- Tracking: extract features in the first image, then try to find the same features again in the next view.
What is a good feature?
3 Comparing image regions
- Compare intensities pixel-by-pixel between two image windows I(x,y) and I'(x,y)
- Dissimilarity measure: Sum of Squared Differences,
  SSD = Σ_{x,y} (I'(x,y) − I(x,y))²
4 Comparing image regions
- Compare intensities pixel-by-pixel between two image windows I(x,y) and I'(x,y)
- Similarity measure: Zero-mean Normalized Cross-Correlation,
  ZNCC = Σ_{x,y} (I(x,y) − Ī)(I'(x,y) − Ī') / sqrt( Σ_{x,y} (I(x,y) − Ī)² · Σ_{x,y} (I'(x,y) − Ī')² )
  (a short code sketch of both measures follows below)
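Below is a minimal NumPy sketch of the two measures above; the function names `ssd` and `zncc` and the patch handling are illustrative, not taken from the course materials.

```python
import numpy as np

def ssd(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Sum of squared differences: lower means more similar."""
    d = patch_a.astype(np.float64) - patch_b.astype(np.float64)
    return float(np.sum(d * d))

def zncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation: in [-1, 1], higher means more similar."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```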
5 Feature points
- Required properties:
  - Well-defined (i.e. neighboring points should all be different)
  - Stable across views (i.e. the same 3D point should be extracted as a feature from neighboring viewpoints)
6 Feature point extraction
Find points that differ as much as possible from all neighboring points.
(Figure: homogeneous region, edge, corner)
7 Feature point extraction
- Approximate the SSD for a small displacement Δ: take the per-pixel image difference (first-order Taylor expansion), square it and sum over the window, which gives
  SSD(Δ) ≈ Δᵀ M Δ,   M = Σ_window [ Ix²  IxIy ; IxIy  Iy² ]
8 Feature point extraction
(Figure: homogeneous region, edge, corner)
Find points for which the smallest eigenvalue of M, min(λ1, λ2), is maximum.
9 Harris corner detector
- Use a small local window
- Maximize cornerness (see the sketch below)
- Only use local maxima; subpixel accuracy through second-order surface fitting
- Select the strongest features over the whole image and over each tile (e.g. 1000/image, 2/tile)
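As a sketch of slides 7-9 (not the lecture's own code), the following computes the structure tensor M per pixel and the commonly used Harris cornerness det(M) − k·trace(M)²; the Gaussian window `sigma` and `k = 0.04` are assumed typical values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_cornerness(img: np.ndarray, sigma: float = 1.5, k: float = 0.04) -> np.ndarray:
    img = img.astype(np.float64)
    ix = sobel(img, axis=1)  # horizontal image gradient Ix
    iy = sobel(img, axis=0)  # vertical image gradient Iy
    # Entries of M, averaged over a small local window (Gaussian weighting)
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    # Harris cornerness: large at corners, negative along edges.
    # The Shi-Tomasi alternative is the smallest eigenvalue of M:
    #   0.5 * (trace - np.sqrt(trace**2 - 4.0 * det))
    return det - k * trace * trace
```

Local maxima of this map, thresholded and spread over image tiles, would give the corner list described above.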
10 Simple matching
- For each corner in image 1, find the corner in image 2 that is most similar (using SSD or NCC), and vice versa
- Only compare geometrically compatible points
- Keep mutual best matches (see the sketch below)
What transformations does this work for?
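A minimal sketch of the mutual-best-match rule, assuming a precomputed score matrix whose entry (i, j) is the similarity (e.g. NCC) between corner i in image 1 and corner j in image 2; the function name is illustrative.

```python
import numpy as np

def mutual_best_matches(scores: np.ndarray) -> list[tuple[int, int]]:
    best_for_1 = scores.argmax(axis=1)  # best image-2 corner for each image-1 corner
    best_for_2 = scores.argmax(axis=0)  # best image-1 corner for each image-2 corner
    # Keep pair (i, j) only if each is the other's best match
    return [(i, int(j)) for i, j in enumerate(best_for_1) if best_for_2[j] == i]
```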
11 Feature matching example
Pairwise similarity scores between candidate corners in the two images:
 0.96 -0.40 -0.16 -0.39  0.19
-0.05  0.75 -0.47  0.51  0.72
-0.18 -0.39  0.73  0.15 -0.75
-0.27  0.49  0.16  0.79  0.21
 0.08  0.50 -0.45  0.28  0.99
What transformations does this work for?
What level of transformation do we need?
12 Wide baseline matching
- Requirement: cope with larger variations between images
  - Geometric transformations: translation, rotation, scaling, foreshortening
  - Photometric changes: non-diffuse reflections, illumination
13 Invariant detectors
- Rotation invariant
- Scale invariant
- Affine invariant
14 Normalization
(or: how to use affine invariant detectors for matching)
15 Wide-baseline matching example
(Tuytelaars and Van Gool, BMVC 2000)
16 Lowe's SIFT features
(Lowe, ICCV 1999)
- Recover features with position, orientation and scale
17 Position
- Look for strong responses of the DoG (Difference-of-Gaussian) filter
- Only consider local maxima
18 Scale
- Look for strong responses of the DoG (Difference-of-Gaussian) filter over scale space
- Only consider local maxima in both position and scale (see the sketch below)
- Fit a quadratic around each maximum for subpixel accuracy
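A rough sketch of this detection step, under assumed parameter values (`num_scales`, `sigma0` and `contrast_thresh` are illustrative, and the quadratic refinement is omitted): build a Gaussian scale space, take differences, and keep points that are extrema among their neighbours in both position and scale.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_keypoints(img, num_scales=5, sigma0=1.6, k=2 ** 0.5, contrast_thresh=0.03):
    img = img.astype(np.float64)
    blurred = [gaussian_filter(img, sigma0 * k ** i) for i in range(num_scales)]
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(num_scales - 1)])
    # Local extrema over a 3x3x3 neighbourhood in (scale, y, x)
    is_max = dog == maximum_filter(dog, size=3)
    is_min = dog == minimum_filter(dog, size=3)
    strong = np.abs(dog) > contrast_thresh
    s, y, x = np.nonzero((is_max | is_min) & strong)
    return list(zip(x, y, s))  # (x, y, scale index)
```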
19 Orientation
- Create a histogram of local gradient directions, computed at the selected scale
- Assign the canonical orientation at the peak of the smoothed histogram
- Each key specifies stable 2D coordinates (x, y, scale, orientation)
20 Minimum contrast and cornerness
21 SIFT descriptor
- Thresholded image gradients are sampled over a 16x16 array of locations in scale space
- Create an array of orientation histograms
- 8 orientations x 4x4 histogram array = 128 dimensions (see the OpenCV sketch below)
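For reference, the whole SIFT detect-describe-match pipeline is available in OpenCV; below is a usage sketch (the image file names are placeholders, and `crossCheck=True` implements the mutual-best-match rule from slide 10).

```python
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)  # keypoints carry (x, y, scale, orientation)
kp2, desc2 = sift.detectAndCompute(img2, None)  # descriptors are 128-dimensional

matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)
```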
23 Matas et al.'s maximally stable regions
- Look for extremal regions
  http://cmp.felk.cvut.cz/matas/papers/matas-bmvc02.pdf
24 Mikolajczyk and Schmid LoG features
25 Feature tracking
- Identify features and track them over video
  - Small difference between frames
  - Potentially a large difference overall
- Standard approach: KLT (Kanade-Lucas-Tomasi)
26 Good features to track
- Use the same window for feature selection as for the tracking itself
- Compute the motion assuming it is small: differentiate the SSD and solve the resulting 2x2 linear system
- Affine motion is also possible, but a bit harder (6x6 instead of 2x2)
27 Example
A simple displacement is sufficient between consecutive frames, but not for comparing against the reference template.
28 Example
29 Synthetic example
30 Good features to keep tracking
- Perform an affine alignment between the first and last frame
- Stop tracking features with too large errors
31 Live demo
LKdemo
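A sketch in the spirit of OpenCV's lkdemo, using the built-in pyramidal Lucas-Kanade tracker; the video path and parameter values are placeholders.

```python
import cv2

cap = cv2.VideoCapture("video.avi")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Shi-Tomasi corners (smallest-eigenvalue criterion) as features to track
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                    winSize=(21, 21), maxLevel=3)
    pts = new_pts[status.ravel() == 1].reshape(-1, 1, 2)  # keep successfully tracked points
    prev_gray = gray
```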
32 Optical flow
- Brightness constancy assumption: I(x+u, y+v, t+1) = I(x, y, t)
- For small motion, linearizing gives Ix·u + Iy·v + It ≈ 0
- Possibility for iterative refinement
33 Optical flow
- Brightness constancy assumption (small motion): Ix·u + Iy·v + It ≈ 0
- The aperture problem: 1 constraint but 2 unknowns (u, v), so only the flow component normal to the isophote (along the image gradient) can be recovered
(Figure: isophote I(t) = I and isophote I(t+1) = I)
34 Optical flow
- How to deal with the aperture problem?
  - Use all color channels: 3 constraints per pixel, if the color gradients are different
  - Assume neighbors have the same displacement
35 Lucas-Kanade
- Assume neighbors have the same displacement
- Stack one constraint Ix·u + Iy·v = −It per pixel in the window and solve for (u, v) by least squares (see the sketch below)
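A minimal sketch of this least-squares step for a single window, assuming the spatial and temporal derivatives have already been computed; the names are illustrative.

```python
import numpy as np

def lucas_kanade_window(ix_win, iy_win, it_win):
    """ix_win, iy_win, it_win: Ix, Iy, It sampled over one local window."""
    A = np.stack([ix_win.ravel(), iy_win.ravel()], axis=1)  # one constraint row per pixel
    b = -it_win.ravel()
    # Least-squares solution of A [u, v]^T = b (equivalently the normal equations A^T A d = A^T b);
    # lstsq also copes with the rank-deficient (aperture problem) case.
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # displacement (u, v)
```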
36 Revisiting the small motion assumption
- Is this motion small enough?
- Probably not; it is much larger than one pixel (2nd-order terms dominate)
- How might we solve this problem?
(From Khurram Hassan-Shafique, CAP5415 Computer Vision, 2003)
37 Reduce the resolution!
(From Khurram Hassan-Shafique, CAP5415 Computer Vision, 2003)
38 Coarse-to-fine optical flow estimation
(slides from Bradski and Thrun)
39 Coarse-to-fine optical flow estimation
(slides from Bradski and Thrun)
Run iterative L-K at each pyramid level, warping with the flow estimated at the coarser level (see the sketch below).
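A schematic coarse-to-fine loop under stated assumptions: `lk_flow(I, J)` stands for any single-level dense estimator (e.g. iterative Lucas-Kanade) returning an (H, W, 2) flow field, and the level count and interpolation choices are illustrative.

```python
import cv2
import numpy as np

def warp_with_flow(img, flow):
    """Warp img so that the result at x equals img(x + flow(x))."""
    h, w = img.shape[:2]
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    return cv2.remap(img, gx + flow[..., 0], gy + flow[..., 1], cv2.INTER_LINEAR)

def coarse_to_fine_flow(img1, img2, lk_flow, levels=4):
    pyr1, pyr2 = [img1], [img2]
    for _ in range(levels - 1):                      # Gaussian pyramids, coarsest last
        pyr1.append(cv2.pyrDown(pyr1[-1]))
        pyr2.append(cv2.pyrDown(pyr2[-1]))
    flow = np.zeros((*pyr1[-1].shape[:2], 2), np.float32)
    for a, b in zip(reversed(pyr1), reversed(pyr2)):  # coarsest to finest
        if flow.shape[:2] != a.shape[:2]:
            # upsample the coarser-level flow and rescale the displacements
            flow = 2.0 * cv2.resize(flow, (a.shape[1], a.shape[0]))
        flow += lk_flow(a, warp_with_flow(b, flow))   # refine at this resolution
    return flow
```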
40 Next class: triangulation and reconstruction
(Figure: triangulation geometry with image point m1, camera center C1 and viewing ray L1)
Triangulation
- calibration
- correspondences