Title: Multi-view image stitching
Slide 1: Multi-view image stitching
- Guimei Zhang
- MESA (Mechatronics, Embedded Systems and Automation) LAB, School of Engineering,
- University of California, Merced
- Email: guimei.zh@163.com  Phone: 209-658-4838
- Lab: CAS Eng 820 (T 228-4398)
June 16, 2014, Monday 4:00-6:00 PM, Applied Fractional Calculus Workshop Series @ MESA Lab @ UC Merced
Slide 2: Introduction
- Why work on stitching?
- Generate one panoramic image from a series of smaller, overlapping images.
- The stitched image can also have higher resolution than a panoramic image acquired by a panoramic camera.
- In addition, a panoramic camera is more expensive.
Slide 3: Introduction
Slide 4: Introduction
- Applications
- Interactive panoramic viewing of images, architectural walk-throughs, multi-node movies, and other applications associated with modelling the 3D environment from images acquired in the real world (digital surface models, digital terrain models, true orthophotos, and full 3D models).
Slide 5: Introduction
True orthophoto and full 3D models (figure)
Slide 6: Introduction
- What is multi-view?
- Capturing images at different times, from different viewpoints, or with different sensors, such as a camera, laser scanner, radar, or multispectral camera.
Slide 7: 2. Method
Flowchart of producing a panoramic image (figure)
Slide 8: Introduction
- The main work is as follows:
- 1. Image acquisition
- 2. How to perform effective image registration
- 3. How to perform image merging
Slide 9: Introduction
- Image acquisition
- Use one camera, at different times or from different viewpoints, to capture images; the images are then related by a rotation, a translation, or both. (R, T)
- Use several cameras located at different viewpoints to capture images. (R, T)
- Use different sensors, such as a camera, laser scanner, radar, or multispectral scanner (fuse multi-sensor information).
Slide 10: Introduction
Geometry of overlapping images (figure)
Camera and tripod for acquisition by camera rotations (figure)
We need to perform a coordinate transformation.
Slide 11: Introduction
- Since the orientation of each imaging plane differs across acquisitions, the acquired images need to be projected onto a common surface, such as the surface of a cylinder or a sphere, before image registration can be performed.
- That means we have to perform a coordinate transformation (see the sketch below).
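A minimal sketch of such a projection, assuming a cylindrical surface, a camera rotating about a vertical axis, and a known focal length in pixels; the function name cylindrical_warp and the parameter focal_px are illustrative, not taken from the slides.

```python
import cv2
import numpy as np

def cylindrical_warp(img, focal_px):
    """Project an image onto a cylinder of radius focal_px (in pixels).

    Assumes the optical axis passes through the image centre and that the
    camera only rotates about a vertical axis between shots.
    """
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    ys, xs = np.indices((h, w), dtype=np.float32)   # destination pixel grid
    theta = (xs - cx) / focal_px                    # cylinder angle
    height = (ys - cy) / focal_px                   # cylinder height
    # Corresponding coordinates on the original (planar) image.
    x_src = (focal_px * np.tan(theta) + cx).astype(np.float32)
    y_src = (focal_px * height / np.cos(theta) + cy).astype(np.float32)
    # Pixels that fall outside the source image are left black.
    return cv2.remap(img, x_src, y_src, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# Usage (illustrative): warped = cylindrical_warp(cv2.imread("frame05.png"), focal_px=700.0)
```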
Slide 12: Introduction
- Image registration
- To form a larger image from a set of overlapping images, it is necessary to find the transformation matrix (usually a rigid transformation, with only the parameters R and T) that aligns the images. Image registration aims to find this transformation matrix for two or more overlapping images; it is well defined because the projection from the viewpoint through any position in the aligned images into the 3D world is unique. A minimal registration sketch is given below.
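As a concrete illustration of feature-based registration, here is a minimal Python/OpenCV sketch. ORB is used only as a stand-in detector and estimateAffinePartial2D fits a similarity model (rotation, translation, uniform scale); the slides do not state which detector or estimator is actually used.

```python
import cv2
import numpy as np

def register_frames(ref, mov):
    """Estimate a 2x3 [R|T]-style matrix that maps `mov` onto `ref`."""
    orb = cv2.ORB_create(2000)                      # stand-in feature detector
    kp_r, des_r = orb.detectAndCompute(ref, None)
    kp_m, des_m = orb.detectAndCompute(mov, None)

    # Brute-force matching of binary descriptors with cross-checking.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_m, des_r)
    src = np.float32([kp_m[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches])

    # Similarity model; RANSAC drops outlier matches.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M

# Usage (illustrative):
#   M = register_frames(frame5, frame6)
#   aligned6 = cv2.warpAffine(frame6, M, (frame5.shape[1], frame5.shape[0]))
```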
Slide 13: Introduction
5th frame image (figure)
6th frame image (figure)
Registered image (figure)
Slide 14: Multi-view point cloud registration and stitching based on SIFT features
- Motivation
- Method
- Experiments
- Conclusion
- Discussion
SIFT: scale-invariant feature transform
Slide 15: 1. Motivation
- Problems with existing methods for multi-view point cloud registration in large scenes:
- Restricted by the camera's viewing angle, images from a single viewpoint or from two viewpoints can only capture local information about the scene.
Slide 16: 1. Motivation
- Existing methods need to add special markers to large reconstruction scenes.
- Existing methods also need iterative ICP (iterative closest point) computation, and cannot eliminate the interference of holes and invalid 3D point clouds.
Slide 17: 2. Method
- Our work builds on Bendels' work [8] and puts forward a new algorithm for multi-view registration and stitching.
- 1. Generate a 2D texture image of the effective point clouds.
- 2. Extract SIFT features and match them in the 2D effective texture images.
Slide 18: 2. Method
- 3. Then we map the extracted SIFT features and their matching relationships into the 3D point cloud data, obtaining the features and matching relationships of the multi-view 3D point clouds.
- 4. Finally, we achieve multi-view point cloud stitching.
Slide 19: 2. Method
- 2.1 Generating the texture image of effective point cloud data
- Why: 3D point clouds inevitably contain holes and noise. To reduce their effect on the registration and stitching precision of multi-view point clouds, we use a mutual mapping between the 3D point clouds and the 2D texture image to obtain the texture image of the effective point cloud data.
Slide 20: 2. Method
- 2.1 Generating the texture image of effective point cloud data
- How: First, we project the 3D point clouds onto a 2D plane; second, we apply 8-neighborhood seed filling and area filling to the binary projection image, so that we obtain the projection image of the effective point cloud data (a sketch follows below).
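A rough sketch of this step, assuming an orthographic projection onto the XY plane and a fixed grid size; SciPy's binary_fill_holes with an 8-connected structuring element stands in for the seed/area filling described above, and the function name effective_mask is illustrative.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes, generate_binary_structure

def effective_mask(points_xyz, grid=512):
    """Binary projection image of the 'effective' (hole-filled) region.

    points_xyz is an (N, 3) array; the orthographic XY projection and the
    grid size are assumptions, not details given in the slides.
    """
    xy = points_xyz[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    # Normalise point coordinates to integer pixel positions on the grid.
    pix = ((xy - lo) / (hi - lo) * (grid - 1)).astype(int)

    mask = np.zeros((grid, grid), dtype=bool)
    mask[pix[:, 1], pix[:, 0]] = True               # 1 where a 3D point projects

    # Fill interior holes using 8-connectivity, standing in for the
    # 8-neighbourhood seed filling / area filling described above.
    struct8 = generate_binary_structure(2, 2)
    return binary_fill_holes(mask, structure=struct8)
```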
Slide 21: 2. Method
2.1 Generating the texture image of effective point cloud data
Texture image of the effective point cloud data of a scene (figure)
Slide 22: 2. Method
- 2.2 SIFT feature extraction and matching
- Extract SIFT features
- SIFT is a local feature proposed by D. Lowe [7]; the extracted SIFT features are invariant under translation, scale, and rotation. This work uses the SIFT algorithm to extract 2D features, then uses the RANSAC method [9] to eliminate mismatches (see the sketch below).
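A minimal OpenCV sketch of this step: SIFT keypoints are matched with Lowe's ratio test, and RANSAC, here fitted on a homography model (an assumption on the slides' behalf), flags and removes the mismatches.

```python
import cv2
import numpy as np

def sift_matches_ransac(tex_a, tex_b):
    """Match SIFT features between two effective texture images and keep
    only the matches that survive RANSAC."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(tex_a, None)
    kp_b, des_b = sift.detectAndCompute(tex_b, None)

    # Nearest-neighbour matching with Lowe's ratio test.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # RANSAC on a homography model flags the geometrically consistent matches.
    _, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    keep = inlier_mask.ravel().astype(bool)
    return pts_a[keep], pts_b[keep]
```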
Slide 23: 2. Method
- 2.3 3D feature point extraction
- Each pixel of the effective texture image, obtained by point cloud texture mapping, has a one-to-one correspondence with a 3D point [10], as shown in the figure.
Correspondence relationship (figure)
Slide 24: 2. Method
- The method is as follows:
- (1) We extract SIFT feature points in the 2D texture image, then calculate the coordinates of each point.
- (2) Because the 2D feature points and the 3D point clouds have a one-to-one correspondence, we can compute the coordinates of the corresponding feature points in the 3D point clouds (see the sketch below).
Slide 25: 2. Method
- 2.4 3D point cloud stitching
- Multi-view 3D point cloud stitching transforms point clouds that lie in different coordinate systems into a common one; the main problem is to estimate the coordinate transformation R (rotation matrix) and T (translation vector).
Slide 26: 2. Method
- From the matching point pairs obtained in the above step, we can estimate the coordinate transformation, that is, estimate the parameters R and T that minimize the objective function
f(R, T) = Σ_i || q_i - (R p_i + T) ||^2
where p_i and q_i are matching point pairs of the 3D point clouds from two consecutive viewpoints (a closed-form sketch follows below).
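The standard closed-form (SVD/Kabsch) solution of this least-squares problem is sketched below; the slides do not state which solver the authors use, so this is only one common choice.

```python
import numpy as np

def estimate_rigid_transform(p, q):
    """Least-squares R, T minimising sum_i || q_i - (R p_i + T) ||^2.

    p, q: (N, 3) arrays of matched 3D points from two consecutive viewpoints.
    """
    pc, qc = p.mean(axis=0), q.mean(axis=0)         # centroids
    H = (p - pc).T @ (q - qc)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = qc - R @ pc
    return R, T

# Usage (illustrative):
#   R, T = estimate_rigid_transform(p3d_a, p3d_b)
#   cloud_a_in_b = (R @ p3d_a.T).T + T              # bring cloud A into B's frame
```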
Slide 27: 3. Experiments
(a) SIFT features on effective texture image 1 (figure)
(b) SIFT features on effective texture image 2 (figure)
(c) SIFT features of 3D point cloud 1 (figure)
(d) SIFT features of 3D point cloud 2 (figure)
Slide 28: 3. Experiments
Experimental results: comparison of Ref. [8], Ref. [1], and our method (figures)
Slide 29: 3. Experiments
- Performance evaluation criteria
- Accuracy: registration rate and stitching error rate
- Efficiency: time consumed
Slide 30: 3. Experiments
Slide 33: 4. Conclusion
- We use SIFT features of the effective texture image to achieve registration and stitching of dense multi-view point clouds. The texture image of the effective point clouds is obtained through a mutual mapping between the 3D point clouds and the 2D texture image; this eliminates the interference of holes and invalid point clouds.
Slide 34: 4. Conclusion
- Ensuring that an effective 3D feature corresponding to every 2D feature can be found in the 3D point clouds eliminates unnecessary mismatches, so matching efficiency and matching precision are improved.
Slide 35: 4. Conclusion
- 3. Our algorithm uses the correct matching point pairs to stitch, so it avoids the stepwise iteration of the ICP algorithm and decreases the computational complexity of matching; it also reduces the stitching error caused by mismatches.