1
Multi-view image stitching
  • Guimei Zhang
  • MESA (Mechatronics, Embedded Systems and Automation) Lab
  • School of Engineering, University of California, Merced
  • E-mail: guimei.zh@163.com  Phone: 209-658-4838
  • Lab: CAS Eng 820 (T: 228-4398)

June 16, 2014, Monday, 4:00-6:00 PM. Applied
Fractional Calculus Workshop Series @ MESA Lab @
UC Merced
2
Introduction
  • Why work on stitching?
  • To generate one panoramic image from a series
    of smaller, overlapping images.
  • The stitched image can also be of higher
    resolution than one acquired by a panoramic
    camera; moreover, a panoramic camera is more
    expensive.

3
Introduction
4
Introduction
  • Applications
  • Interactive panoramic viewing of images,
    architectural walk-throughs, multi-node
    movies, and other applications that model the
    3D environment from images acquired in the
    real world (digital surface models / digital
    terrain models / true orthophotos and full 3D
    models)

5
Introduction
true orthophoto and full 3D models
6
Introduction
  • What is multi-view?
  • Capturing images at different times, from
    different viewpoints, or with different
    sensors, such as cameras, laser scanners,
    radar, or multispectral cameras.

7
2. Method
The flowchart of producing a panoramic image
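The flowchart itself is an image and was not transcribed. As a minimal sketch of the final merging step only, assuming two already-registered grayscale views with a known column overlap (the function `stitch_pair` and the fixed 50/50 blend are illustrative, not the deck's method):

```python
import numpy as np

def stitch_pair(left, right, overlap):
    """Stitch two registered images sharing `overlap` columns,
    averaging the shared region (a simple blend)."""
    h = left.shape[0]
    out_w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, out_w), dtype=float)
    out[:, :left.shape[1]] = left
    out[:, left.shape[1]:] = right[:, overlap:]
    # Blend the overlap: average the two exposures where they coincide.
    out[:, left.shape[1] - overlap:left.shape[1]] = (
        left[:, -overlap:] + right[:, :overlap]) / 2.0
    return out

left = np.full((2, 4), 10.0)
right = np.full((2, 4), 20.0)
pano = stitch_pair(left, right, overlap=2)
print(pano.shape)   # (2, 6)
print(pano[0])      # [10. 10. 15. 15. 20. 20.]
```

Real stitchers use feathered or multi-band blending rather than a flat average, but the bookkeeping is the same.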
8
Introduction
  • The main work is as follows:
  • 1. Image acquisition
  • 2. How to perform effective image registration
  • 3. How to accomplish image merging

9
Introduction
  • Image acquisition
  • Use one camera, at different times or different
    viewpoints, to capture images, so there is a
    rotation transformation, a translation
    transformation, or both. (R, T)
  • Use several cameras located at different
    viewpoints to capture images. (R, T)
  • Use different sensors, such as a camera, laser
    scanner, radar, or multispectral scanner.
    (fuse multi-sensor information)

10
Introduction
Geometry of overlapping images
Camera and tripod for acquisition by camera
rotations
Need to perform coordinate transformation
11
Introduction
  • Since the orientation of each imaging plane
    differs between acquisitions, the acquired
    images need to be projected onto a common
    surface, such as the surface of a cylinder or
    a sphere, before image registration can be
    performed.
  • That means we have to perform a coordinate
    transformation.
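A common choice is cylindrical projection. The sketch below is a hedged illustration rather than the deck's actual method: it maps pixel coordinates of a pinhole image onto a cylinder whose radius equals the focal length f (in pixels); the helper name is hypothetical.

```python
import numpy as np

def cylindrical_coords(x, y, f, cx, cy):
    """Map pixel (x, y) of a pinhole image to cylindrical coordinates
    (f*theta, f*h) on a cylinder of radius f centred on the optical axis."""
    xc, yc = x - cx, y - cy
    theta = np.arctan2(xc, f)           # angle around the cylinder
    h = yc / np.sqrt(xc**2 + f**2)      # height on the cylinder
    return f * theta, f * h

# The principal point maps to the cylinder origin.
u, v = cylindrical_coords(320.0, 240.0, f=500.0, cx=320.0, cy=240.0)
print(u, v)  # 0.0 0.0
```

After this warp, images taken by rotating the camera about its vertical axis differ only by a horizontal shift, which simplifies registration.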

12
Introduction
  • Image registration
  • To form a larger image from a set of
    overlapping images, it is necessary to find
    the transformation matrix (usually a rigid
    transformation, with only the parameters R
    and T) that aligns the images. Image
    registration aims to find this matrix for two
    or more overlapping images; it is well
    defined because the projection (from the
    viewpoint through any position in the aligned
    images into the 3D world) is unique.
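When matched point pairs are already known, the 2D rigid parameters R and T can be estimated in closed form by least squares. A minimal numpy sketch (the helper `rigid_2d` is illustrative, not from the slides):

```python
import numpy as np

def rigid_2d(p, q):
    """Least-squares 2D rigid transform (R, t) with q_i ~ R @ p_i + t.
    Closed form: the rotation angle comes from the centred cross/dot sums."""
    pc, qc = p - p.mean(0), q - q.mean(0)
    theta = np.arctan2((pc[:, 0] * qc[:, 1] - pc[:, 1] * qc[:, 0]).sum(),
                       (pc * qc).sum())
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = q.mean(0) - R @ p.mean(0)
    return R, t

# Recover a known 30-degree rotation plus translation from noise-free matches.
rng = np.random.default_rng(0)
p = rng.standard_normal((20, 2))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
q = p @ R_true.T + np.array([2.0, -1.0])
R, t = rigid_2d(p, q)
print(np.allclose(R, R_true), np.allclose(t, [2.0, -1.0]))  # True True
```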

13
Introduction
5th frame image
6th frame image
Registered image
14
Multi-view point cloud registration and
stitching based on SIFT features
  1. Motivation
  2. Method
  3. Experiments
  4. Conclusion
  5. Discussion

SIFT: scale-invariant feature transform
15
1. Motivation
  • Problems (existing methods for multi-view
    point cloud registration in large scenes):
  • Restricted by the camera's view angle, images
    from a single viewpoint or two viewpoints can
    only capture local information about the scene.

16
1. Motivation
  1. Existing methods need to add special markers
    to large reconstruction scenes.
  2. Existing methods also need iterative ICP
    (iterative closest point) computation, and
    cannot eliminate the interference of holes
    and invalid 3D point clouds.

17
2. Method
  • Our work, based on Bendels' work [8], puts
    forward a new algorithm for multi-view
    registration and stitching:
  • 1. Generate a 2D texture image of the
    effective point clouds.
  • 2. Extract SIFT features and match them in the
    2D effective texture image.

18
2. Method
  • 3. Then we map the extracted SIFT features and
    their matching relationships onto the 3D
    point cloud data, obtaining features and
    matching relationships for the multi-view 3D
    point clouds.
  • 4. Finally, we stitch the multi-view point
    clouds.

19
2. Method
  • 2.1 Generating the texture image of effective
    point cloud data
  • Why: 3D point clouds inevitably contain holes
    and noise. To reduce their effect on the
    registration and stitching precision of
    multi-view point clouds, we use mutual
    mapping between the 3D point clouds and the
    2D texture image to obtain a texture image of
    the effective point cloud data.

20
2. Method
  • 2.1 Generating the texture image of effective
    point cloud data
  • How: First, we project the 3D point cloud onto
    a 2D plane; second, we apply 8-neighborhood
    seed filling and area filling to the
    projection binary image, obtaining the
    projection of the effective point cloud data.
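The filling step can be sketched as follows, assuming the projection binary image is already available. The strategy here (flood the background from a border seed with 8-neighborhood connectivity, then fill whatever is neither background nor foreground) is one plausible reading of the slide's "8-neighborhood seed filling and area filling", not a transcription of the paper's code:

```python
import numpy as np

def fill_holes_8(mask):
    """Fill interior holes in a binary projection mask: flood the background
    from a border seed using 8-neighbourhood connectivity; any pixel that is
    neither flooded background nor foreground is a hole and gets filled."""
    h, w = mask.shape
    outside = np.zeros_like(mask, dtype=bool)
    stack = [(0, 0)]  # assumes the corner pixel is background
    while stack:
        r, c = stack.pop()
        if r < 0 or r >= h or c < 0 or c >= w or outside[r, c] or mask[r, c]:
            continue
        outside[r, c] = True
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    stack.append((r + dr, c + dc))
    return mask | ~outside

m = np.array([[0, 0, 0, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 1, 0, 1, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 0, 0]], dtype=bool)
filled = fill_holes_8(m)
print(int(filled.sum()))  # 9  (the 3x3 square, hole filled)
```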

21
2. Method
2.1 Generating the texture image of effective
point cloud data
Texture image of the effective point cloud data
of a scene
22
2. Method
  • 2.2 Extraction and matching of SIFT features
  • Extract SIFT features
  • SIFT is a local feature proposed by David
    Lowe [7]; the extracted SIFT features are
    invariant under translation, scale, and
    rotation. This paper uses the SIFT algorithm
    to extract 2D features, then uses the RANSAC
    method [9] to eliminate erroneous matches.
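SIFT extraction itself requires a library (e.g. OpenCV's `cv2.SIFT_create`), so the sketch below covers only the RANSAC rejection stage, simplified to a pure-translation motion model on synthetic matches; the helper and parameters are illustrative:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=0.5, seed=1):
    """Return the translation supported by the most matches, plus the
    inlier mask. Minimal sample size for a translation model is one match."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))                  # draw a minimal sample
        t = dst[i] - src[i]                         # hypothesised translation
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers for the final estimate.
    return (dst[best_inliers] - src[best_inliers]).mean(axis=0), best_inliers

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (30, 2))
dst = src + np.array([5.0, -3.0])                   # true shift
dst[:5] += rng.uniform(20, 40, (5, 2))              # five gross mismatches
t, inliers = ransac_translation(src, dst)
print(np.allclose(t, [5.0, -3.0]), int(inliers.sum()))  # True 25
```

The same sample-score-refit loop works for rigid or homography models; only the minimal sample size and the fitting step change.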

23
2. Method
  • 2.3 3D feature point extraction
  • Each pixel of the effective texture image,
    which is obtained by point cloud texture
    mapping, has a one-to-one correspondence with
    a point in the 3D point cloud [10], as shown
    in the figure.

Correspondence relationship
24
2. Method
  • The method is as follows:
  • (1) Extract SIFT feature points in the 2D
    texture image, then calculate the coordinates
    of each point.
  • (2) Because the 2D feature points and the 3D
    point cloud have a one-to-one correspondence,
    calculate the coordinates of the
    corresponding feature points in the 3D point
    cloud.
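Assuming an organized point cloud stored as an H x W x 3 array aligned pixel-for-pixel with the texture image (invalid hole pixels marked NaN), the 2D-to-3D lookup reduces to array indexing. The helper name and the NaN convention are illustrative assumptions:

```python
import numpy as np

def lift_features(cloud, pixels):
    """Map 2D feature pixels (row, col) to 3D points via the one-to-one
    pixel/point correspondence, dropping invalid (NaN) pixels."""
    pts = cloud[pixels[:, 0], pixels[:, 1]]
    valid = ~np.isnan(pts).any(axis=1)
    return pts[valid], valid

cloud = np.full((4, 4, 3), np.nan)        # mostly holes
cloud[1, 2] = [0.1, 0.2, 1.5]
cloud[3, 0] = [0.4, -0.1, 2.0]
pixels = np.array([[1, 2], [0, 0], [3, 0]])  # one feature falls on a hole
pts3d, valid = lift_features(cloud, pixels)
print(pts3d.shape, valid.tolist())  # (2, 3) [True, False, True]
```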

25
2. Method
  • 2.4 3D point cloud stitching
  • Multi-view 3D point cloud stitching transforms
    point clouds expressed in different
    coordinate systems into a common one; the
    main problem is to estimate the coordinate
    transformation, i.e. the rotation matrix R
    and the translation vector T.

26
2. Method
  • According to the matching point pairs which are
    obtained through the above step, we can estimate
    coordinate transformation relationship, that is
    to
  • estimate parameters R and T, which make the
    objective function get minimum

Where pi and qi are matching points pairs of 3D
point clouds in two consecutive viewpoint.
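The objective function itself was an image in the deck and is not transcribed; assuming the standard least-squares form, min over (R, T) of the sum of ||qi - (R pi + T)||^2, it has a closed-form SVD solution (the Kabsch algorithm). A sketch:

```python
import numpy as np

def kabsch(p, q):
    """Estimate R, T minimising sum_i ||q_i - (R p_i + T)||^2 via SVD
    of the cross-covariance of the centred point sets (Kabsch)."""
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = q.mean(0) - R @ p.mean(0)
    return R, T

rng = np.random.default_rng(0)
p = rng.standard_normal((50, 3))
# Ground-truth rotation about z by 40 degrees, plus a translation.
a = np.deg2rad(40)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
T_true = np.array([1.0, -2.0, 0.5])
q = p @ R_true.T + T_true
R, T = kabsch(p, q)
print(np.allclose(R, R_true), np.allclose(T, T_true))  # True True
```

Because the correspondences come from SIFT matching rather than nearest-neighbour search, this single closed-form solve replaces the iterative loop of ICP.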
27
3. Experiments
(a) SIFT features on effective texture image 1
(b) SIFT features on effective texture image 2
(c) SIFT features of 3D point cloud 1
(d) SIFT features of 3D point cloud 2
28
3. Experiments
Ref. [8]
Ref. [1]
Our method
Experimental results
29
3. Experiments
  • Performance evaluation criteria
  • Accuracy: registration rate and stitching
    error rate
  • Efficiency: time consumed

30
3. Experiments
33
4. Conclusion
  1. Using SIFT features of the effective texture
    image, we achieve registration and stitching
    of dense multi-view point clouds. We obtain
    the texture image of the effective point
    clouds through mutual mapping between the 3D
    point clouds and the 2D texture image; this
    eliminates the interference of holes and
    invalid point clouds.

34
4. Conclusion
  2. Ensuring that an effective 3D feature
    corresponding to every 2D feature can be
    found in the 3D point cloud eliminates
    unnecessary false matches, so both matching
    efficiency and matching precision are
    improved.

35
3. Our algorithm uses the correct matching point
pairs to stitch, so it avoids the stepwise
iteration of the ICP algorithm and decreases the
computational complexity of matching; it also
reduces the stitching error caused by false
matches.
37
  • Thanks