Introduction to Robot Vision - Ziv Yaniv (PowerPoint transcript)
1
Introduction to Robot Vision
Ziv YanivComputer Aided Interventions and
Medical Robotics, Georgetown University
2
Vision
  • The special sense by which the qualities of an
    object (as color, luminosity, shape, and size)
    constituting its appearance are perceived through
    a process in which light rays entering the eye
    are transformed by the retina into electrical
    signals that are transmitted to the brain via the
    optic nerve.
  • Merriam-Webster dictionary

3
The Sensor
endoscope
Single Lens Reflex (SLR) Camera
webcam
C-arm X-ray
4
The Sensor
Model: pin-hole camera, perspective projection
5
Machine Vision
Goal: Obtain useful information about the 3D
world from 2D images.
[Diagram: images -> model (regions, textures, corners, lines) -> 3D geometry, object identification, activity detection -> actions]
6
Machine Vision
Goal: Obtain useful information about the 3D
world from 2D images.
  • Low level (image processing):
  • image filtering (smoothing, histogram modification)
  • feature extraction (corner detection, edge detection, ...)
  • stereo vision
  • shape from X (shading, motion, ...)
  • High level (machine learning/pattern recognition):
  • object detection
  • object recognition
  • clustering
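As a concrete taste of the low-level end of this list, a box-filter smoothing pass can be sketched in a few lines of NumPy. This is a generic illustration, not code from the slides; the function name and kernel size are arbitrary:

```python
import numpy as np

def smooth(image, k=3):
    """Low-level image processing example: box-filter smoothing.
    Each output pixel is the mean of its k x k neighborhood
    (borders are zero-padded)."""
    pad = k // 2
    padded = np.pad(image, pad)              # zero-pad the borders
    h, w = image.shape
    out = np.zeros(image.shape, dtype=float)
    for dy in range(k):                      # accumulate the k*k shifted copies
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

img = np.ones((5, 5))
blurred = smooth(img)    # interior pixels stay 1.0, borders darken
```

In practice one would use a library routine (e.g. a Gaussian filter), but the structure is the same: a sliding-window average over each pixel's neighborhood.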

7
Machine Vision
  • How hard can it be?

8
Machine Vision
  • How hard can it be?

9
Robot Vision
  • Simultaneous Localization and Mapping (SLAM)
  • Visual Servoing.

10
Robot Vision
  • Simultaneous Localization and Mapping (SLAM):
    create a 3D map of the world and localize within
    this map.

NASA stereo vision image processing, as used by
the MER Mars rovers
11
Robot Vision
  • Simultaneous Localization and Mapping (SLAM):
    create a 3D map of the world and localize within
    this map.

Simultaneous Localization and Mapping with
Active Stereo Vision, J. Diebel, K. Reuterswärd,
S. Thrun, J. Davis, R. Gupta, IROS 2004.
12
Robot Vision
  • Visual Servoing: using visual feedback to
    control a robot.
  • Image-based systems: desired motion computed
    directly from the image.

An image-based visual servoing scheme for
following paths with nonholonomic mobile
robots A. Cherubini, F. Chaumette, G.
Oriolo, ICARCV 2008.
13
Robot Vision
  • Visual Servoing: using visual feedback to
    control a robot.
  • Position-based systems: desired motion computed
    from a 3D reconstruction estimated from the image.

14
System Configuration
  • The difficulty of similar tasks varies widely
    across settings:
  • How many cameras?
  • Are the cameras calibrated?
  • What is the camera-robot configuration?
  • Is the system calibrated (hand-eye calibration)?
  • Common configurations

15
System Characteristics
  • The greater the control over the system
    configuration and environment, the easier it is to
    execute a task.
  • System accuracy is directly dependent upon model
    accuracy; what accuracy does the task require?
  • All measurements and derived quantitative values
    have an associated error.

16
Stereo Reconstruction
  • Compute the 3D location of a point in the stereo
    rig's coordinate system:
  • The rigid transformation between the two cameras
    is known.
  • The cameras are calibrated: given a point in the
    world coordinate system we know how to map it to
    the image.
  • The same point is localized in the two images.

17
Commercial Stereo Vision
Polaris Vicra infra-red system (Northern Digital
Inc.)
MicronTracker visible light system (Claron
Technology Inc.)
18
Commercial Stereo Vision
Images acquired by the Polaris Vicra infra-red
stereo system
right image
left image
19
Stereo Reconstruction
  • Wide or short baseline: reconstruction accuracy
    vs. difficulty of point matching.

20
Camera Model
  • Points P, p, and O, given in the camera
    coordinate system, are collinear.

There is a number a for which p = aP. Since the z
coordinate of p is the focal length f, a = f/Z;
therefore x = fX/Z and y = fY/Z.
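The relation above can be checked numerically; a minimal sketch, with an illustrative function name and example numbers:

```python
import numpy as np

def project(P, f):
    """Pin-hole perspective projection: a camera-frame point P = (X, Y, Z)
    maps to p = a*P with a = f/Z, i.e. (x, y) = (f*X/Z, f*Y/Z)."""
    X, Y, Z = P
    a = f / Z                        # the collinearity scale factor
    return np.array([a * X, a * Y])

# A point 2 m in front of a 50 mm (0.05 m) lens
p = project(np.array([0.4, 0.2, 2.0]), f=0.05)   # -> (0.01, 0.005) m on the image plane
```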
21
Camera Model
  • Transform the pixel coordinates from the camera
    coordinate system to the image coordinate
    system
  • The image origin (principal point) is at (x0, y0)
    relative to the camera coordinate system.
  • Need to change from metric units to pixels,
    scaling factors kx, ky.
  • Finally, the image coordinate system may be
    skewed, adding a skew factor s.
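The steps above (metric-to-pixel scaling kx, ky, principal point (x0, y0), skew s) combine into a 3x3 intrinsic matrix, conventionally called K. A sketch with illustrative numbers (the slide does not give specific values):

```python
import numpy as np

def intrinsic_matrix(f, kx, ky, x0, y0, s=0.0):
    """Intrinsic matrix K: f is the focal length in metric units,
    kx, ky scale metric units to pixels, (x0, y0) is the principal
    point in pixels, and s is the skew factor."""
    return np.array([[f * kx,      s,  x0],
                     [0.0,    f * ky,  y0],
                     [0.0,       0.0, 1.0]])

K = intrinsic_matrix(f=0.05, kx=20000, ky=20000, x0=320, y0=240)
x = K @ np.array([0.4, 0.2, 2.0])   # homogeneous pixel coordinates
pixel = x[:2] / x[2]                # divide out the homogeneous scale (Z)
```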

22
Camera Model
  • As our original assumption was that points are
    given in the camera coordinate system, a complete
    projection matrix is of the form M = KR[I | -C],
    where C is the camera origin in the world
    coordinate system.
  • How many degrees of freedom does M have?
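Composing M and answering the question numerically (K, R, and C values are illustrative): M has 12 entries but is defined only up to scale, hence 11 degrees of freedom.

```python
import numpy as np

def projection_matrix(K, R, C):
    """Complete projection matrix M = K R [I | -C], mapping homogeneous
    world points to homogeneous image points. C is the camera origin
    in the world coordinate system."""
    return K @ R @ np.hstack([np.eye(3), -C.reshape(3, 1)])

K = np.array([[1000.0,    0.0, 320.0],
              [0.0,    1000.0, 240.0],
              [0.0,       0.0,   1.0]])
R = np.eye(3)                    # camera axes aligned with the world
C = np.array([0.0, 0.0, -2.0])   # camera 2 units behind the world origin
M = projection_matrix(K, R, C)   # 3x4: 12 entries, 11 DOF up to scale

x = M @ np.array([0.4, 0.2, 0.0, 1.0])   # homogeneous world point
pixel = x[:2] / x[2]
```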

23
Camera Calibration
  • Given pairs of points in homogeneous coordinates,
    pi^T = (x, y, w) and Pi^T = (X, Y, Z, W), we have
    pi ∝ MPi.
[Figure: calibration setup showing the calibration object/world
coordinate system, the camera coordinate system, the image
coordinate system, and the principal point.]
Our goal is to estimate M.
  • As the points are in homogeneous coordinates, the
    vectors p and MP are not necessarily equal: they
    have the same direction but may differ by a
    non-zero scale factor.

24
Camera Calibration
  • After a bit of algebra we obtain a homogeneous
    linear system Am = 0 in the entries m of M.
  • The three equations per pair are linearly
    dependent; each point pair contributes two
    independent equations.
  • Exact solution: M has 11 degrees of freedom,
    requiring a minimum of n = 6 pairs.
  • Least-squares solution: for n > 6, minimize ||Am||
    s.t. ||m|| = 1.
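The constrained least-squares problem is solved by the right singular vector of A with the smallest singular value. A sketch of the resulting Direct Linear Transform estimator, assuming image points are given dehomogenized as (x, y); the synthetic check data are illustrative:

```python
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """Direct Linear Transform: each pair (X,Y,Z) <-> (x,y) contributes
    two rows of A; the m minimizing ||Am|| with ||m|| = 1 is the last
    right singular vector of A. Returns M (up to scale)."""
    A = []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        P = [X, Y, Z, 1.0]
        A.append(P + [0.0] * 4 + [-x * v for v in P])   # x-equation
        A.append([0.0] * 4 + P + [-y * v for v in P])   # y-equation
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Synthetic check: project known points with a ground-truth M, recover it
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
M_true = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])

def proj(M, P):
    x = M @ np.array([P[0], P[1], P[2], 1.0])
    return x[:2] / x[2]

world = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
         (1, 1, 0), (1, 0, 1), (0, 1, 1), (2, 1, 3)]
image = [proj(M_true, P) for P in world]
M_est = calibrate_dlt(world, image)
M_est *= M_true[2, 3] / M_est[2, 3]   # fix the free scale factor
```

With exact data and non-degenerate (here non-coplanar) points the recovered M matches the ground truth up to that scale factor; with noisy points it is the least-squares estimate.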

25
Obtaining the Rays
  • Camera location in the calibration objects
    coordinate system, C, is given by the one
    dimensional right null space of the matrix M
    (MC0).
  • A 3D homogenous point P Mp is on the ray
    defined by p and the camera center it projects
    onto p, MMp Ipp.
  • These two points define our ray in the world
    coordinate system.
  • As both cameras were calibrated with respect to
    the same coordinate system the rays will be in
    the same system too.
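Both quantities fall out of standard linear algebra routines; a sketch using the SVD for the null space and the Moore-Penrose pseudo-inverse for back-projection (the test matrix below is illustrative):

```python
import numpy as np

def camera_center(M):
    """Camera origin C: the right null space of the 3x4 matrix M
    (MC = 0), taken as the last right singular vector, dehomogenized."""
    _, _, Vt = np.linalg.svd(M)
    C = Vt[-1]
    return C / C[3]

def back_project(M, p):
    """A second homogeneous point on the ray of pixel p: P = M^+ p.
    M has full row rank, so M M^+ = I and P projects back onto p."""
    return np.linalg.pinv(M) @ np.array([p[0], p[1], 1.0])

# Illustrative camera: K[I | t] with t = (0, 0, 5), so C = (0, 0, -5)
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
M = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
C = camera_center(M)
P = back_project(M, (520.0, 340.0))
x = M @ P   # projects back onto the pixel (520, 340)
```

C and P together define the ray of each pixel in the world coordinate system, ready for the intersection step.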

26
Intersecting the Rays
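With noisy measurements the two back-projected rays rarely meet exactly, so a common choice (a generic midpoint method, not necessarily the slide's exact derivation) is the midpoint of the shortest segment between them:

```python
import numpy as np

def intersect_rays(o1, d1, o2, d2):
    """Midpoint triangulation: find s, t minimizing
    ||(o1 + s*d1) - (o2 + t*d2)||^2 and return the midpoint of the
    connecting segment. The system is singular only for parallel rays."""
    b = o2 - o1
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    s, t = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Two rays constructed to pass through a known point recover it exactly
P_true = np.array([1.0, 2.0, 3.0])
o1 = np.zeros(3)
o2 = np.array([4.0, 0.0, 0.0])
X = intersect_rays(o1, P_true - o1, o2, P_true - o2)
```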
27
World vs. Model
  • Actual cameras most often don't follow the ideal
    pin-hole model; they usually exhibit some form of
    distortion (barrel, pin-cushion, S).
  • Sometimes the world changes to fit your model:
    improvements in camera/lens quality can improve
    model performance.

Old image-intensifier X-ray (pin-hole + distortion)
replaced by flat-panel X-ray (pin-hole).
28
Additional Material
  • Code
  • Camera calibration toolbox for Matlab (Jean-Yves
    Bouguet): http://www.vision.caltech.edu/bouguetj/calib_doc/
  • Machine Vision
  • Multiple View Geometry in Computer Vision,
    Hartley and Zisserman, Cambridge University
    Press.
  • "Machine Vision", Jain, Kasturi, Schunck,
    McGraw-Hill.
  • Robot Vision
  • "Simultaneous Localization and Mapping: Part I",
    H. Durrant-Whyte, T. Bailey, IEEE Robotics and
    Automation Magazine, Vol. 13(2), pp. 99-110, 2006.
  • "Simultaneous Localization and Mapping (SLAM):
    Part II", T. Bailey, H. Durrant-Whyte, IEEE
    Robotics and Automation Magazine, Vol. 13(3), pp.
    108-117, 2006.
  • "Visual Servo Control, Part I: Basic Approaches",
    F. Chaumette, S. Hutchinson, IEEE Robotics and
    Automation Magazine, Vol. 13(4), pp. 82-90, 2006.
  • "Visual Servo Control, Part II: Advanced
    Approaches", F. Chaumette, S. Hutchinson, IEEE
    Robotics and Automation Magazine, Vol. 14(1),
    pp. 109-118, 2007.