Title: GRADIENT-BASED DEPTH ESTIMATION FROM 4D LIGHT FIELDS
Don Dansereau, Len Bruton
Department of Electrical and Computer Engineering, University of Calgary, Alberta, Canada
- The Point-Plane Correspondence
- By extension of the plane-line observation, an infinitesimally small element of a Lambertian surface exists as a plane of constant intensity in a light field.
- The orientation of the plane depends only on the depth of the surface element in the scene; the point-plane relation is sketched below.
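A minimal sketch of this correspondence, assuming a two-plane parameterization with reference planes at z = 0 and z = D; the symbols D, P_x, P_y, P_z are introduced here for illustration and are not taken from this text:

% A ray is indexed by its intersection (s,t) with a plane at z = 0 and
% its intersection (u,v) with a parallel plane at z = D.  A ray passing
% through the point P = (P_x, P_y, P_z) then satisfies
\begin{align}
  u &= s\left(1 - \tfrac{D}{P_z}\right) + \tfrac{D\,P_x}{P_z}, &
  v &= t\left(1 - \tfrac{D}{P_z}\right) + \tfrac{D\,P_y}{P_z},
\end{align}
% so the region of support is a plane with the same slope in (s,u) as in (t,v),
\begin{equation}
  \frac{\partial u}{\partial s} = \frac{\partial v}{\partial t} = 1 - \frac{D}{P_z},
\end{equation}
% which depends only on the depth P_z of the surface element.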
- Gradient-Based Depth Estimation
- By estimating plane orientations, we can estimate the depths of the corresponding scene elements.
- A generalized 4D plane has a complicated orientation, though in the light field it has the same slope in s,u as in t,v.
- The 2D gradient operator applied in s,u and in t,v in all three color channels yields six orientation estimates.
- Redundancy is consolidated by taking a weighted sum, with confidence taken as the magnitude of the gradient vector; a code sketch follows this list.
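A minimal numpy sketch of the estimator described above, assuming the light field is stored as a 5D array L[s, t, u, v, c] with three color channels; the function name, the plane separation D, and the use of numpy.gradient as the 2D gradient operator are illustrative assumptions, not the authors' implementation:

import numpy as np

def estimate_depth(L, D=1.0):
    """Slope and depth estimates from a light field L[s, t, u, v, c].

    A line of constant intensity with slope m in the (s,u) plane has a
    gradient (g_s, g_u) perpendicular to it, so m = -g_s / g_u; the same
    holds in (t,v).  Two planes times three color channels give six
    estimates per sample, consolidated by a weighted sum whose weights
    (confidences) are the gradient magnitudes.
    """
    eps = 1e-9
    slope_sum = np.zeros(L.shape[:4])
    weight_sum = np.zeros(L.shape[:4])

    for c in range(L.shape[4]):                   # three color channels
        g_s, g_t, g_u, g_v = np.gradient(L[..., c])

        w_su = np.hypot(g_s, g_u)                 # confidence in (s,u)
        m_su = -g_s / (g_u + eps)                 # slope du/ds
        w_tv = np.hypot(g_t, g_v)                 # confidence in (t,v)
        m_tv = -g_t / (g_v + eps)                 # slope dv/dt

        slope_sum += w_su * m_su + w_tv * m_tv
        weight_sum += w_su + w_tv

    slope = slope_sum / (weight_sum + eps)        # weighted-sum consolidation
    depth = D / (1.0 - slope + eps)               # invert slope = 1 - D / Pz
    confidence = weight_sum / (2 * L.shape[4])    # average gradient magnitude
    return depth, confidence

The returned average gradient magnitude is the confidence measure that the dense-estimate step below can threshold.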
- Applications
- Depth estimation is one of the most fundamental problems in computer vision. Knowing depth, one might perform tasks as varied as robot navigation, object or face recognition, and scene modelling.
- Summary
- The link between scene shape and plane orientations in 4D light field models is demonstrated.
- 2D gradient operators are used to estimate plane orientation and thus scene shape. Redundancy is consolidated using a weighted sum based on the confidence of each estimate.
- Because of their simplicity, these techniques are robust and fast, independent of scene complexity.
- The Light Field
- A light field models the light rays permeating a scene in four dimensions (two for position, two for direction).
- They first came about in the context of image-based rendering.
- Light fields contain a wealth of information, and can therefore form the basis for robust processing and analysis tasks. A sketch of the underlying data structure follows this list.
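A brief sketch of the two-plane-parameterized data structure; the array sizes are placeholders, not values from this text:

import numpy as np

# (s, t) index one reference plane, (u, v) the second, plus a color axis.
N_S, N_T, N_U, N_V = 16, 16, 128, 128
L = np.zeros((N_S, N_T, N_U, N_V, 3), dtype=np.float32)

# Fixing t and v leaves a 2D (s,u) slice; a Lambertian scene point
# appears in it as a line of constant intensity whose slope encodes depth.
su_slice = L[:, N_T // 2, :, N_V // 2, :]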
- Obtaining a Dense Estimate
- Average gradient vector length provides a measure of confidence in the estimates.
- By ignoring estimates of low confidence using a threshold, better results are obtained.
- Missing estimates may be filled using region growing, and the results improved via a simple 4D moving average filter; a post-processing sketch follows this list.
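A sketch of this post-processing, assuming the depth and confidence arrays from the earlier estimator; a nearest-valid-neighbour fill stands in here for region growing, and scipy.ndimage.uniform_filter plays the role of the 4D moving average filter:

from scipy import ndimage

def densify(depth, confidence, threshold, filter_size=3):
    """Threshold low-confidence estimates over (s,t,u,v), fill, and smooth."""
    valid = confidence >= threshold

    # Fill missing estimates from the nearest confident sample (a simple
    # stand-in for the region growing mentioned above).
    _, idx = ndimage.distance_transform_edt(~valid, return_indices=True)
    filled = depth[tuple(idx)]

    # Simple 4D moving average (box) filter over the dense result.
    return ndimage.uniform_filter(filled, size=filter_size)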
- Results
- Gradient-based depth estimation takes 35 ms to form a single frame on a P4 1.4 GHz.
- Speed and performance are independent of scene complexity.
Figure captions:
- The point P has a linear ROS in s and u
- Real-world scene, thresholded result, and result with region growing and 4D lowpass filtering
- Parameterizing light rays using two planes yields a 4D data structure
- Rendered view of a scene modeled using a light field
- 2D gradients estimate plane orientation
- Rendered scene and thresholded result