1
EECS 274 Computer Vision
  • Sources, Shadows, and Shading

2
Surface brightness
  • Depends on local surface properties (albedo),
    surface shape (normal), and illumination
  • Shading model: a model of how the brightness of a
    surface is obtained
  • Pixel values can be interpreted to reconstruct the
    surface's shape and albedo
  • Reading: FP Chapter 5, H Chapter 11

3
Radiometric properties of sources
  • How bright (or what color) are objects?
  • One more definition: the exitance of a light source
    is the internally generated power (not reflected)
    radiated per unit area on the radiating surface
  • Similar to radiosity; a source can have both
  • radiosity, because it reflects
  • exitance, because it emits
  • Independent of its exit angle
  • Internally generated energy radiated per unit
    time, per unit area
  • But what aspects of the incoming radiance will we
    model?
  • Point, line, area source
  • Simple geometry

4
Radiosity due to a point source
  • A small, distant sphere of radius ε and exitance E
    subtends solid angle

Typo in figure: d → r
5
Radiosity due to a point source
  • As r is increased, the rays leaving the surface
    patch and striking the sphere move closer together
    (roughly evenly), and the collection changes only
    slightly; i.e., the diffuse reflectance, or albedo
  • Radiosity due to the source (see below)
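The equations for this slide are not preserved in the transcript; a
standard reconstruction (following the FP Chapter 5 treatment, with ε the
source radius, E its exitance, r the distance from the patch to the
source, and θ the angle between the surface normal and the direction to
the source) is roughly

    \Omega \approx \frac{\pi \varepsilon^2}{r^2},
    \qquad
    B(P) \approx \rho_d(P)\, E\, \frac{\varepsilon^2}{r^2}\, \cos\theta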

6
Nearby point source model
  • The angle term can be written in terms of N and S
  • N is the surface normal
  • ρd is the diffuse albedo
  • S is the source vector: a vector from P to the
    source, whose length is the intensity term ε²E
  • this works because a dot product is basically a
    cosine (see the sketch below)
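A minimal numerical sketch of this model, assuming hypothetical inputs
(p and s_pos are 3-vectors for the surface point and source position, n a
unit normal, rho_d the diffuse albedo, and intensity standing in for the
ε²E term):

    import numpy as np

    def nearby_point_source_radiosity(p, n, s_pos, rho_d, intensity):
        # S points from P toward the source; the 1/r^2 falloff is explicit here
        to_source = s_pos - p
        r = np.linalg.norm(to_source)
        s_hat = to_source / r
        # the angle term is a dot product (a cosine), clamped for points facing away
        cos_theta = max(float(np.dot(n, s_hat)), 0.0)
        return rho_d * intensity * cos_theta / r**2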

7
Point source at infinity
  • Issue: a nearby point source gets bigger if one
    gets closer
  • the sun doesn't, for any reasonable binding of
    "closer"
  • Assume that all points in the model are close to
    each other with respect to the distance to the
    source
  • Then the source vector doesn't vary much, the
    distance doesn't vary much either, and we can
    roll the constants together to get

8
Line sources
Radiosity due to a line source varies with inverse
distance, if the source is long enough
9
Area sources
  • Examples: diffuser boxes, white walls.
  • The radiosity at a point due to an area source is
    obtained by adding up the contribution over the
    section of view hemisphere subtended by the
    source
  • change variables and add up over the source

10
Radiosity due to an area source
  • ρd is the albedo
  • E is the exitance of the source
  • r is the distance between points Q and P
  • Q is a coordinate on the source (the integral is
    sketched below)
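The integral itself is missing from the transcript; a standard form
consistent with these definitions (θP and θQ are the angles that the line
joining P and Q makes with the normals at P and at Q) is roughly

    B(P) = \rho_d(P) \int_{\text{Source}} E\,
           \frac{\cos\theta_P \cos\theta_Q}{\pi\, r(P,Q)^2}\; dA_Q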

11
Shading models
  • Local shading model
  • Surface has radiosity due only to sources visible
    at each point
  • Advantages
  • often easy to manipulate; expressions are simple
  • supports quite simple theories of how shape
    information can be extracted from shading
  • Global shading model
  • Surface radiosity is due to radiance reflected
    from other surfaces as well as radiance received
    directly from sources
  • Advantages
  • usually very accurate
  • Disadvantage
  • extremely difficult to infer anything from
    shading values

12
Local shading models
  • For point sources at infinity
  • For point sources not at infinity
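The two equations referred to here are dropped in the transcript; the
usual forms, summing over the sources visible from x (a hedged
reconstruction, with S_i the source vector for source i), are roughly

    B(x) = \rho_d(x) \sum_{i\ \text{visible from}\ x} N(x) \cdot S_i
           \quad\text{(point sources at infinity)}

    B(x) = \rho_d(x) \sum_{i\ \text{visible from}\ x}
           \frac{N(x) \cdot S_i(x)}{r_i(x)^2}
           \quad\text{(point sources not at infinity)}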

13
Shadows cast by a point source
  • A point that can't see the source is in shadow
    (self-cast shadow)
  • For point sources, the geometry is simple

14
Area source shadows
  • Area sources do not produce dark
    shadows with crisp boundaries
  • Out of shadow
  • Penumbra (almost shadow)
  • Umbra (shadow)

15
Photometric stereo
  • Assume
  • A local shading model
  • A set of point sources that are infinitely
    distant
  • A set of pictures of an object, obtained in
    exactly the same camera/object configuration but
    using different sources
  • A Lambertian object (or the specular component
    has been identified and removed)

16
Monge patch
Projection model for surface recovery: the Monge
patch
In computer vision, it is often known as a height
map, depth map, or dense depth map
17
Image model
  • For each point source, we know the source vector
    (by assumption)
  • We assume we know the scaling constant of the
    linear camera (i.e., intensity value is linear in
    the surface radiosity)
  • Fold the normal and the reflectance into one
    vector g, and the scaling constant and source
    vector into another vector Vj
  • Out of shadow
  • g(x,y) describes the surface
  • Vj is a property of the illumination and of the
    camera
  • In shadow (both cases are sketched below)
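The equations for the two cases are missing from the transcript; in the
usual photometric-stereo formulation (with k the camera scaling constant)
they are roughly

    I_j(x,y) = g(x,y) \cdot V_j     (out of shadow)
    I_j(x,y) = 0                    (in shadow)
    where g(x,y) = \rho(x,y)\,N(x,y) and V_j = k\,S_j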

18
From many views
  • From n sources, for each of which Vi is known
  • For each image point, stack the measurements
  • Solve the least-squares problem to obtain g (see
    the sketch below)

One linear system per point
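A minimal sketch of this per-pixel least-squares step, assuming a
hypothetical image stack I of shape (n, H, W), one image per source, and
an n×3 matrix V whose rows are the known scaled source vectors:

    import numpy as np

    def recover_g(I, V):
        # stack the n measurements for every pixel and solve V @ g = i
        # in the least-squares sense, one 3-vector g per pixel
        n, H, W = I.shape
        measurements = I.reshape(n, -1)                       # (n, H*W)
        g, *_ = np.linalg.lstsq(V, measurements, rcond=None)  # (3, H*W)
        return g.T.reshape(H, W, 3)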
19
Dealing with shadows
(Figure: the shadow-handling linear system, with the image measurements
and source vectors labeled known, and g(x, y) labeled unknown)
20
Recovering normal and reflectance
  • Given sufficient sources, we can solve the
    previous equation (e.g., least squares solution)
    for g(x, y)
  • Recall that g(x, y) = ρ(x, y) N(x, y), and N(x, y)
    is the unit normal
  • This means that ρ(x, y) = |g(x, y)|
  • This yields a check
  • If the magnitude of g(x, y) is greater than 1,
    there's a problem
  • And N(x, y) = g(x, y) / ρ(x, y) (see the sketch
    below)
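Continuing the hypothetical sketch above, albedo and normals follow
directly from the recovered g:

    import numpy as np

    def albedo_and_normals(g, eps=1e-8):
        rho = np.linalg.norm(g, axis=-1)    # rho(x, y) = |g(x, y)|
        N = g / (rho[..., None] + eps)      # N(x, y) = g(x, y) / rho(x, y)
        if np.any(rho > 1.0):
            print("warning: albedo > 1 at some pixels; the model is violated there")
        return rho, N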

21
Five synthetic images
22
Recovered reflectance
|g(x, y)| = ρ(x, y); the value should be between 0
and 1
23
Recovered normal field
24
Recovering a surface from normals
  • Recall the surface is written as
  • Parametric surface
  • This means the normal has the form
  • If we write the known vector g as
  • Then we obtain values for the partial derivatives
    of the surface
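The equations on this slide are not preserved; for a Monge patch
(x, y, f(x, y)) the standard forms are roughly (signs depend on the
orientation chosen for the normal):

    N(x,y) = \frac{1}{\sqrt{1 + f_x^2 + f_y^2}}\,(-f_x, -f_y, 1)^T

    g(x,y) = \rho(x,y)\,N(x,y) = (g_1, g_2, g_3)^T
    \;\Rightarrow\;
    f_x = -\,g_1/g_3, \quad f_y = -\,g_2/g_3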

25
Recovering a surface from normals (contd)
  • Recall that mixed second partials are equal ---
    this gives us a check. We must have
  • (or they should be similar, at least)
  • We can now recover the surface height at any
    point by integration along some path, e.g.
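A minimal sketch of this path integration, assuming hypothetical
partial-derivative images fx, fy of shape (H, W); it integrates along the
top row and then down each column, and applies the mixed-partials check
from above:

    import numpy as np

    def integrate_height(fx, fy, tol=1e-2):
        # integrability check: mixed second partials should (roughly) agree
        fxy = np.gradient(fx, axis=0)   # d(fx)/dy
        fyx = np.gradient(fy, axis=1)   # d(fy)/dx
        if np.abs(fxy - fyx).mean() > tol:
            print("warning: normal field is far from integrable")
        top = np.cumsum(fx[0, :])                   # heights along the first row
        down = np.cumsum(fy, axis=0) - fy[0:1, :]   # accumulated change down columns
        return top[None, :] + down                  # (H, W) height map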

26
Recovered surface by integration
27
The Illumination Cone
What is the set of n-pixel images of an object
under all possible lighting conditions (at fixed
pose)? (Belhumeur and Kriegman, IJCV '99)
Single light source image
N-dimensional Image Space
28
The Illumination Cone
What is the set of n-pixel images of an object
under all possible lighting conditions (but fixed
pose)?
Proposition: Due to the superposition of images,
the set of images is a convex polyhedral cone in
the image space.
Illumination Cone
2-light source image
Single light source images: extreme rays of the cone
29
Generating the Illumination Cone
  • For Lambertian surfaces, the illumination cone is
    determined by the 3D linear subspace B(x,y),
    where
  • When there are no shadows, then
  • Use least squares to find the 3D linear subspace,
    subject to the integrability constraint fxy = fyx
    (Georghiades, Belhumeur, and Kriegman, PAMI, June
    2001)

Figure labels: 3D linear subspace; a(x,y), fx(x,y),
fy(x,y) (albedo and surface normals); surface f(x,y)
(albedo texture-mapped on the surface)
Original (Training) Images
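A simplified sketch of the subspace estimation, assuming a hypothetical
matrix X of shape (n_pixels, n_images) whose columns are shadow-free
training images; this rank-3 SVD fit ignores the integrability constraint
mentioned above:

    import numpy as np

    def estimate_subspace(X):
        # best rank-3 approximation: without shadows, each column of X is
        # approximately B @ s for some 3-vector light s
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        B = U[:, :3] * S[:3]      # (n_pixels, 3) basis of the linear subspace
        lights = Vt[:3]           # (3, n_images) per-image light coefficients
        return B, lights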
30
Image-based rendering: cast shadows
Single Light Source
Face Movie
31
Yale face database B
  • 10 Individuals
  • 64 Lighting Conditions
  • 9 Poses
  • > 5,760 Images

Variable lighting
32
Curious experimental fact
  • Prepare two rooms, one with white walls and white
    objects, one with black walls and black objects
  • Illuminate the black room with bright light, the
    white room with dim light
  • People can tell which is which (due to Gilchrist)
  • Why? (a local shading model predicts they can't).

33
Global shading models
Can adjust so that a local shading model predicts
these pictures will be indistinguishable
A view of a white room with white objects. We see
a cross-section of the image intensity
corresponding to the line drawn on the image.
A view of a black room with black objects. We see
a cross-section of the image intensity
corresponding to the line drawn on the image.
34
What's going on here?
  • Local shading model is a poor description of
    physical processes that give rise to images
  • because surfaces reflect light onto one another
  • This is a major nuisance: the distribution of
    light (in principle) depends on the configuration
    of every radiator; big distant ones are as
    important as small nearby ones (solid angle)
  • The effects are easy to model
  • It appears to be hard to extract information from
    these models

35
Interreflections - a global shading model
  • Other surfaces are now area sources; this
    yields the equation sketched below
  • Vis(x, u) is 1 if the two points can see each
    other, 0 if they can't
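The interreflection equation itself is dropped from the transcript; the
standard radiosity form (with E(x) the exitance or direct term at x, and
θ_x, θ_u the angles the line joining x and u makes with the two surface
normals) is roughly

    B(x) = E(x) + \rho_d(x) \int_{\text{all surfaces}}
           B(u)\, \frac{\cos\theta_x \cos\theta_u}{\pi\, d(x,u)^2}\,
           \mathrm{Vis}(x,u)\; dA_u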

36
What do we do about this?
  • Attempt to build approximations
  • Ambient illumination
  • Study qualitative effects
  • reflexes
  • decreased dynamic range
  • smoothing
  • Try to use other information to control errors

37
Ambient illumination
  • Two forms
  • Add a constant to the radiosity at every point in
    the scene to account for shadows being brighter
    than the point source model predicts
  • Advantages: simple, easily managed (e.g., how
    would you change photometric stereo?)
  • Disadvantages: poor approximation (compare the
    black and white rooms)
  • Add a term at each point that depends on the size
    of the clear viewing hemisphere at that point
  • Advantages: appears to be quite a good
    approximation, but the jury is out
  • Disadvantages: difficult to work with