Title: Shadows
1. Shadows
- Dinesh Manocha
- Computer Graphics
- COMP-770 lecture
- Spring 2009
2. What are Shadows?
Shad-ow (noun): partial darkness or obscurity within a part of space from which rays from a source of light are cut off by an interposed opaque body
Is this definition sufficient?
3. What are Shadows?
- Does the occluder have to be opaque to have a shadow?
  - transparency (no scattering)
  - translucency (scattering)
- What about indirect light?
  - reflection
  - atmospheric scattering
  - wave properties (diffraction)
- What about volumetric or atmospheric shadowing?
  - changes in density
Is this still a shadow?
4. What are Shadows, Really?
Volumes of space that receive no light, or light that has been attenuated through obscuration
- Is this definition sufficient?
- In practice, too general!
- We need some restrictions
5. Common Shadow Algorithm Restrictions
- No transparency or translucency!
  - Limited forms can sometimes be handled efficiently
  - Backwards ray-tracing has no trouble with these effects, but it is much more expensive than typical shadow algorithms
- No indirect light!
  - More sophisticated global illumination algorithms handle this at great expense (radiosity, backwards ray-tracing)
- No atmospheric effects (vacuum)!
  - No indirect scattering
  - No shadowing from density changes
- No wave properties (geometric optics)!
6. What Do We Call Shadows?
- Regions not completely visible from a light source
- Assumptions
  - Single light source
  - Finite area light sources
  - Opaque objects
- Two parts
  - Umbra: totally blocked from light
  - Penumbra: partially obscured
[Diagram: an area light source casting a shadow divided into umbra and penumbra regions]
7. Basic Types of Light Shadows
From more realistic to simpler:
- area, direct + indirect
- area, direct only (SOFT SHADOWS)
- point, direct only (HARD or SHARP SHADOWS)
- directional, direct only (HARD or SHARP SHADOWS)
Point lights are more realistic for small-scale scenes; directional lights are realistic for scenes lit by sunlight, in space!
8. Goal of Shadow Algorithms
Ideally, for all surfaces, find the fraction of light that is received from a particular light source
- Shadow computation can be considered a global illumination problem
  - this includes ray-tracing and radiosity!
- Most common shadow algorithms are restricted to direct light and point or directional light sources
- Area light sources are usually approximated by many point lights or by filtering techniques
9. Global Shadow Component in Local Illumination Model
Without shadows: I = Ambient + Sum_i (Diffuse_i + Specular_i)
With shadows: I = Ambient + Sum_i Shadow_i * (Diffuse_i + Specular_i)
- Shadow_i is the fraction of light received at the surface
  - For point lights, 0 (shadowed) or 1 (lit)
  - For area lights, a value in [0,1]
- The ambient term approximates indirect light
10. What Else Does This Say?
- Multiple lights are not really difficult (conceptually)
- Complex multi-light effects are many single-light problems summed together!
  - Superposition property of the illumination model
- This works for shadows as well!
- Focus on single-source shadow computation
  - Generalization is simple, but efficiency may be improved
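The superposition property above can be sketched in a few lines: the shaded value is the ambient term plus a sum of independent per-light contributions, each scaled by its own shadow fraction. All names and numbers here are illustrative, not from the slides.

```python
# Minimal sketch of the superposition property of the illumination model:
# ambient plus a sum of independent single-light contributions, each
# weighted by its shadow fraction Shadow_i in [0, 1].

def shade(ambient, lights):
    """lights: list of (shadow_i, diffuse_i, specular_i) tuples."""
    return ambient + sum(s * (d + sp) for s, d, sp in lights)

# Two lights: one fully visible, one half in penumbra.
color = shade(0.1, [(1.0, 0.4, 0.2), (0.5, 0.6, 0.0)])
print(color)  # 0.1 + 1.0*0.6 + 0.5*0.6, i.e. about 1.0
```

Each light can therefore be solved as its own single-light shadow problem and the results summed.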
11. Characteristics of Shadow Algorithms
- Light-source types
- Directional
- Point
- Area
- Light transfer types
- Direct vs. indirect
- Opaque only
- Transparency / translucency
- Atmospheric effects
- Geometry types
- Polygons
- Higher-order surfaces
12. Characteristics of Shadow Algorithms
- Computational precision (like visibility algorithms)
  - Object precision (geometry-based, continuous)
  - Image precision (image-based, discrete)
- Computational complexity
  - Running time
  - Speedups from static viewer, lights, scene
  - Amount of user intervention (object sorting)
  - Numerical degeneracies
13. Characteristics of Shadow Algorithms
- When shadows are computed
  - During rendering of the fully-lit scene (additive)
  - After rendering of the fully-lit scene (subtractive): not correct, but fast and often good enough
- Types of shadow/object interaction
  - Between shadow-casting object and receiving object
  - Object self-shadowing
  - General shadow casting
14. Taxonomy of Shadow Algorithms
- Object-based
  - Local illumination model (Warnock69, Gouraud71, Phong75)
  - Area subdivision (Nishita74, Atherton78)
  - Planar projection (Blinn88)
  - Radiosity (Goral84, Cohen85, Nishita85)
  - Lloyd (2004)
- Image-based
  - Shadow-maps (Williams78, Hourcade85, Reeves87, Stamminger/Drettakis02, Lloyd07)
  - Projective textures (Segal92)
- Hybrid
  - Scan-line approach (Appel68, Bouknight70)
  - Ray-tracing (Appel68, Goldstein71, Whitted80, Cook84)
  - Backwards ray-tracing (Arvo86)
  - Shadow-volumes (Crow77, Bergeron86, Chin89)
15. Good Surveys of Shadow Algorithms
- Early complete surveys found in (Crow77, Woo90)
- Recent survey on hard shadows: Lloyd 2007 (Ph.D. thesis)
- Recent survey on soft shadows: Laine 2007 (Ph.D. thesis)
16. Survey of Shadow Algorithms
- Focus is on the following algorithms
- Local illumination
- Ray-tracing
- Planar projection
- Shadow volumes
- Projective textures
- Shadow-maps
- Will briefly mention
- Scan-line approach
- Area subdivision
- Backwards ray-tracing
- Radiosity
17. Local Illumination Shadows
- Backfacing polygons are in shadow (only lit by ambient)
- Point/directional light sources only
- Partial self-shadowing
  - like backface culling is a partial visibility solution
- Very fast (often implemented in hardware)
- General surface types in almost any rendering system!
18. Local Illumination Shadows
- Typically not considered a shadow algorithm
- Just handles shadows of the most restrictive form
- Dramatically improves the look of other restricted algorithms
19. Local Illumination Shadows
- Properties
  - Point or directional light sources
  - Direct light
  - Opaque objects
  - All types of geometry (depends on the rendering system)
  - Object precision
  - Fast, local computation (single pass)
  - Only handles limited self-shadowing
    - convenient, since many algorithms do not handle any self-shadowing
  - Computed during the normal rendering pass
  - Simplest algorithm to implement
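The backfacing test behind local-illumination shadowing reduces to the sign of N . L: a point whose normal faces away from the light gets only the ambient term. A minimal sketch, with vectors as plain tuples and illustrative names:

```python
# Local-illumination "shadowing": a surface point is treated as shadowed
# (ambient only) when its normal faces away from the light, i.e. N . L <= 0.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def local_shadow_factor(normal, to_light):
    """Return 1 if the point can face the light, 0 if backfacing."""
    return 1 if dot(normal, to_light) > 0 else 0

print(local_shadow_factor((0, 1, 0), (0, 1, 0)))   # front-facing: 1
print(local_shadow_factor((0, -1, 0), (0, 1, 0)))  # backfacing: 0
```

Note this only catches self-shadowing of backfaces; it says nothing about occlusion by other objects.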
20. Ray-tracing Shadows
- Only interested in shadow-ray tracing (shadow feelers)
- For a point P in space, determine if it is in shadow with respect to a single point light source L by intersecting the line segment PL (the shadow feeler) with the environment
- If the line segment intersects an object, then P is in shadow; otherwise, point P is illuminated by light source L
[Diagram: shadow feeler (edge PL) from point P to light L]
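The shadow-feeler test can be sketched with a segment-vs-sphere intersection; the sphere occluders and epsilon are illustrative assumptions, not from the slides:

```python
# Shadow-feeler sketch: P is in shadow of light L if the open segment PL
# hits any occluder. Occluders here are spheres (center, radius).
import math

def segment_hits_sphere(p, l, center, radius):
    """True if segment p->l intersects the sphere (center, radius)."""
    d = [l[i] - p[i] for i in range(3)]        # segment direction
    m = [p[i] - center[i] for i in range(3)]   # offset from sphere center
    a = sum(x * x for x in d)
    b = 2 * sum(m[i] * d[i] for i in range(3))
    c = sum(x * x for x in m) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t1 = (-b - math.sqrt(disc)) / (2 * a)
    t2 = (-b + math.sqrt(disc)) / (2 * a)
    # Hit counts only if it lies strictly between P (t=0) and L (t=1).
    return any(1e-6 < t < 1 - 1e-6 for t in (t1, t2))

def in_shadow(p, l, spheres):
    return any(segment_hits_sphere(p, l, c, r) for c, r in spheres)

occluders = [((0, 2, 0), 0.5)]
print(in_shadow((0, 0, 0), (0, 4, 0), occluders))  # True: sphere blocks PL
print(in_shadow((3, 0, 0), (3, 4, 0), occluders))  # False: feeler misses
```

The strict interval (0, 1) mirrors the later point about not counting intersections at the endpoints themselves.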
21. Ray-tracing Shadows
- Arguably the simplest general algorithm
- Can even handle area light sources
  - point-sample the area source: distributed ray-tracing (Cook84)
[Diagram: with a point light, a blocked feeler gives Shadow_i = 0; sampling an area light Li with 5 feelers, of which 2 reach the light, gives Shadow_i = 2/5]
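The Shadow_i = 2/5 case can be reproduced with a tiny point-sampling sketch in the Cook84 spirit; the 2D slab occluder and sample positions are illustrative assumptions chosen so that 2 of 5 feelers reach the light:

```python
# Point-sampling an area light: Shadow_i is the fraction of shadow feelers
# that reach the light unoccluded. The occluder is a slab |x| <= 0.5 on the
# line y = 2 (2D, illustrative geometry).

def feeler_blocked(p, l):
    """Does segment p->l cross the slab y = 2, |x| <= 0.5?"""
    if (p[1] - 2) * (l[1] - 2) >= 0:       # both endpoints on the same side
        return False
    t = (2 - p[1]) / (l[1] - p[1])         # parameter where segment hits y = 2
    x = p[0] + t * (l[0] - p[0])
    return -0.5 <= x <= 0.5

def shadow_fraction(p, samples):
    lit = sum(0 if feeler_blocked(p, l) else 1 for l in samples)
    return lit / len(samples)

# 5 samples across an area light at y = 4, x in [-2, 2]; P at the origin.
samples = [(-2 + k, 4) for k in range(5)]   # x = -2, -1, 0, 1, 2
print(shadow_fraction((0, 0), samples))     # 0.4, i.e. the 2/5 example
```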
22. Ray-tracing Shadows
Sounds great; what's the problem?
- Slow
  - Intersection tests are (relatively) expensive
  - May be sped up with standard ray-tracing acceleration techniques
- Shadow feeler may incorrectly intersect the object touching P
  - Depth bias
  - Object tagging
    - Don't intersect the shadow feeler with the object touching P
    - Works only for objects not requiring self-shadowing
23. Ray-tracing Shadows
- How do we use the shadow feelers?
- 2 different rendering methods
- Standard ray-casting with shadow feelers
- Hardware Z-buffered rendering with shadow feelers
24. Ray-tracing Shadows
Ray-casting with shadow feelers
- For each pixel
  - Trace a ray from the eye through the pixel center
  - Compute the closest object intersection point P along the ray
  - Calculate Shadow_i for the point by performing the shadow-feeler intersection test
  - Calculate the illumination at point P
25. Ray-tracing Shadows
Z-buffering with shadow feelers
- Render the scene into the depth-buffer (no need to compute color)
- For each pixel, determine if it is in shadow
  - Unproject the screen-space pixel point to transform it into eye space
  - Perform the shadow-feeler test with the light in eye space to compute Shadow_i
  - Store Shadow_i for each pixel
- Light the scene using the per-pixel Shadow_i values
26. Ray-tracing Shadows
Z-buffering with shadow feelers: how do we use per-pixel Shadow_i values to light the scene?
- Method 1: compute lighting at each pixel in software
  - Deferred shading
  - Requires object surface info (normals, materials)
  - Could use a more complex lighting model
27. Ray-tracing Shadows
Z-buffering with shadow feelers: how do we use per-pixel Shadow_i values to light the scene?
- Method 2: use graphics hardware
  - For point lights
    - Shadow_i values are either 0 or 1
    - Use the stencil buffer; stencil values = Shadow_i values
    - Re-render the scene with the corresponding light on, using the stencil test to write only into lit pixels (stencil = 1). Use additive blending, and render the ambient-lit scene during the depth-computation pass.
  - For area lights
    - Shadow_i values are continuous in [0,1]
    - Multiple passes and modulation blending
    - Pixel contribution = Ambient_i + Shadow_i * (Diffuse_i + Specular_i)
28. Ray-tracing Shadows
Properties
- Point, directional, and area light sources
- Direct light (may be generalized to indirect)
- Opaque (thin-film transparency easily handled)
- All types of geometry (just need an edge intersection test)
- Hybrid: object precision (line intersection), image precision for generating pixel rays
- Slow, but many acceleration techniques are available
- General shadow algorithm
- Computed during illumination (additive, but subtractive is possible)
- Simple to implement
29. Planar Projection Shadows
- Shadows cast by objects onto planar surfaces
- Brute force: project shadow-casting objects onto the plane and draw each projected object as a shadow
[Diagrams: directional light (parallel projection); point light (perspective projection)]
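The brute-force projection for a point light amounts to intersecting the ray from the light through each vertex with the receiver plane. A sketch for the plane y = 0, with illustrative coordinates:

```python
# Brute-force planar projection for a point light: each vertex is projected
# along the ray from the light through the vertex onto the plane y = 0.

def project_to_ground(p, light):
    """Project point p from point light `light` onto the plane y = 0."""
    s = light[1] / (light[1] - p[1])   # ray parameter where y reaches 0
    return tuple(light[i] + s * (p[i] - light[i]) for i in range(3))

light = (0.0, 4.0, 0.0)
print(project_to_ground((1.0, 2.0, 0.0), light))  # (2.0, 0.0, 0.0)
```

A directional light replaces the per-vertex ray with a single fixed projection direction (parallel projection).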
30. Planar Projection Shadows
- Not sufficient
  - co-planar polygons (Z-fighting): depth bias
  - requires clipping to the relevant portion of the plane (the shadow receiver): stenciling
31. Planar Projection Shadows: Better Approach, Subtractive Strategy
- Render the scene fully lit by a single light
- For each planar shadow receiver
  - Render the receiver: stencil the pixels covered
  - Render the projected shadow casters in a shadow color with depth testing on, depth biasing (offset from the plane), modulation blending, and stenciling (to write only on the receiver and to avoid double pixel writes)
    - Receiver stencil value = 1; write only where the stencil equals 1, and change it to zero after modulating a pixel
[Image: the receiver's texture remains visible in shadow]
32. Planar Projection Shadows: Problems with the Subtractive Strategy
- Called subtractive because it begins with full lighting and removes light in shadows (modulates)
- Can be more efficient than additive (avoids passes)
- Not as accurate as additive; doesn't follow the lighting model
  - Specular and diffuse components appear in shadow
  - Modulates the ambient term
  - The shadow color is chosen by the user, as opposed to the correct version
33. Planar Projection Shadows: Even Better Approach, Additive Strategy
- Draw the ambient-lit shadow-receiving scene (global ambient and all lights' local ambient)
- For each light source, for each planar receiver
  - Render the receiver: stencil the pixels covered
  - Render the projected shadow casters into the stenciled receiver area: depth testing on, depth biasing, stencil the pixels covered by shadow
  - Re-render the receiver lit by the single light source (no ambient light): depth test set to EQUAL, additive blending, writing only into stenciled areas on the receiver that are not in shadow
- Draw the shadow-casting scene with full lighting
34. Planar Projection Shadows: Properties
- Point or directional light sources
- Direct light
- Opaque objects (could fake transparency using subtractive)
- Polygonal shadow-casting objects, planar receivers
- Object precision
- Number of passes (L = num lights, P = num planar receivers)
  - subtractive: 1 fully-lit pass, LP special passes (no lighting)
  - additive: 1 ambient-lit pass, 2LP receiver passes, LP caster passes
35. Planar Projection Shadows: Properties
- Can take advantage of static components
  - static objects + lights: precompute the silhouette polygon from the light source
  - static objects + viewer: precompute the first pass over the entire scene
- Visibility from the light is handled by the user (must choose casters and receivers)
- No self-shadowing (relies on local illumination)
- Both subtractive and additive strategies presented
- Conceptually simple, surprisingly difficult to get right
  - gives the techniques needed to handle more sophisticated multi-pass methods
36. Shadow Volumes: What Are They?
Volume of space in shadow of a single occluder with respect to a point light source. OR: volume of space swept out by extruding an occluding polygon away from a point light source along the projector rays originating at the point light and passing through the vertices of the polygon.
[Diagram: a point light, an occluding triangle, and the 3D shadow volume extruded behind it]
37. Shadow Volumes: How Do You Use Them?
- Parity test to see if a point P on a visible surface is in shadow
  - Initialize parity to 0
  - Shoot a ray from the eye to point P
  - Each time a shadow-volume boundary is crossed, invert the parity
  - If parity = 1, P is in shadow; if parity = 0, P is lit
- What are some potential problems?
[Diagram: rays from the eye toward surfaces near a point light and an occluder, with the parity value flipping at each shadow-volume boundary crossing]
38. Shadow Volumes: Problems with the Parity Test
- Eye inside a shadow volume
  - Incorrectly shadows points (reversed parity)
- Self-shadowing of visible occluders
  - Should a point on the occluder flip the parity? (consistent if not flipped)
  - A point on the occluder should not flip the parity
  - Touching the boundary is not counted as a crossing
- Multiple overlapping shadow volumes
  - Incorrectly shadows points (incorrect parity)
  - Is parity's binary condition sufficient?
39. Shadow Volumes: Solutions to the Parity Test Problems
- Eye inside a shadow volume
  - Initialize parity to 0 when starting outside and 1 when inside
- Self-shadowing of visible occluders
  - Do not flip parity when viewing the inside of an occluder
  - Do not flip parity when viewing the outside of an occluder either
- Multiple overlapping shadow volumes
  - A binary parity value is not sufficient; we need a general counter for boundary crossings: +1 entering a shadow volume, -1 exiting
40. Shadow Volumes: A More General Solution
- Determine if point P is in shadow
  - Initialize the boundary-crossing counter to the number of shadow volumes containing the eye point. Why? Because the ray must leave this many shadow volumes to reach a lit point
  - Along the ray, increment the counter each time a shadow volume is entered, decrement it each time one is exited
  - If the counter is > 0, P is in shadow
- Special case when P is on an occluder
  - Do not increment or decrement the counter
  - A point on the boundary does not count as a crossing
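The generalized counter can be sketched as a walk over crossings along the eye ray; the (t, delta) crossing list is an illustrative stand-in for actual ray/boundary intersections:

```python
# Generalized shadow-volume count along the eye ray: start from the number
# of volumes containing the eye, add +1 for each volume entered and -1 for
# each exited before reaching P; P is shadowed if the count stays positive.

def point_in_shadow(volumes_containing_eye, crossings, t_p):
    """crossings: sorted (t, delta) pairs along the ray; t_p: distance to P."""
    count = volumes_containing_eye
    for t, delta in crossings:
        if t >= t_p:          # crossings beyond P do not matter
            break
        count += delta
    return count > 0

# Eye outside all volumes; two overlapping volumes entered, one exited.
crossings = [(1.0, +1), (2.0, +1), (3.0, -1)]
print(point_in_shadow(0, crossings, 4.0))  # True: counter is 1 at P
print(point_in_shadow(0, crossings, 0.5))  # False: no crossings before P
```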
41. Shadow Volumes: More Examples
- Can you calculate the final boundary count for these visible points?
[Diagram: eye rays crossing several overlapping shadow volumes]
42. Shadow Volumes: More Examples
- Can you calculate the final boundary count for these visible points?
[Diagram: the same rays annotated with +1 at each entry, -1 at each exit, and the final counts at each visible point]
43. Shadow Volumes: How Do We Use This Information to Find Shadow Pixels?
- Could just use ray-casting (a ray through each pixel)
  - Too slow; possibly more primitives to intersect with
- Could use silhouettes of complex objects to simplify shadow volumes
[Diagram: per-pixel boundary counts for a ray-cast view of the shadow volumes]
44. Shadow Volumes: Using Standard Graphics Hardware
- Simple observations
  - For convex occluders, shadow volumes form a convex shape
  - Enter through front-facing shadow-volume boundaries; exit through back-facing ones
[Diagram: per-pixel counts accumulated from front- and back-facing boundary crossings]
45. Shadow Volumes: Using Standard Graphics Hardware
- Use standard Z-buffered rendering and the stencil buffer (8 bits) to calculate the boundary count for each pixel
- Create shadow volumes for each occluding object (should be convex)
- Render the ambient-lit scene, keeping the depth values
- For each light source
  - Initialize stencil values to the number of volumes containing the eye point
  - Keep using the Z-buffer depth test (strictly less-than), but with no depth update
  - Render the front-facing shadow-volume boundary polygons, incrementing stencil values for all covered pixels that pass the depth test
  - Render the back-facing boundary polygons, but decrement the stencil
  - Pixels with a stencil value of zero are lit; re-render the scene with lighting on (no ambient, depth test set to EQUAL)
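The stencil arithmetic for one pixel can be mocked in software: front-facing boundary fragments that pass the strict depth test increment the count, back-facing ones decrement it. Fragment lists and depths are illustrative:

```python
# Software mock of the per-pixel stencil pass: the stencil starts at the
# number of volumes containing the eye; each boundary fragment nearer than
# the visible surface adjusts it (+1 front-facing, -1 back-facing).

def stencil_count(eye_volumes, fragments, surface_depth):
    """fragments: list of (depth, facing) with facing 'front' or 'back'."""
    stencil = eye_volumes
    for depth, facing in fragments:
        if depth < surface_depth:            # strictly less-than depth test
            stencil += 1 if facing == "front" else -1
    return stencil

# One volume entered and not exited before the visible surface -> shadowed.
frags = [(0.2, "front"), (0.4, "front"), (0.6, "back"), (0.9, "back")]
print(stencil_count(0, frags, 0.8))  # 1 -> in shadow
print(stencil_count(0, frags, 0.1))  # 0 -> lit
```

In hardware, these increments and decrements are the stencil operations applied while rasterizing the front- and back-facing boundary polygons.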
46. Shadow Volumes: Using Standard Graphics Hardware, Step-by-Step
- Create shadow volumes
- Initialize stencil buffer values to the number of volumes containing the eye
Per-pixel stencil values are initially 0
47. Shadow Volumes: Using Standard Graphics Hardware, Step-by-Step
- Render the ambient-lit scene
- Store the Z-buffer
- Set the depth test to strictly less-than
48. Shadow Volumes: Using Standard Graphics Hardware, Step-by-Step
- Render front-facing shadow-volume boundary polygons
  - Why front faces first? Unsigned stencil values
- Increment stencil values for covered pixels that pass the depth test
49. Shadow Volumes: Using Standard Graphics Hardware, Step-by-Step
- Render back-facing shadow-volume boundary polygons
- Decrement stencil values for covered pixels that pass the depth test
50. Shadow Volumes: Using Standard Graphics Hardware, Step-by-Step
- Pixels with a stencil value of zero are lit
- Set the depth test to strictly equals
- Re-render the lit scene with no ambient into the lit pixels
51. Shadow Volumes: More Potential Problems
- Lots o' geometry!
  - Only create volumes for shadow-casting objects (approximation)
  - Use only silhouettes
- Lots o' fill!
  - Reduce geometry
  - Have a good max distance
  - Clip to the view-volume
  - Near-plane clipping
52. Shadow Volumes: Properties
- Point or directional light sources
- Direct light
- Opaque objects (could fake transparency using subtractive)
- Restricted to polygonal objects (could be generalized)
- Hybrid: object precision in the creation of shadow volumes, image precision in the per-pixel stencil evaluation
- Number of passes (L = num lights, N = number of tris)
  - additive: 1 ambient-lit, 3NL shadow-volume, 1 fully-lit
  - subtractive: 1 fully-lit, 3NL shadow-volume, 1 image pass (modulation)
- Could be made faster by silhouette simplification and by hand-picking shadow casters and receivers
53. Shadow Volumes: Properties
- Can take advantage of static components
  - static objects + lights: precompute shadow volumes from light sources
  - static objects + viewer: precompute the first pass over the entire scene
- General shadow algorithm, but could be restricted for more speed
- Both subtractive and additive strategies presented
54. Projective Texture Shadows: What Are Projective Textures?
- Texture-maps that are mapped to a surface through a projective transformation of the vertices into the texture's camera space
55. Projective Texture Shadows: How Do We Use Them to Create Shadows?
- Project a modulation image of the shadow-casting objects from the light's point of view onto the shadow-receiving objects
[Images: the light's point of view; the shadow projective texture (modulation image, or light-map); the eye's point of view with the projective texture applied to the ground plane (self-shadowing is from another algorithm)]
56. Projective Texture Shadows: More Details
- Fast, subtractive method
- For each light source
  - Create a light camera that encloses the shadowed area
  - Render shadow-casting objects into the light's view
    - only need to create a light map (1 in light, 0 in shadow)
  - Create a projective texture from the light's view
  - Render fully-lit shadow-receiving objects with the modulation projective textures applied (additive blending is needed for all light sources except the first)
  - Render fully-lit shadow-casting objects
57. Projective Texture Shadows: More Examples
Cast shadows from complex objects onto complex objects in only 2 passes over the shadow casters and 1 pass over the receivers (for 1 light).
Lighting for shadowed objects is computed independently for each light source and summed into a final image.
Colored light sources: lit areas are modulated by a value of 1, and shadow areas can be any ambient modulation color.
58. Projective Texture Shadows: Problems
- Does not use visibility information from the light's view
  - Objects must be depth-sorted
  - Parts of an object that are not visible from the light also have the projective texture applied (ambient light appears darker on shadow-receiving objects)
- Receiving objects may already be textured
  - Typically, only one texture can be applied to an object at a time
59. Projective Texture Shadows: Solutions (well, sort of...)
- Does not use visibility information from the light's view
  - User selects shadow casters and receivers
    - Casters can be receivers, and receivers can be casters
    - Must create and apply projective textures in front-to-back order from the light
  - Darker ambient lighting is accepted; finding these regions requires a more general shadow algorithm
- Receiving objects may already be textured
  - Use two passes: first apply the base texture, then apply the projective texture with modulation blending
  - Use multi-texture: this is what it is for! Avoids passes over the geometry!
60. Projective Texture Shadows: Properties
- Point or directional light sources
- Direct light (fake transparency with different modulation colors)
- All types of geometry (depends on the rendering system)
- Image precision (image-based)
- For each light: 2 passes over shadow-casting objects (1 to create the modulation image, 1 with full lighting), 1 pass over shadow-receiving objects (fully lit with the projective texture)
  - More passes will be required for shadow-casting objects that are already textured
- Benefits mostly from a static scene (precompute shadow textures)
- User must partition objects into casters and receivers (casters could be receivers and vice versa)
61. Projective Texture Shadows: How Do We Apply Projective Textures?
- All points on the textured surface must be mapped into the texture's camera space (projective transformation)
- The position on the texture camera's viewplane window maps into the 2D texture-map
How can this be done efficiently? With a slight modification to perspectively-correct texture-mapping
62. Projective Texture Shadows: Perspectively-incorrect Texture-mapping
- Relies on interpolating screen-space values along the projected edge
- Vertices after perspective transformation and perspective divide: (x, y, z, w) -> (x/w, y/w, z/w, 1)
63. Projective Texture Shadows: Perspectively-correct Texture-mapping
- Add a 3D homogeneous coordinate to the texture-coords: (s, t, 1)
- Divide all vertex components by w after the perspective transformation
- Interpolate all values, including 1/w
- Obtain perspectively-correct texture-coords (s, t) by applying another homogeneous normalization (divide the interpolated s/w and t/w terms by the interpolated 1/w term)
The final perspectively-correct values are obtained by normalizing the homogeneous texture-coords
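The divide-by-interpolated-1/w step can be demonstrated numerically; the endpoint values below are illustrative:

```python
# Perspectively-correct interpolation: s/w, t/w, and 1/w are interpolated
# linearly in screen space, then s and t are recovered by dividing by the
# interpolated 1/w term.

def lerp(a, b, f):
    return a + f * (b - a)

def perspective_correct_st(v0, v1, f):
    """v0, v1: (s, t, w) at the edge endpoints; f: screen-space fraction."""
    s_over_w = lerp(v0[0] / v0[2], v1[0] / v1[2], f)
    t_over_w = lerp(v0[1] / v0[2], v1[1] / v1[2], f)
    one_over_w = lerp(1 / v0[2], 1 / v1[2], f)
    return (s_over_w / one_over_w, t_over_w / one_over_w)

# Midpoint of an edge with w = 1 and w = 3: the correct s is biased toward
# the near (small-w) endpoint, not the naive screen-space average 0.5.
print(perspective_correct_st((0, 0, 1), (1, 1, 3), 0.5))  # about (0.25, 0.25)
```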
64. Projective Texture Shadows: Projective Texture-mapping
- Texture-coords become 4D, just like vertex coords: (x, y, z, w) -> (s, t, r, q)
- A full 4x4 matrix transformation is applied to the texture-coords
- Projective transformations are also allowed; another perspective divide is needed for the texture-coords
- Vertices, homogeneous space to screen space: (x, y, z, w) -> (x/w, y/w, z/w)
- Texture-coords, homogeneous space to texture space: (s, t, r, q) -> (s/q, t/q, r/q)
- Requires another per-vertex transformation, but the per-pixel work is the same as in perspectively-correct texture-mapping (Segal92)
65. Projective Texture Shadows: Projective Texture-mapping
- Given vertex v, corresponding texture-coords t, and two 4x4 matrix transformations M and T (M = composite modeling, viewing, and projection transformations; T = texture-coords transformation matrix)
- Each vertex is represented as Mv, Tt: [x y z w] [s t r q]
- Transformed into screen space through a perspective divide of all components by w: [x y z w s t r q] -> [x/w y/w z/w s/w t/w r/w q/w]
- All values are linearly interpolated along edges (across the polygon face)
- Perform a per-pixel homogeneous normalization of the texture-coords by dividing by the interpolated q/w value: [x y z s t r] = [x/w y/w z/w (s/w)/(q/w) (t/w)/(q/w) (r/w)/(q/w)]
- Same as perspectively-correct texture-mapping, but instead of dividing by interpolated 1/w, divide by interpolated q/w (Segal92)
66. Projective Texture Shadows: Projective Texture-mapping
The final perspectively-correct values are obtained by normalizing the homogeneous texture-coords
67. Projective Texture Shadows: Projective Texture-mapping
- So how do we actually use this to apply the shadow texture?
- Use the vertex's original coords as the texture-coords
- Texture transformation: T = LightProjection * LightViewing * Modeling (the normal modeling transformation)
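Applying T to a vertex and dividing by q yields the shadow-texture lookup coordinates. A sketch with an illustrative orthographic "light" matrix (a stand-in for the LightProjection * LightViewing product, mapping x, y in [-2, 2] to [0, 1]):

```python
# Generating shadow texture-coords from a vertex position: the texture
# matrix T maps the vertex to (s, t, r, q); dividing by q gives the 2D
# lookup position plus a depth value r/q.

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Illustrative T for a directional light: scale-and-bias x, y from
# [-2, 2] into [0, 1], pass depth (z) through, q stays 1.
T = [
    [0.25, 0.0, 0.0, 0.5],
    [0.0, 0.25, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

s, t, r, q = mat_vec(T, [1.0, -1.0, 0.5, 1.0])  # vertex's original coords
print((s / q, t / q, r / q))  # (0.75, 0.25, 0.5): texture coords + depth
```

A point light would put a perspective projection in T, making q vary per vertex, which is exactly why the extra q divide exists.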
68. Shadow-Maps: for Accelerating Ray-traced Shadow Feelers
- Previously, shadow feelers had to be intersected against all objects in the scene
- What if we knew the nearest intersection point for all rays leaving the light?
- The depth-buffer of the scene rendered from a camera at the light gives us a discretized version of this
- This depth-buffer is called a shadow-map
- Instead of intersecting rays with objects, we intersect the ray with the light viewplane and look up the nearest depth value
- If the light's depth value at this point is less than the depth to the eye-ray's nearest intersection point, then this point is in shadow!
[Diagram: L is the light-ray's nearest intersection point, E the eye-ray's nearest intersection point; if L is closer to the light than E, then E is in shadow]
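The comparison can be sketched with a 1D "depth buffer" from the light; the map contents and the small depth bias (which the later self-shadowing slides motivate) are illustrative:

```python
# Shadow-map test sketch: the map stores the nearest depth per light-space
# texel; an eye-visible point is in shadow if something nearer to the light
# was recorded at its texel.

def shadow_map_test(shadow_map, texel, point_depth, bias=1e-3):
    """True if a nearer occluder was recorded at this texel (point shadowed)."""
    return shadow_map[texel] < point_depth - bias

# Depths from the light per texel (smaller = closer to the light).
shadow_map = [5.0, 2.0, 5.0, 5.0]

print(shadow_map_test(shadow_map, 1, 4.0))  # True: occluder at depth 2 blocks
print(shadow_map_test(shadow_map, 0, 5.0))  # False: this point IS the nearest
```

Without the bias, the second case would be a coin toss on floating-point error, which is the self-shadowing artifact discussed later.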
69. Shadow-Maps: for Accelerating Ray-traced Shadow Feelers
- Cool, we can really speed up ray-traced shadows now!
- Render from the eye view to accelerate first-hit ray-casting
- Render from the light view to store first hits from the light
- For each pixel-ray in the eye's view, we can project the first-hit point into the light's view and check if anything intersects the shadow feeler with a simple table lookup!
- The shadow-map is discretized, but we can just use the nearest value
- What are the potential problems?
70. Shadow-Maps: Problems with Ray-traced Shadow Maps
- Still too slow
  - requires many per-pixel operations
  - does not take advantage of pixel coherence in the eye view
- Still has the self-shadowing problem
  - needs a depth bias
- Discretization error
  - Using the depth value nearest to the projected point may not be sufficient
  - How can we filter the depth values? The standard way does not really make sense here
71. Shadow-Maps: a Faster Way, the Standard Shadow-map Approach
- Not normally used as a ray-tracing acceleration technique; normally used in a standard Z-buffered graphics system
- Two methods presented (Williams78)
  - Subtractive: post-processing on the final lit image (like full-scene image warping)
  - Additive: as implemented in graphics hardware (OpenGL extension on InfiniteReality)
72. Shadow-Maps: Illustration of the Basic Idea
[Images: shadow-map from light 1; shadow-map from light 2; final view]
73. Shadow-Maps: Subtractive
- Render the fully-lit scene
- Create the shadow-map: render depth from the light's view
- For each pixel in the final image
  - Project the point at each pixel from eye screen-space into light screen-space (keep the eye-point depth De)
  - Look up the light depth value Dl
  - Compare depth values: if Dl < De, the eye-point is in shadow
  - Modulate if the point is in shadow
74. Shadow-Maps: Subtractive Advantages
- Constant-time shadow computation!
  - just like full-scene image-warping: eye-view pixels are warped to the light view and then a depth comparison is performed
- Only a 2-pass algorithm
  - 1 eye pass, 1 light pass (and 1 constant-time image-warping pass)
- Deferred shading (for the shadow computation)
Zhang98 presents a similar approach using forward mapping (from light to eye, reversing this whole process)
75. Shadow-Maps: Subtractive Disadvantages
- Not as accurate as additive (same reasons)
  - Specular and diffuse components appear in shadow
  - Modulates the ambient term
- Has the standard shadow-map problems
  - Self-shadowing: depth bias needed
  - Depth sampling error: how do we accurately reconstruct depth values from a point sampling?
76. Shadow-Maps: Additive
- Create the shadow-map: render depth from the light's view
- Use the shadow-map as a projective texture!
- While scan-converting triangles
  - apply the shadow-map projective texture
  - instead of modulating with the looked-up depth value Dl, compare that value against the r-value (De) of the transformed point on the triangle
  - Compare De to Dl: if Dl < De, the eye-point is in shadow
Basically, we scan-convert the triangle in both eye and light spaces simultaneously and perform a depth comparison in light space against the previously stored depth values
77. Shadow-Maps: Additive Advantages
- Easily implemented in hardware
  - only a slight change to standard perspectively-correct texture-mapping hardware: add an r-component compare op
- Fastest, most general implementation to date!
- As fast as projective textures, but general!
78. Shadow-Maps: Additive Disadvantages
- Computes shadows on a per-primitive basis
  - All pixels covered by all primitives must go through the shadowing and lighting operations whether visible or not (no deferred shading)
- Still has the standard shadow-mapping problems
  - Self-shadowing
  - Depth sampling error
79. Shadow-Maps: Solving the Main Problems, Self-shadowing
- Use a depth bias during the transformation into light space
  - Add a z-translation towards the light source after transforming from eye to light space, OR
  - Add a z-translation towards the eye before transforming into light space, OR
  - Translate the eye-space point along the surface normal before transforming into light space
80. Shadow-Maps
- Solving the main problems: depth sampling
- Could just use the nearest sample, but how would you anti-alias depth?
81. Shadow-Maps: Depth Sampling, Normal Filtering
- Averaging depth doesn't really make sense (unrelated to the surface, especially at shadow boundaries!)
- Still a binary result (no anti-aliased, softer shadows)
82. Shadow-Maps: Depth Sampling, Percentage-closer Filtering (Reeves87)
- Average the binary results over all covered depth-map pixels
- Soft, anti-aliased shadows
- Very similar to point-sampling across an area light source in ray-traced shadow computation
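Percentage-closer filtering can be sketched directly: compare the point's light-space depth against each shadow-map sample in a neighborhood, then average the binary results (never the depths themselves). The map contents, filter radius, and bias are illustrative:

```python
# Percentage-closer filtering sketch: average the BINARY depth-comparison
# results over a neighborhood of shadow-map texels, giving a soft fraction
# in [0, 1] instead of a hard 0/1 shadow decision.

def pcf(shadow_map, x, y, point_depth, radius=1, bias=1e-3):
    """Fraction of nearby texels whose stored depth occludes the point."""
    hits, total = 0, 0
    for j in range(y - radius, y + radius + 1):
        for i in range(x - radius, x + radius + 1):
            if 0 <= j < len(shadow_map) and 0 <= i < len(shadow_map[0]):
                total += 1
                if shadow_map[j][i] < point_depth - bias:
                    hits += 1
    return hits / total   # 0 = fully lit, 1 = fully shadowed

# A shadow edge: the left half of the map holds a near occluder (depth 2).
sm = [[2.0, 2.0, 9.0, 9.0]] * 4
print(pcf(sm, 1, 1, 5.0))  # about 0.67: partially shadowed along the edge
```

Averaging the comparison results, rather than the depths, is what makes the filter meaningful at shadow boundaries.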
83. Shadow-Maps: How Do You Choose the Samples?
The quadrilateral represents the area covered by a pixel's projection onto a polygon after being projected into the shadow-map
84. Scanline Algorithms: Classic, by Bouknight and Kelley
- Project the edges of shadow-casting triangles onto receivers
- Use a shadow-volume-like parity test during scanline rasterization
85. Area-Subdivision Algorithms: Based on Atherton-Weiler Clipping
- Find the actual visible polygon fragments (geometrically) through a generalized clipping algorithm
- Create a model composed of shadowed and lit polygons
- Render as surface-detail polygons
86. Area-Subdivision Algorithms: Based on Atherton-Weiler Clipping
87. Multiple Light Sources: for Any Single-light Algorithm
- Accumulate all fully-lit single-light images into a single image through a summing blend op (standard accumulation buffer or blending operations)
- The global ambient-lit scene should be added in separately
- Very easy to implement
- Could be inefficient for some algorithms
- Use the higher accuracy of the accumulation buffer (usually 12 bits per color component)
88. Area Light Sources: for Any Point-light Algorithm
- Soft or fuzzy shadows (penumbra)
- Some algorithms have some natural support for these
- For restricted algorithms, we can always sample the area light source with many point light sources: jitter and accumulate
  - Very expensive: many high-quality passes to obtain something fuzzy
  - Not really feasible in most interactive applications
- Convolution and image-based methods are usually more efficient here
89. Backwards Ray-tracing
90. Radiosity
91. References
- Appel, A. "Some Techniques for Shading Machine Renderings of Solids," Proc. AFIPS SJCC, Vol. 32, 1968, pgs 37-45.
- Arvo, J. "Backward Ray Tracing," in A.H. Barr, ed., Developments in Ray Tracing, Course Notes 12 for SIGGRAPH 86, Dallas, TX, August 18-22, 1986.
- Atherton, P.R., Weiler, K., and Greenberg, D. "Polygon Shadow Generation," SIGGRAPH 78, pgs 275-281.
- Bergeron, P. "A General Version of Crow's Shadow Volumes," IEEE CG&A, 6(9), September 1986, pgs 17-28.
- Blinn, Jim. "Jim Blinn's Corner: Me and My (Fake) Shadow," IEEE CG&A, vol 8, no 1, Jan 1988, pgs 82-86.
- Bouknight, W.J. "A Procedure for Generation of Three-Dimensional Half-Toned Computer Graphics Presentations," CACM, 13(9), September 1970, pgs 527-536. Also in FREE80, pgs 292-301.
- Bouknight, W.J. and Kelly, K.C. "An Algorithm for Producing Half-Tone Computer Graphics Presentations with Shadows and Movable Light Sources," SJCC, AFIPS Press, Montvale, NJ, 1970, pgs 1-10.
- Chin, N., and Feiner, S. "Near Real-Time Shadow Generation Using BSP Trees," SIGGRAPH 89, pgs 99-106.
92. References
- Cohen, M.F., and Greenberg, D.P. "The Hemi-Cube: A Radiosity Solution for Complex Environments," SIGGRAPH 85, pgs 31-40.
- Cook, R.L. "Shade Trees," SIGGRAPH 84, pgs 223-231.
- Cook, R.L., Porter, T., and Carpenter, L. "Distributed Ray Tracing," SIGGRAPH 84, pgs 127-145.
- Crow, Frank. "Shadow Algorithms for Computer Graphics," SIGGRAPH 77.
- Goldstein, R.A. and Nagel, R. "3-D Visual Simulation," Simulation, 16(1), January 1971, pgs 25-31.
- Goral, C.M., Torrance, K.E., Greenberg, D.P., and Battaile, B. "Modeling the Interaction of Light Between Diffuse Surfaces," SIGGRAPH 84, pgs 213-222.
- Gouraud, H. "Continuous Shading of Curved Surfaces," IEEE Trans. on Computers, C-20(6), June 1971, pgs 623-629. Also in FREE80, pgs 302-308.
- Hourcade, J.C. and Nicolas, A. "Algorithms for Antialiased Cast Shadows," Computers & Graphics, 9, 3 (1985), pgs 259-265.
- Nishita, T. and Nakamae, E. "An Algorithm for Half-Tone Representation of Three-Dimensional Objects," Information Processing in Japan, Vol. 14, 1974, pgs 93-99.
- Nishita, T., and Nakamae, E. "Continuous Tone Representation of Three-Dimensional Objects Taking Account of Shadows and Interreflection," SIGGRAPH 85, pgs 23-30.
93. References
- Reeves, W.T., Salesin, D.H., and Cook, R.L. "Rendering Antialiased Shadows with Depth Maps," SIGGRAPH 87, pgs 283-291.
- Segal, M., Korobkin, C., van Widenfelt, R., Foran, J., and Haeberli, P. "Fast Shadows and Lighting Effects Using Texture Mapping," Computer Graphics, 26, 2, July 1992, pgs 249-252.
- Warnock, J. "A Hidden-Surface Algorithm for Computer Generated Half-Tone Pictures," Technical Report TR 4-15, NTIS AD-753 671, Computer Science Department, University of Utah, Salt Lake City, UT, June 1969.
- Whitted, T. "An Improved Illumination Model for Shaded Display," CACM, 23(6), June 1980, pgs 343-349.
- Williams, L. "Casting Curved Shadows on Curved Surfaces," SIGGRAPH 78, pgs 270-274.
- Woo, Andrew, Pierre Poulin, and Alain Fournier. "A Survey of Shadow Algorithms," IEEE CG&A, Nov 1990, pgs 13-32.
- Zhang, H. "Forward Shadow Mapping," Rendering Techniques '98, Proceedings of the 9th Eurographics Rendering Workshop.
94. Acknowledgements
- Mark Kilgard (nVidia) for various pictures from presentation slides (www.opengl.org)
- Advanced OpenGL Rendering course notes (www.opengl.org)