Title: Image-Space Rendering Techniques
1. Image-Space Rendering Techniques
- Daniel Baker
- Software Design Engineer
- Direct3D Team
- Microsoft Corporation
2. Rendering to Textures
- Once upon a time, rendering to textures was unreliable and slow. Now it is a viable technique.
- With high-precision textures and render targets, a whole new class of techniques is becoming available.
- Need to stop thinking of the hardware as a simple rasterizer, and more as a giant DSP.
3. Rendering Functions
- Textures don't have to contain color information at all.
- Alpha, for instance, is traditionally used as a translucency factor.
- This is really just another name for a parameter to some kind of lerp.
- What if alpha were a parameter to a different kind of function?
4. Rendering Functions, cont.
- What happens if we render the depth into the alpha channel? What can we do with that?
- To render depth, create a vertex shader that computes the vertex's depth from the camera and passes it to the pixel shader.
- The pixel shader then writes it out.
5. How to Write Functions Instead of Colors

vs.1.1
dcl_position v0
// c8 is the world-view matrix; r0 is the point in view space
m4x4 r0, v0, c8
// scale and bias with some user-defined tweaks
mad r0.w, r0.w, c12.x, c12.y
// load the depth into the diffuse color
mov oD0.xyzw, r0.w
// transform the point; c0 is the world-view-projection matrix
m4x4 oPos, v0, c0
6. Rendering Depth
- The depth rendered as an alpha, applied to a bevel. Imagine our alpha is high enough precision not to show artifacts.
7. Per-Pixel Depth Operations
- If we render the scene into a texture and then re-render it, we can perform more complex operations.
- Example: we could lerp each pixel toward a fog color, so pixels that are far away fade out (see the sketch below).
- This is distance fog.
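A minimal sketch of that screen-space fog pass, under assumptions of my own (not from the slides): the scene texture is bound at stage 0 with camera depth stored in alpha, and the fog color sits in constant c0.

ps.1.1
tex t0                  // scene color in rgb, camera depth in alpha (assumed layout)
lrp r0, t0.a, c0, t0    // far pixels (alpha near 1) move toward the fog color in c0

The pass just draws a full-screen quad with this shader over the rendered scene texture.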
8. Screen-Space Pixel Shaders
- Can think of this process as screen-space pixel shaders.
- We render the scene into one or more buffers, and then mix them together to make our complete scene.
9. Example 1: Depth of Field
- Many computer images look too real.
- A large part of this is because real cameras can't focus on everything in the scene at once.
- Part of the scene is in focus, part isn't.
- Can be used as a cinematographic effect.
- How do we emulate this?
10. Understanding the Focal Plane
- Only objects near the focal plane are in focus.
The focal plane depends on the lens settings of
the camera.
11. Example
- One of these renderings is out of focus, the other is in focus.
12. Depth of Field, Continued
- Instead of putting the Z value in the alpha, we put the distance from the focal plane (see the encoding below).
- This value can be used to blur each pixel.
- Pixels that are out of focus should be blurred, while pixels that are in focus should remain relatively untouched.
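One way to encode that value, matching the vertex shader shown later in the deck (s is a user-chosen scale and bmax a user-chosen maximum blurriness, both just tuning constants):

    b = min( (s * dot(P, focalPlane))^2 , bmax )

where P is the view-space position and focalPlane is the plane equation of the focal plane; squaring makes blurriness grow on either side of the plane.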
13. Objects in Focus and Out of Focus
[Images: Original, Alpha of Original, Blurred Result]
- The images on the left are the original. The center is the alpha map: black is in focus, white is out of focus. We can move the focal plane anywhere we like.
14. How Do We Blur?
- There are several ways we can blur.
- A common method is to render the scene at a lower resolution and lerp between the high-resolution and low-resolution versions.
- This requires an extra pass and causes blocky artifacts.
15. Blurring a Texture
- Blurring is like filtering.
- To generate a mip level, we can take the four pixels from the previous level and average them.
- For 1.1 pixel shaders, we need to accomplish this reduction with only 3 texture samples, since we'll need the fourth for the high-resolution sample.
16. Triangle Filtering
- Can pull 4 samples from the same texture. Try linear or point sampling.
- We do this in a triangular pattern, with the center sample being the high-resolution one (a sketch of the vertex shader for this pass follows).
[Diagram: taps 1-4 arranged in a triangular pattern; the offsets are on the order of 1/(texture width)]
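A minimal sketch of the full-screen pass's vertex shader under assumptions of my own (not from the slides): the quad is already in clip space, v4 carries the base texture coordinate, and constants c10-c12 hold the three triangular offsets, each roughly 1/(texture width) in magnitude.

vs.1.1
dcl_position v0
dcl_texcoord0 v4
mov oPos, v0        // full-screen quad assumed to be pre-transformed
mov oT0, v4         // center tap (high-resolution sample)
add oT1, v4, c10    // offset tap 1
add oT2, v4, c11    // offset tap 2
add oT3, v4, c12    // offset tap 3

The same texture is bound at stages 0-3, so the pixel shader receives the center tap plus the three offset taps.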
17. Doing the Blend
- The alpha value of the image is the out-of-focus value that was rendered when we drew the scene.
- To blur a pixel, we lerp between the center (high-resolution) value and the 3 offset samples (which make up the low-resolution, blurred value); the worked equations below show one reading of the blend.
- Results look compelling, plus we get some anti-aliasing as an added perk!
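My reading of the pixel shaders later in the deck, written out as equations (using the lrp convention from the instruction set, lrp(t, a, b) = t*a + (1-t)*b): each offset tap i is pulled toward the center tap 0 by how out of focus both are,

    tap_i = lrp(a_i * a_0, c_i, c_0)

and an average of the taps can be built from successive lerps,

    avg(x, y, z) = lrp(2/3, lrp(1/2, x, y), z) = (x + y + z) / 3

which is where the 0.5 and 0.66666 constants in the ps.1.4 listing come from.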
18. Tweaks
- In practice, some artifacts will pop up.
- If an in-focus object is next to something that is out of focus, the out-of-focus region will blend in the in-focus object.
- This gives an undesired halo effect.
19. Tweaks to Depth of Field
- We get a halo around the one object that's supposed to be in focus!
20. Fuzzy Edges
[Diagram: the sampling kernel straddling the boundary between in-focus (blue) and out-of-focus (yellow) pixels]
- If the blue pixels are in focus but the yellow ones are out of focus, we end up with a fuzzy edge around a sharp object!
21. Edge Detection via a Moving Kernel
- Since the problem occurs because we mix in-focus parts of the scene with out-of-focus parts, we can do a little bit of edge detection.
- Don't blend in highly in-focus pixels.
- Multiply each sample by the amount it is out of focus.
- In-focus samples then get a weight near zero, so they won't get blended in.
22. Edges Are Preserved
[Diagram: the same kernel; the in-focus sample is weighted by black (0), the out-of-focus samples by white (1)]
- The blue pixel is in focus, so it gets multiplied by black. The yellow pixels are out of focus, so they are multiplied by white.
23. The Final Code

PIXELSHADER Blur
asm
ps.1.1

tex t0
tex t1
tex t2
tex t3

mul t1.a, t1.a, t0.a
lrp t1.rgb, t1.a, t1, t0

mul t2.a, t2.a, t0.a
lrp t2.rgb, t2.a, t2, t0

mul t3.a, t3.a, t0.a
lrp t3.rgb, t3.a, t3, t0

// (the final combine into r0 is not shown on the slide)
VERTEXSHADER Blur
asm
vs.1.1
dcl_position v0
dcl_normal v3
dcl_texcoord0 v4
m4x4 r0, v0, c8        // position to view space (c8 is the world-view matrix)
m3x3 r1, v3, c8        // normal to view space
// Lighting calc here
// plane equation, c2 is our focal plane
dp4 r9.w, r0, c2
mul r9.w, r9.w, c3.x   // scale down to approx. -1 to 1
mul r9.w, r9.w, r9.w
min oD0.w, r9.w, c0.x
m4x4 r10, r0, c4
mov oPos, r10
mov oT0, v4
24. Taking More Samples
- Looks better if we can take more taps.
- On Pixel Shader 1.4, we can take 6 taps.

PIXELSHADER ps6
asm
ps.1.4
def c1, 0.66666, 0.5, 0.5, 0.5
def c2, 0.66666, 0.5, 0.5, 0.66666
def c3, 0.0, 0.00588235, 0.0, 0.0
def c4, -0.007204, -0.004159, 0.0, 0.0
def c5, 0.0072043, -0.004159, 0.0, 0.0

// first phase: center tap plus three offset taps
texld r0, t0
texld r1, t1
texld r2, t2
texld r3, t3
texcrd r5.rgb, t0
add r4.rgb, r5, c5        // coordinates for the extra taps in the second phase
mul r1.a, r1.a, r0.a
lrp r1.rgb, r1.a, r1, r0
mul r2.a, r2.a, r0.a
lrp r2.rgb, r2.a, r2, r0
mul r3.a, r3.a, r0.a
lrp r3.rgb, r3.a, r3, r0
lrp r1, c1.a, r1, r2
lrp r3, c2.a, r1, r3
add r1.rgb, r5, c4
add r0.rgb, r5, c3

// second phase: three more taps at the coordinates computed above
phase
texld r2, r1
texld r1, r0
texld r0, t0
texld r4, r4
mul r1.a, r1.a, r0.a
lrp r1.rgb, r1.a, r1, r0
mul r2.a, r2.a, r0.a
lrp r2.rgb, r2.a, r2, r0
mul r4.a, r4.a, r0.a
lrp r4.rgb, r4.a, r4, r0
lrp r1.rgb, c1.a, r1, r2
lrp r0.rgb, c2.a, r1, r4
mov r0.rgb, r0
25. Related Subjects
- There is a whole family of similar techniques.
- Can use the same moving kernel to perform other operations.
- If you encode the render target with distance and normal information, you can detect edges by checking for large deltas (a small sketch follows the list).
- Can be used for non-photorealistic rendering, like cel shading.
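A minimal sketch of the normal-delta test, under assumptions of my own (not from the slides): the world-space normals were rendered into a texture, bound at stages 0 and 1 with slightly offset texture coordinates so t0 and t1 sample neighboring pixels.

ps.1.1
tex t0                      // normal at the center tap, biased into 0..1
tex t1                      // normal at a neighboring tap
dp3 r0, t0_bx2, t1_bx2      // near 1 on flat areas, drops sharply across an edge

Thresholding the result (alpha test or a cnd in a later pass) turns it into an outline mask; the same idea works with depth deltas.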
26. Edge Detection
[Images: World-Space Normals, Eye-Space Depth, Edge Detect, Outlines]
- Edge detection. Images courtesy of ATI Technologies, Inc.
27. Edge Detection
- Composite the outlines to get a cel-shaded effect. Images courtesy of ATI.
28. Volumetric Rendering
- Another technique based on rendering depth into the image.
- Wouldn't it be cool if we could render a semi-translucent volume?
- Volumetric fog and lights in most games consist of alpha-blended bitmaps.
- But these aren't 3D and don't play well with other 3D geometry.
29. Volume Fog
A Fog volume on D3D8 hardware.
30. Volume Fog: An Image-Based Technique
- Have to make assumptions:
- Incoming light at every particle of fog is the same.
- Emitted light at every particle of fog is the same.
- Intuitively, if we can accumulate the amount of fog at every pixel on the screen into a texture, we can blend the fog in image space.
31. The Fog Model
- Intensity of pixel = LERP(L * A, Ir, If), spelled out below.
- L = amount of light absorbed/emitted by the fog (fog density constant)
- A = area of fog at the pixel
- Ir = intensity of the light coming from the scene
- If = intensity of the light coming from the fog
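Written out with the usual lerp convention lerp(t, a, b) = (1 - t)*a + t*b (my reading of the slide's LERP):

    Ipixel = (1 - L*A) * Ir + L*A * If

so a pixel with no fog in front of it (A = 0) keeps the scene color, and a pixel looking through a lot of fog approaches the fog color.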
32. A Cube of Fog
- Front side, back side, and the difference.
- The fog density is the difference between the two. Can add multiple fronts and backs for concave fog.
- Cool! Can we model even-density fog as closed meshes? Yes!
33. The Intuitive Approach
- Render the back side of the fog volume into an off-screen buffer, encoding each pixel's w depth as its alpha value.
- Render the front side of the fog volume with a similar encoding, subtracting this new alpha from the alpha currently in the off-screen buffer.
- Use the alpha values in this buffer to blend on a fog mask.
34. Coding It Up
- Use a simple vertex shader that outputs w depth, as in the previous technique.
- Render the front side of the fog volume into one buffer, and the back side (by reversing the culling order) into another buffer.
- Subtract the two buffers; the resulting alpha is the amount of fog at each pixel. Cool! (A sketch of the final pass follows.)
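A minimal sketch of that final screen-space pass, under assumptions of my own (not from the slides): front-face depths in the texture at stage 0, back-face depths at stage 1, fog color in constant c0, and the quad drawn over the scene with source-alpha blending.

ps.1.1
tex t0                    // front-face w depth in alpha
tex t1                    // back-face w depth in alpha
sub r0.a, t1.a, t0.a      // fog thickness along this pixel's view ray
mov r0.rgb, c0            // fog color; the alpha-blend stage applies the thickness

With the fog model from earlier, the frame-buffer blend performs the lerp between the scene color and the fog color.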
35. Playing Nice with Geometry
- Ack! The image on the left doesn't play well with other geometry.
- We need a way to make it look like the image on the right.
36. Playing Nice with Geometry
- Need to do an inside/outside test.
- If we are inside the mesh, we should substitute the scene's depth as the back side.
- Scene objects will displace fog volumes.
37. Adding Noise
- Since fog volumes are just 3D models, we can put textures on them.
- These textures can be density modifiers.
- Or they could vary the color of the fog.
38. Volume Light
- Similar, but always add the light rather than blending it in.
- Basically, just a change in the final blend.
- Could model shafts of light through windows.
- But if the shaft got wider at the base, we might need a depth-modifier texture to make it thin out.
39. Volume Light
- Make an object lit by a light volume.
- Can make convex area lights that work in screen space.
- Instead of subtracting the min and max of the fog volume, do a compare operation.
- If a pixel is between them, light the pixel; otherwise, don't.
- Doesn't work with concave volumes.
40. Questions?
- directx@microsoft.com
- Check out the appendix slides and the white paper for more details on volume fog.
- The white paper on the CD discusses how to get around precision problems on D3D8-class hardware.
41. Call to Action
- Investigate image-space rendering techniques.
- Use these techniques to add movie-style effects such as depth of field.
- Think outside the box:
- Image-space lighting, image-space volume rendering.
- Try non-photorealistic rendering.
42. Appendix
- More details on volumetric rendering.
- Consult the white paper and SDK sample for more information.
43. Making Fog Play Nice with Others
- We need a way for geometry to intermix with the fog.
- That is, if geometry sits inside the fog, it should displace the fog volume; if geometry is in front of the fog, it should block the fog altogether.
- Could introduce geometry as a negative fog volume, but geometry is opaque, so any part of the fog behind it needs to be blocked.
44. Making Fog Play Nice with Others
- First things first: if any part of the fog volume is behind opaque scene geometry, it shouldn't get rendered.
- The Z test can block this out for us. If we render the scene first and keep the Z-buffer around, occluded parts of the volume won't get drawn.
45. The Harder Case
- Before considering how to make geometry displace fog, we need to make sure we can handle concave fog volumes.
- Consider what happens when we have two cubes, one behind the other. If we render them normally, only one will show up in our buffers. But we want both to emit fog.
46. Concave Volumes
[Diagram: camera looking through two fog volumes; entry points A1, A2 and exit points B1, B2]
- Every pixel on the screen will get hit once by the first fog volume and once by the second. What we want is (B1 - A1) + (B2 - A2).
47. Adding the Fog
- Can refactor to (B1 + B2) - (A1 + A2), as shown below.
- Now we just change the alpha blend mode to add; a pixel will accumulate its total depth.
- What about precision? We'll get to that later.
- Can use arbitrary fog volumes! Anything an artist can make, we can render.
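Written out, the refactoring is just a regrouping of the per-volume thicknesses:

    (B1 - A1) + (B2 - A2) = (B1 + B2) - (A1 + A2)

so one additively blended buffer can accumulate all the back-side depths, another all the front-side depths, and a single subtraction at the end yields the total fog thickness.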
48. Sum of Averages
[Diagram: the same two fog volumes seen from the camera]
- Image texture 1 will contain A1 + A2, since these pixels are on polygons with CCW winding order. Image texture 2 will contain B1 + B2, which are on polygons with CW winding order. If we subtract texture 1 from texture 2, we get the density.
49. Arbitrary Fog Volumes
- Now we can represent any fog volume as a closed hull.
- Can animate and move it as we see fit.
- But we still haven't made it play nice with other geometry.
- We need objects in the scene to displace fog.
50. Object in the Fog
[Diagram: an opaque object inside the fog; entry points A1, A2, exit points B1, B2, and point C where the view ray hits the object]
- This illustrates why we will have a problem with the fog. If the object were drawn, B2 would never be added to the back-side image buffer; what we need is for the point C to get rendered instead.
51. Inside/Outside Test
- Need a method to detect when the scene has blocked part of the fog.
- This test needs to be per pixel.
- If we did block part of the fog, we need to substitute the back side of the fog volume with the object that blocked it, thus displacing the fog.
52. Inside/Outside Test
- Can do an inside/outside test:
- Each time we draw a front side of the fog, increment another channel by 1.
- Do the same for the back side.
- If the two counts are not equal, then an object blocked part of the fog.
- Add in the scene's depth at those pixels. (A sketch follows.)
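A minimal sketch of the counting idea for the back-face pass, under assumptions of my own (not from the slides or the white paper): the vertex shader puts w depth in the diffuse alpha as before, c0 holds a small count increment such as 1/255 in one color channel, and the render target uses additive blending so both the depth and the face count accumulate.

ps.1.1
mov r0.a, v0.a      // accumulate w depth (additive blend on the render target)
mov r0.rgb, c0      // accumulate a per-face count, e.g. c0 = (1/255, 0, 0, 0)

The front-face pass does the same into a different channel; a final pass compares the two counts and, where they differ, substitutes the scene's depth for the missing back side.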
53. The Same Illustration
[Diagram: camera, two fog volumes, and an opaque object blocking exit point B2]
- Two entry points, A1 and A2, so we entered the fog twice. But only one exit point gets rendered (B2 is blocked), so there is only one exit point. Therefore, we know an object in the scene displaced fog.
54. Final Problem
- What happens if the camera is partially in the fog?
- The camera needs to displace fog too!
- Easy hack: just force vertices that are out of range to be clamped to the camera's front projection. Need to trivially reject surfaces.