Title: Rendering hair with graphics hardware
1. Rendering hair with graphics hardware
- Tae-Yong Kim
- Rhythm Hues Studios
- tae_at_rhythm.com
2. Overview
- State of the Art in Hair Rendering and Modeling
- Issues in Hardware Hair Rendering
- Line drawing with Graphics Hardware
- Illumination and Hardware Shader
- Shadows
3. Hair representation
- Hair is not a single entity; always consider the hair volume as a whole
- Surface (polygons, NURBS, etc.)
- Line (guide hair, curve, polyline, ...)
- Volumetric representation (implicit surface, 3D texture, tubes, ...)
4. Representation - Polygons
- Real-time rendering
- Suited for existing artists' assets
- Limited hairstyles
- Animation is difficult
ATI 2004 - Polygonal model (from a SIGGRAPH Sketch by Thorsten Scheuermann)
5. Representation - Lines
- Dynamic hair
- No restriction on hairstyle
- Advanced rendering possible
- No standard modeling technique
- Can be time consuming to render
nVidia line representation (from Matthias Wloka's Eurographics 2004 tutorial)
6. Model generation
- Use control tools to generate a bunch of hairs
- Control curves (guide hairs)
- Cylinders (single level / multi level)
- Fluid flow, 3D vector field
- Automated methods emerging
7. Guide hair
8. Other guide objects (fluid)
Hadap and Magnenat-Thalmann, Eurographics 2001
9. Other guide objects (cylinders)
Kim and Neumann, SIGGRAPH 2002
10. Emerging techniques
S. Paris, H. Briceno, F. Sillion, SIGGRAPH 2004
11. Hair as line drawing
12. Hair segment as a GL line

DrawHairLine(ps, pe, cs, ce):
    glBegin(GL_LINES);
    glColor3fv(cs); glVertex3fv(ps);
    glColor3fv(ce); glVertex3fv(pe);
    glEnd();
13. Hair as GL lines

DrawHair(p0, p1, ..., pn-1, c0, c1, ..., cn-1):
    glBegin(GL_LINE_STRIP);
    glColor3fv(c0); glVertex3fv(p0);
    glColor3fv(c1); glVertex3fv(p1);
    ...
    glColor3fv(cn-1); glVertex3fv(pn-1);
    glEnd();
14. Are we done?

DrawHair(p0, p1, ..., pn-1, c0, c1, ..., cn-1):
    glBegin(GL_LINE_STRIP);
    glColor3fv(c0); glVertex3fv(p0);
    glColor3fv(c1); glVertex3fv(p1);
    ...
    glColor3fv(cn-1); glVertex3fv(pn-1);
    glEnd();

Algorithm 1: For each hair, call this function!
15. Not quite...
16. Point samples (figure: true sample vs. computed sample color)
18.
- Increase the sampling rate (large image size, accumulation buffer)
- Image quality depends on the number of samples, at a slow convergence rate
- The required sampling rate is above 10,000 x 10,000 pixel resolution
- Thinner (smaller) hairs require an even higher sampling rate
19.
- Hardware antialiasing of line drawing
- GL_LINE_SMOOTH
- Thickness control with alpha blending
20. (figure: alpha values 0.09, 0.25, 0.60, 1.0)
21. Correct
22. Hair as visibility ordered lines

Algorithm 2:
1. Compute the color for each end point of the lines (shading, shadowing)
2. Compute the visibility order
3. Draw the lines sorted by the visibility order
23. (figure: hair segments labeled a through v, sorted back to front)
Drawing order: a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v
25.
- The visibility ordering can be cached and reused (useful for interactive applications)
26.
- Describes the amount of reflected/scattered light toward the viewing direction
- Kajiya-Kay (1989)
- Marschner et al. (2003)
27. (figure: viewing vector V, light vector L, hair tangent T)
28. (figure: half vector H, viewing vector V, light vector L, hair tangent T)
34. (figure: line table and vertex table)
35. Self-shadows
- A shading model describes the amount of reflected light when hair is fully lit
- Most hair receives attenuated light due to self-shadowing
- Crucial for depicting the volumetric structure of hair
38.
- Shadow is a fractional visibility function
- "How many hairs are between me and the light?"
- "What percentage of the light is blocked?"
39.
- Shadow is a fractional visibility function
40.
- Opacity shadow maps
- Kim and Neumann, EGRW 2001
- A fast approximation of the deep shadow function
- Idea: use graphics hardware as much as possible
41. (figure: opacity Ω(l) is monotonically increasing along the depth l from the light)
43.

for (1 ≤ i ≤ N)
    Determine the opacity map's depth Di from the light.
for each shadow sample point pj in P (1 ≤ j ≤ M)
    Find i such that Di-1 ≤ Depth(pj) < Di.
    Add the point pj to Pi.
Clear the alpha buffer and the opacity maps Bprev, Bcurrent.
for (1 ≤ i ≤ N)
    Swap Bprev and Bcurrent.
    Render the scene, clipping it with Di-1 and Di.
    Read back the alpha buffer to Bcurrent.
    for each shadow sample point pk in Pi
        Ωprev = sample(Bprev, pk)
        Ωcurrent = sample(Bcurrent, pk)
        Ω = interpolate(Depth(pk), Di-1, Di, Ωprev, Ωcurrent)
        t(pk) = e^(-κΩ)
        F(pk) = 1.0 - t(pk)
44. (figure: map placement strategies - uniform slicing, 1D BSP, nonlinear spacing)
46.
- Storing all the opacity maps incurs high memory usage
- Sort shadow computation points based on the maps' depths (P0, ..., Pi, ..., PN-1)
- As soon as the current map is rendered, compute shadows for the corresponding sample points
48.
- The alpha buffer is accumulated each time the scene is drawn
- The scene is clipped with Di-1 and Di
- Speedup factor of 1.5 to 2.0
- In very complex scenes, preorder the scene geometry so that each scene object is rendered only for a small number of maps
- → More speedup, and reduced memory requirements for the scene graph
52.
- Quantization in the alpha buffer limits Ω to a maximum of 1.0
- κ scales the exponential function s.t. an Ω value of 1.0 represents complete opaqueness (t = 0)
- κ = 5.56 for an 8-bit alpha buffer (e^-κ = 2^-8)
54.
- Setup pass
  - Compute the visibility order
  - Compute shadow values
- Drawing pass
  - For each line segment Li, ordered by the visibility order:
    - Set thickness (alpha value)
    - Draw Li with a programmable shader
55.
- Whole opacity maps stored in a 3D texture
- nVidia's demo
- Koster et al., Real-Time Rendering of Human Hair Using Programmable Graphics Hardware, CGI 2004
- Mertens et al., A Self-Shadow Algorithm for Dynamic Hair Using Density Clustering, EGSR 2004
56.
- Antialiasing
- Shading through vertex/fragment shaders
- Shadows with opacity maps