Title: Towards More Photorealistic Faces
1. Towards More Photorealistic Faces
2. Agenda
- Introduction to coding systems
- Creating Content
- Transforming a canonical face from photographs
- Animation techniques
- Achieving more realistic faces
- Conclusion
3. FACS
- Facial Action Coding System
- Paul Ekman and Wallace Friesen
- Foundation for most facial animation
- Classifies facial expressions by Action Units (AUs); a sketch follows below
- 6 universal categories
- Sadness, anger, joy, fear, disgust, and surprise
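As a rough illustration of AU-based coding (the AU combinations below are commonly cited in the FACS literature, not taken from this deck), an expression can be represented as the set of Action Units it activates:

#include <map>
#include <string>
#include <vector>

// Each universal expression as a set of FACS Action Units.
// Commonly cited example combinations; not a definitive catalog.
const std::map<std::string, std::vector<int>> kExpressionAUs = {
    {"joy",      {6, 12}},              // cheek raiser + lip corner puller
    {"sadness",  {1, 4, 15}},
    {"surprise", {1, 2, 5, 26}},
    {"fear",     {1, 2, 4, 5, 20, 26}},
    {"anger",    {4, 5, 7, 23}},
    {"disgust",  {9, 15, 16}},
};

An animation system can then implement one deformation per AU and trigger the listed sets to reach the six universal expressions.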
4. FACS
5. Creating Content
- Traditional art techniques
- 3D modeling packages
- Triangle-based
- NURBS-based
- FFD-based
- Patch-based
- Parameterized
- Morph targets (blending sketch below)
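Morph targets are typically combined by linear blending of per-vertex offsets from a base mesh; a minimal sketch (all names are illustrative):

#include <vector>

struct Vec3 { float x, y, z; };

// Blend a base mesh with weighted morph targets:
// v = base + sum_i( w_i * (target_i - base) )
std::vector<Vec3> blendMorphTargets(
    const std::vector<Vec3>& base,
    const std::vector<std::vector<Vec3>>& targets,
    const std::vector<float>& weights)
{
    std::vector<Vec3> out = base;
    for (size_t t = 0; t < targets.size(); ++t)
        for (size_t i = 0; i < base.size(); ++i) {
            out[i].x += weights[t] * (targets[t][i].x - base[i].x);
            out[i].y += weights[t] * (targets[t][i].y - base[i].y);
            out[i].z += weights[t] * (targets[t][i].z - base[i].z);
        }
    return out;
}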
6. Examples
Triangles, Patches, Parameterized
7. 3D Capture Devices
- Various devices available on the market
- Capture data with a variety of techniques
- Laser range finders
- Light projections
- Shadow projections
- Sonic range finders
- Capture houses can do it for you
- Cyberware
8. 2D Capture Techniques
- Photogrammetry (sketch below)
- A front photo of the individual is all that's required
- Front and side photos give better 3D coordinates
- Automated feature recognition
- New techniques are being developed every day
- Usually computationally expensive and still needs user intervention
- See http://www.cs.rug.nl/peterkr/FACE/face.html
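A sketch of why front plus side photos yield full 3D coordinates, assuming orthographic, mutually aligned and scaled views (a real photogrammetry pipeline also handles camera calibration and perspective):

struct Point2 { float x, y; };
struct Point3 { float x, y, z; };

// Naive two-view photogrammetry: take x and y of a feature from the
// front photo and depth (z) from the matching feature's horizontal
// position in the side photo. Assumes both photos were scaled and
// aligned so that y agrees between views.
Point3 featureFromTwoViews(Point2 front, Point2 side)
{
    Point3 p;
    p.x = front.x;
    p.y = front.y;   // could also average front.y and side.y
    p.z = side.x;    // the side view's horizontal axis is depth
    return p;
}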
9. Direct Parameterized Models
- Frederic I. Parke's 1974 PhD thesis, entitled "A Parametric Model for Human Faces"
- Used to create parameterized heads using interpolation, scaling, and translation (sketch below)
- Steve DiPaola extended this version; the program is available at http://www.stanford.edu/dept/art/SUDAC/facade/
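A minimal sketch of the interpolation idea behind direct parameterized models (the vertex roles and the single jawOpen parameter are illustrative, not Parke's actual parameter set):

struct Vec3 { float x, y, z; };

Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

// One parameter interpolates its controlled vertices between two
// extreme positions; scaling and translation parameters are applied
// the same way with different vertex transforms.
Vec3 evalJawVertex(const Vec3& closedPos, const Vec3& openPos,
                   float jawOpen)          // jawOpen in [0, 1]
{
    return lerp(closedPos, openPos, jawOpen);
}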
10. Animation of the Face
- Muscle simulation (simplified sketch below)
- Parke and Waters' system is the most widely used
- Examples from the Computer Facial Animation book
- Uses 18 muscles in the example
- Open source, with OpenGL
- http://www.crl.research.digital.com/publications/books/waters/Appendix1/opengl/OpenGLW95NT.html
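A simplified sketch in the spirit of a Waters-style linear muscle; this is an approximation for illustration, not Waters' published formulation (which also includes an angular falloff zone):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float len(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Vertices inside the muscle's zone of influence are pulled toward
// the attachment point, with a cosine radial falloff: full pull at
// the attachment, zero at the boundary of the zone.
Vec3 applyLinearMuscle(Vec3 p, Vec3 attach,
                       float influenceRadius, float contraction)
{
    Vec3 toAttach = sub(attach, p);
    float d = len(toAttach);
    if (d >= influenceRadius || d == 0.0f)
        return p;                                  // outside the zone
    float falloff = std::cos(1.5707963f * d / influenceRadius);
    float k = contraction * falloff;               // contraction in [0, 1]
    return { p.x + k * toAttach.x,
             p.y + k * toAttach.y,
             p.z + k * toAttach.z };
}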
11. Animation of the Face (Cont.)
12. Animation of the Face
- Open-source Expression Toolkit
- Based on Parke's model
- Adds a nice user interface
- Exports data from 3DS Max
- Scripting language
- http://expression.sourceforge.net/
13. Facial Muscles
14. Lip Syncing
- Phonemes: what are they and why are they important?
- Helpful for determining mouth posture
- 45 English phonemes; more or fewer for other languages
- Mouth postures
- Nitchie identified 18 significant mouth postures, called visemes, to match phonemes against
15. Lip Syncing (Cont.)
- Visemes can match mouth postures with speech phonemes (mapping sketch below)
- Tools
- Microsoft Speech SDK
- Can decipher recorded speech into phonemes
- Code to derive mouth positions (visemes) from these phonemes
- Free
- http://www.microsoft.com/speech
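A minimal sketch of the phoneme-to-viseme step (the groupings below are illustrative examples; they are not the Speech SDK's actual tables):

#include <map>
#include <string>

// Many phonemes share one mouth posture, so lip sync reduces a
// phoneme stream to a much smaller viseme stream.
int visemeForPhoneme(const std::string& phoneme)
{
    static const std::map<std::string, int> table = {
        {"p", 1}, {"b", 1}, {"m", 1},   // closed lips
        {"f", 2}, {"v", 2},             // lower lip to upper teeth
        {"th", 3}, {"dh", 3},           // tongue between teeth
        {"aa", 4}, {"ah", 4},           // open jaw
        {"ow", 5}, {"uw", 5},           // rounded lips
    };
    auto it = table.find(phoneme);
    return it != table.end() ? it->second : 0;   // 0 = neutral/rest
}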
16. Lip Syncing (Cont.)
- University of Edinburgh
- http://www.cstr.ed.ac.uk/projects/festival/
- Open-source, free speech synthesis system
17. Coarticulation
- Blending of lip-sync postures
- Pelachaud described an algorithm in 1991
- Cohen and Massaro developed a dominance model (sketch below)
- http://mambo.ucsc.edu/psl/ca93.html
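A sketch of the dominance-blend idea: each segment's influence rises and falls around its center time, and the mouth parameter is the dominance-weighted average of the segment targets. The exponential form follows the general shape of Cohen and Massaro's model; the constants are illustrative, and the published model also raises the time offset to an exponent:

#include <cmath>
#include <vector>

struct Segment {
    double target;   // this viseme's value for one mouth parameter
    double center;   // time (s) at which the segment peaks
    double alpha;    // peak dominance magnitude
    double theta;    // rate of falloff (illustrative constant)
};

double dominance(const Segment& s, double t)
{
    return s.alpha * std::exp(-s.theta * std::fabs(t - s.center));
}

// Overlapping dominance curves blend neighboring visemes smoothly,
// which is exactly the coarticulation effect being modeled.
double blendedParameter(const std::vector<Segment>& segs, double t)
{
    double num = 0.0, den = 0.0;
    for (const Segment& s : segs) {
        double d = dominance(s, t);
        num += d * s.target;
        den += d;
    }
    return den > 0.0 ? num / den : 0.0;
}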
18. MPEG-4 Facial Animation
- Designed to encode facial animation at low bandwidth
- Parameters are sent over transmission lines and reconstructed at the client
- Facial Animation Parameters (FAPs)
- Facial Definition Parameters (FDPs)
- Uses 68 FAPs, broken up into 10 groups (decoding sketch below)
- Uses only 14 visemes
- Very similar to the Action Units in Parke's model
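A sketch of how a decoded FAP might be applied at the client. FAP values are transmitted in face-relative units (FAPUs) derived from the receiving model's own proportions, which is what lets one stream animate any conformed face. The function and parameter names here are illustrative; a real decoder follows the standard's feature-point tables:

struct Vec3 { float x, y, z; };

// fapu: the unit for this FAP, derived from the model's proportions
//       (e.g. a fraction of its mouth-nose separation).
// axis: the direction this FAP is defined to move its feature point.
Vec3 applyFap(Vec3 neutralPos, int fapValue, float fapu, Vec3 axis)
{
    float d = fapValue * fapu;
    return { neutralPos.x + d * axis.x,
             neutralPos.y + d * axis.y,
             neutralPos.z + d * axis.z };
}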
19. FAP Groups
20. Visemes and Related Phonemes
21. MPEG-4 FDP Parameters
22. Bump Mapping
- Requires either CPU work or GPU setup
- The CPU has to compute tangent-space basis vectors for lighting (sketch below)
- The GPU can do it on a vertex-by-vertex basis as the vertices come through the pipeline
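A sketch of the CPU-side work: the standard per-triangle computation of the tangent-space basis vectors (S, T) from positions and UVs, solving e1 = du1*S + dv1*T and e2 = du2*S + dv2*T:

struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// S and T are the per-triangle tangent and bitangent; per-vertex
// bases are typically averaged from adjacent triangles afterward.
void computeBasis(Vec3 p0, Vec3 p1, Vec3 p2,
                  Vec2 t0, Vec2 t1, Vec2 t2,
                  Vec3& S, Vec3& T)
{
    Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    float du1 = t1.u - t0.u, dv1 = t1.v - t0.v;
    float du2 = t2.u - t0.u, dv2 = t2.v - t0.v;
    float det = du1 * dv2 - du2 * dv1;
    float r = (det != 0.0f) ? 1.0f / det : 0.0f;   // degenerate UVs
    S = { (e1.x*dv2 - e2.x*dv1) * r,
          (e1.y*dv2 - e2.y*dv1) * r,
          (e1.z*dv2 - e2.z*dv1) * r };
    T = { (e2.x*du1 - e1.x*du2) * r,
          (e2.y*du1 - e1.y*du2) * r,
          (e2.z*du1 - e1.z*du2) * r };
}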
23. Free-Form Deformations (FFD)
- Can be used to scale facial features (evaluation sketch below)
- Problems with where to place the lattice
- A good solution is DFFD
- Dirichlet Free-Form Deformations
- The basic idea is that the control points lie on the surface
- http://cui.unige.ch/moccozet/PAPERS/CA97/
- Rational Free-Form Deformations
- Another idea is to use non-axis-aligned lattices
- Still working on the implementation
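A minimal FFD evaluation sketch, restricted to a trilinear (2x2x2) lattice for brevity; higher-degree lattices use Bernstein polynomial weights in exactly the same way:

struct Vec3 { float x, y, z; };

// A point is expressed in lattice coordinates (s,t,u) in [0,1]^3 and
// re-evaluated from the displaced control points; moving the control
// points deforms every point embedded in the lattice.
Vec3 evalFFD(const Vec3 ctrl[2][2][2], float s, float t, float u)
{
    Vec3 out = {0, 0, 0};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k) {
                float w = (i ? s : 1 - s) * (j ? t : 1 - t)
                        * (k ? u : 1 - u);
                out.x += w * ctrl[i][j][k].x;
                out.y += w * ctrl[i][j][k].y;
                out.z += w * ctrl[i][j][k].z;
            }
    return out;
}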
24. Free-Form Deformations (FFD)
25. Bump Mapping (Cont.)
This shader does the per-vertex dot3 work. It transforms the light vector by the basis vectors passed into the shader. The basis vectors only need to change if the model's shape changes. The output vector is stored in the diffuse channel (use the menu to look at the generated light vector).

#include "dot3.h"
#define V_POSITION  v0
#define V_NORMAL    v1
#define V_DIFFUSE   v2
#define V_TEXTURE   v3
26. Bump Mapping (Cont.)
#define V_SxT       v4
#define V_S         v5
#define V_T         v6
#define S_WORLD     r0
#define T_WORLD     r1
#define SxT_WORLD   r2
#define LIGHT_LOCAL r3

vs.1.0

; Transform position to clip space and output it
dp4 oPos.x, V_POSITION, cCV_WORLDVIEWPROJ_0
dp4 oPos.y, V_POSITION, cCV_WORLDVIEWPROJ_1
27. Bump Mapping (Cont.)
dp4 oPos.z, V_POSITION, cCV_WORLDVIEWPROJ_2
dp4 oPos.w, V_POSITION, cCV_WORLDVIEWPROJ_3

; Transform basis vectors to world space
dp3 S_WORLD.x, V_S, cCV_WORLD_0
dp3 S_WORLD.y, V_S, cCV_WORLD_1
dp3 S_WORLD.z, V_S, cCV_WORLD_2
dp3 T_WORLD.x, V_T, cCV_WORLD_0
dp3 T_WORLD.y, V_T, cCV_WORLD_1
dp3 T_WORLD.z, V_T, cCV_WORLD_2
dp3 SxT_WORLD.x, V_NORMAL, cCV_WORLD_0
dp3 SxT_WORLD.y, V_NORMAL, cCV_WORLD_1
dp3 SxT_WORLD.z, V_NORMAL, cCV_WORLD_2
28Bump Mapping (Cont).
mul S_WORLD.xyz, S_WORLD.xyz, cCV_BUMP_SCALE.w m
ul T_WORLD.xyz, T_WORLD.xyz, cCV_BUMP_SCALE.w
transform light by basis vectors to put it
into texture space dp3 LIGHT_LOCAL.x,
S_WORLD.xyz, cCV_LIGHT_DIRECTION dp3
LIGHT_LOCAL.y, T_WORLD.xyz, cCV_LIGHT_DIRECTION
dp3 LIGHT_LOCAL.z, SxT_WORLD.xyz,
cCV_LIGHT_DIRECTION Normalize the light
vector dp3 LIGHT_LOCAL.w, LIGHT_LOCAL, LIGHT_LOCAL
29. Bump Mapping (Cont.)
rsq LIGHT_LOCAL.w, LIGHT_LOCAL.w
mul LIGHT_LOCAL, LIGHT_LOCAL, LIGHT_LOCAL.w

; Scale to 0-1
add LIGHT_LOCAL, LIGHT_LOCAL, cCV_ONE
mul oD0, LIGHT_LOCAL, cCV_HALF

; Set alpha to 1
mov oD0.w, cCV_ONE.w

; Output tex coords
mov oT0, V_TEXTURE
mov oT1, V_TEXTURE
30. Bump Mapping (Cont.)
tex t0           ; grab the diffuse texture map
tex t1           ; load bump map
dp3 r0, v0, t1   ; dot normal with light vector
mul r0, r0, t0   ; calculate diffuse value for bumped normal
31. Realistic-Looking Skin
- Use of high-detail bump maps dramatically adds realism
- http://www.cc.gatech.edu/cpl/projects/skin/skin.pdf
32. Realistic-Looking Skin (Cont.)
- Even better: use diffuse, bump, specular, and environment maps together
33. Realistic-Looking Skin (Cont.)
34. Realistic-Looking Skin (Cont.)
35. Realistic-Looking Skin (Cont.)
- Sample vertex shader (modify the bump-map VS):

mov oT1, V_TEXTURE   ; specular map
mov oT2, V_TEXTURE
mov oT3, V_NORMAL    ; pass normal for LDR
36. Realistic-Looking Skin (Cont.)
tex t0           ; grab the diffuse texture map
tex t1           ; load bump map
tex t2           ; load specular map
tex t3           ; load environment lighting (LDR)
dp3 r0, v0, t1   ; dot normal with light vector
mul r0, r0, t0   ; calculate diffuse value for bumped normal
mul r1, t2, t3   ; multiply specular by environment LDR
add r0, r0, r1   ; add specular to diffuse
37. It's in the Eyes
- Disney once said the audience watches the eyes.
- Orientation, with respect to the world as well as to each other, can give a sense of focus and direction of gaze (sketch below).
- Also use an environment map to give a reflective, glossy look.
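A minimal sketch of the orientation point (names are illustrative): give both eyes a shared focus target rather than a shared rotation, so they converge the way real eyes do instead of staring in parallel:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Direction from an eyeball center to the focus point; aim each eye
// with its own result and the slight convergence reads as real focus.
Vec3 gazeDirection(Vec3 eyeCenter, Vec3 focusPoint)
{
    Vec3 d = sub(focusPoint, eyeCenter);
    float n = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (n == 0.0f) return {0, 0, 1};   // fallback: look straight ahead
    return { d.x / n, d.y / n, d.z / n };
}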
38. References
- Parke, Frederic I. and Waters, Keith. Computer Facial Animation. A K Peters, Wellesley, Mass., 1996.
- Pighin, Frederic, et al. "Synthesizing Realistic Facial Expressions from Photographs." SIGGRAPH 1998.
- Gray, Henry. Gray's Anatomy of the Human Body. Online: http://www.bartleby.com/107/
- Cohen, M. M. and Massaro, D. W. "Modeling coarticulation in synthetic visual speech." In N. M. Thalmann and D. Thalmann (Eds.), Models and Techniques in Computer Animation. Tokyo: Springer-Verlag, 1993.
- Nitchie, E. B. How to Read Lips for Fun and Profit. Hawthorne Books, New York, 1979.
- Haro, A., Guenter, B., and Essa, I. "Real-time, Photo-realistic, Physically Based Rendering of Fine Scale Human Skin Structure." Proceedings of the 12th Eurographics Workshop on Rendering, London, England, June 2001.
- Lundgren, Ulf. Description of how the model was created: http://www.lost.com.tj/Ulf/artwork/3d/behind.html
39. References
- Ostermann, J. and Tekalp, M. "Face and 2-D Mesh Animation in MPEG-4." Online at http://www.cselt.it/leonardo/icjfiles/mpeg-4_si/8-SNHC_visual_paper/8-SNHC_visual_paper.htm
- http://www.nvidia.com/developer
- Moccozet, L. and Magnenat-Thalmann, N. "Dirichlet Free-Form Deformations and their Application to Hand Simulation." Proceedings of Computer Animation '97, pp. 93-102, 1997.