1. Techniques for Non-Photorealistic Shading using Real Paint
Reynold Bailey
Research advisor: Dr. Cindy Grimm. Committee members: Dr. William Smart, Dr. Gruia-Catalin Roman.
2. Techniques for Non-Photorealistic Shading using Real Paint
Reynold Bailey
Research: Vanessa Nudd, Chris Kulla, Jim Tucek, Patrick Valliancourt.
3. Motivation and Goals
Provide software that extends the use of the computer as a tool for creating artistic works.
Specific problem:
A lit 3D model.
An artist-provided paint sample showing a shading change from dark to light.
Apply this style to the model. The result must look like it was painted with the same technique.
4. Outline
Previous Work
Basic Concepts
Existing Techniques
Paint Processing
Rendering Techniques
Metrics
Results
Conclusion and Future Work
5. Previous Work
Non-photorealistic rendering: any technique that produces images in a style other than realism.
The previous work examined can be categorized as:
Techniques that capture color only.
Techniques that capture texture only.
Techniques that capture both texture and color.
Stroke-based techniques.
6. Previous Work
Color only
Technical Illustration, Gooch et al. 1998
The Lit Sphere, Gooch et al. 2001
Cartoon Shading, Lake et al. 2000
7. Previous Work
Texture only
Half-toning, Freudenberg. 2002
Hatching, Praun et al. 2001
Stippling, Deussen et al. 2000
8. Previous Work
Texture only
Engraving, Ostromoukhov. 1999
Charcoal rendering, Majumder et al. 2002
9. Previous Work
Color and texture
Volume texturing, Webb et al. 2002
10. Previous Work
Stroke-based techniques
WYSIWYG NPR, Kalnins et al. 2002
Painterly rendering, Meier. 1996
11. Basic Concepts
Models
Lighting
ID buffer
Object-based operations vs. Image-based operations
Color
Paint
12. Models
3D laser scans of real-world objects.
3D modeling software.
Stored as large text files:
List of vertices.
Connection information.
Faces are typically triangles.
Rendering this raw information gives us a
wire-frame depiction of the model. This is called
the mesh of the model.
13. Lighting
Single directional light source.
The shade value s at a given vertex is s = n · l (clamped to lie in [0, 1]), where n is the unit surface normal and l is the unit light vector.
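A minimal sketch of this per-vertex shade computation, assuming unit-length normals and a single directional light; the array layout and names are illustrative, not the thesis code:

```python
import numpy as np

def vertex_shade(normals: np.ndarray, light_dir: np.ndarray) -> np.ndarray:
    """Lambertian shade value s = max(0, n . l) for each vertex.

    normals:   (V, 3) array of unit surface normals.
    light_dir: (3,) unit vector pointing toward the light.
    Returns a (V,) array of shade values in [0, 1].
    """
    s = normals @ light_dir          # dot product per vertex
    return np.clip(s, 0.0, 1.0)      # clamp back-facing (negative) values to 0
```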
14. ID Buffer
We assign a unique ID to every model in a scene.
The ID associated with every pixel in the scene
is stored in an ID buffer.
Enhances the performance of our techniques: we can quickly determine which model in the scene a given pixel belongs to.
If we assign a color to each ID and render the
contents of the ID buffer, we see how the scene
is segmented.
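A small sketch of the idea; the buffer contents and the color table below are illustrative assumptions:

```python
import numpy as np

# Hypothetical ID buffer: one integer model ID per pixel (0 = background).
id_buffer = np.zeros((480, 640), dtype=np.int32)
id_buffer[100:300, 200:400] = 1   # pixels covered by model 1
id_buffer[250:450, 350:600] = 2   # pixels covered by model 2

def model_at(x: int, y: int) -> int:
    """Which model does the pixel at (x, y) belong to?"""
    return int(id_buffer[y, x])

def segmentation_image(colors: np.ndarray) -> np.ndarray:
    """Assign a color to each ID to visualize how the scene is segmented."""
    return colors[id_buffer]        # (H, W, 3) image

colors = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0]], dtype=np.uint8)
seg = segmentation_image(colors)
```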
15. Object-Based vs. Image-Based Operations
Affordable display devices are limited to 2D
images.
A 3D object must be projected onto a 2D plane for
viewing
Object-based operations use and manipulate the
properties of the 3D object before projection
takes place.
Image-based operations are those that are
performed on the 2D image that results from the
projection.
16. Color Model
Color is represented using the RGB
(Red-Green-Blue) color model.
The RGB color model is used in color CRT monitors.
A given color is expressed as (r, g, b). The values of r, g, and b are normalized to lie in the interval [0, 1].
The RGB color space can be visualized by plotting
the values of r, g and b in a Cartesian
coordinate system.
17. Paint Sample
Two distinct properties:
Color transition: the global color change across the sample. This change is more correctly referred to as a color trajectory, as it defines a path through color space.
Brush texture: the local variation within the color trajectory.
18. Existing Techniques
Texture Mapping
Texture Synthesis
Generating tileable texture
19. Texture Mapping
[Figures: scene with no texture mapping vs. scene with texture mapping]
20. Texture Mapping
Basics
The 2D texture is stored in an n x m array of
texture elements or texels.
Variables s and t are used to index the rows and
columns of the array respectively. These
variables are called the texture coordinates.
Texture coordinates are normalized to lie in the interval [0, 1].
Texture mapping associates a unique point from
the texture with each point on the 3D surface.
The rendered image will appear as if the texture
is glued to the surface.
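A tiny sketch of the per-point lookup that texture mapping performs, assuming nearest-neighbor sampling and the row/column convention stated above; the array layout is illustrative:

```python
import numpy as np

def sample_texture(texture: np.ndarray, s: float, t: float) -> np.ndarray:
    """Nearest-neighbor lookup into an (n, m, 3) texel array.
    s indexes rows and t indexes columns, both normalized to [0, 1]."""
    n, m = texture.shape[:2]
    i = int(round(s * (n - 1)))   # row from s
    j = int(round(t * (m - 1)))   # column from t
    return texture[i, j]
```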
21. Texture Synthesis
Any process that generates texture that is perceptually similar (but not identical) to some input sample.
22. Texture Synthesis
Image quilting
Stitch together small square patches of the input
texture.
A block from the input sample will be used as a
patch in the resulting texture.
Constraints can be added to make block selection
more specific.
23. Texture Synthesis
Image quilting
Naively placing patches side-by-side results in
visible seams where patches meet.
24. Texture Synthesis
Image quilting
Allow patches to overlap each other.
An additional constraint that the overlapping
regions of two patches have some degree of
similarity is necessary.
Reduces but does not eliminate visual
discontinuities.
25. Texture Synthesis
Image quilting
Perform a minimum error cut on the overlapping
regions.
The minimum error cut finds a path through the overlapping region that minimizes the total error, producing an optimal irregular boundary between the patches.
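A compact sketch of the minimum error cut for a vertical overlap, using dynamic programming over the per-pixel squared error. This follows the standard image-quilting formulation; variable names are illustrative rather than the thesis code:

```python
import numpy as np

def min_error_cut(overlap_a: np.ndarray, overlap_b: np.ndarray) -> np.ndarray:
    """Return, for each row, the column of the minimum-error vertical cut
    through the overlap between two patches (H x W x 3 arrays)."""
    err = np.sum((overlap_a.astype(float) - overlap_b.astype(float)) ** 2, axis=2)
    H, W = err.shape
    cost = err.copy()
    for i in range(1, H):                       # accumulate best path cost row by row
        for j in range(W):
            lo, hi = max(0, j - 1), min(W, j + 2)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # backtrack from the cheapest cell in the last row
    path = np.empty(H, dtype=int)
    path[-1] = int(np.argmin(cost[-1]))
    for i in range(H - 2, -1, -1):
        j = path[i + 1]
        lo, hi = max(0, j - 1), min(W, j + 2)
        path[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return path
```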
26. Generating Tileable Texture
A tileable texture is one that has a seamless
appearance if multiple copies are placed side by
side.
This is not easily solved by placing additional constraints on the image quilting technique.
Use a simpler masking technique instead (an assumed illustration follows below).
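The masking step itself is not detailed in these slides. As one assumed illustration of how a texture can be made to tile (not necessarily the technique used in this work), the sketch below cross-fades the image with a half-shifted copy of itself, so that the borders come from a copy that wraps seamlessly:

```python
import numpy as np

def make_tileable(tex: np.ndarray) -> np.ndarray:
    """Assumed sketch: blend a texture with its half-shifted copy using a mask
    that favors the shifted (seamlessly wrapping) copy near the borders."""
    H, W = tex.shape[:2]
    shifted = np.roll(np.roll(tex, H // 2, axis=0), W // 2, axis=1)
    yy, xx = np.mgrid[0:H, 0:W]
    # weight is 1 at the image center and falls to 0 at the borders
    wy = 1.0 - np.abs(yy - (H - 1) / 2.0) / ((H - 1) / 2.0)
    wx = 1.0 - np.abs(xx - (W - 1) / 2.0) / ((W - 1) / 2.0)
    mask = (wy * wx)[..., None]
    return (mask * tex + (1.0 - mask) * shifted).astype(tex.dtype)
```

This reduces visible seams when copies are tiled, at the cost of some blending near the tile borders.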
27. Generating Tileable Texture
28. Generating a Tileable Texture
29. Paint Processing
Paint processing is necessary
Extracting information from the paint sample that
will be used by our rendering techniques.
Provides an avenue for increasing artistic
control.
Paint processing is used to
Extract a smooth, continuous color trajectory
from the paint sample.
Extract the brush texture from the paint sample.
This separation allows us to modify the color
trajectory of a paint sample while keeping the
brush texture fixed.
30. Paint Processing
Color trajectory extraction
[Figure: the paint sample's pixel distribution in color space]
31. Paint Processing
Color trajectory extraction
Extract an initial trajectory by averaging the pixel columns of the paint sample.
[Figure: each pixel column of the sample averaged to a single color]
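A minimal sketch of this averaging step, assuming the sample runs dark-to-light along its horizontal axis; shapes and names are illustrative:

```python
import numpy as np

def initial_trajectory(sample: np.ndarray) -> np.ndarray:
    """Average each pixel column of an (H, W, 3) paint sample, giving a
    (W, 3) sequence of colors: the initial (noisy) color trajectory."""
    return sample.astype(float).mean(axis=0)
```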
32. Paint Processing
Color trajectory extraction
Sort the points using a recursive algorithm.
[Figure: points labeled A and B]
33. Paint Processing
Color trajectory extraction
Sort the points using a recursive algorithm.
[Figure: points labeled A, M, and B]
34. Paint Processing
Color trajectory extraction
Sort the points using a recursive algorithm.
35. Paint Processing
Color trajectory extraction
Sorting using the recursive algorithm.
[Figure: the sorting process]
36. Paint Processing
Color trajectory extraction
[Figure: the resulting smooth trajectory]
37. Paint Processing
Brush texture extraction
Subtract the smooth color trajectory from each row of the paint sample:
paint sample - smooth trajectory = brush texture.
The result is an intermediate difference image with RGB values ranging from -1 to 1.
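A sketch of the subtraction, assuming normalized RGB values in [0, 1] as in the color model slide; names are illustrative:

```python
import numpy as np

def extract_brush_texture(sample: np.ndarray, trajectory: np.ndarray) -> np.ndarray:
    """Subtract the smooth (W, 3) trajectory from every row of the (H, W, 3)
    paint sample, giving a difference image with values in [-1, 1]."""
    return sample.astype(float) - trajectory[None, :, :]
```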
38. Paint Processing
Modifying the color trajectory of a paint sample:
Add a user-specified color trajectory to an extracted brush texture.
[Figure: paint sample, user-specified trajectory, and the resulting user-created paint sample]
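The corresponding recombination, a small sketch under the same assumptions; clipping keeps the result in the valid RGB range:

```python
import numpy as np

def apply_trajectory(brush_texture: np.ndarray, user_trajectory: np.ndarray) -> np.ndarray:
    """Add a user-specified (W, 3) trajectory back onto the (H, W, 3) brush
    texture to produce a new paint sample with the same brush strokes."""
    return np.clip(brush_texture + user_trajectory[None, :, :], 0.0, 1.0)
```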
39. Rendering Techniques
Object-based texture mapping
Image-based texture synthesis
View-aligned 3D texture projection
View-dependent interpolation
40. Object-Based Texture Mapping
[Figure: paint sample texture with its s and t axes]
Dynamically assign texture coordinates (s, t) to
every vertex in the model.
s captures shading information.
Must be updated every frame to reflect lighting
changes.
t captures brush texture.
Remains fixed to ensure that the texture is
coherent from frame to frame.
41. Object-Based Texture Mapping
Assign the s texture coordinate using the simple lighting equation.
Assign the t texture coordinate using a breadth-first search:
Assign a random t value to a randomly selected
vertex v in the model.
Place all neighbors of v into a queue.
Remove a vertex from the front of the queue.
Compute its t value based on the t value of its
closest neighbor that has already been processed.
Place all unprocessed neighbors of the current
vertex on the queue.
Repeat until queue is empty.
42. Object-Based Texture Mapping
Assigning the t texture coordinate: the value currently being computed depends on
the t value of its nearest processed neighbor,
the difference in y coordinates between the two vertices,
max y minus min y of the entire mesh,
the number of times the texture repeats, and
a random value.
(A sketch combining these quantities follows below.)
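A hedged sketch of the breadth-first t assignment from slides 41 and 42. The combination used below (the neighbor's t plus a repeat-scaled Δy / y-range term plus a random perturbation) is an assumption, not the thesis' exact formula; the mesh data structures and parameter names are illustrative:

```python
import random
from collections import deque

def assign_t(vertices, neighbors, repeats=4, jitter=0.05):
    """Breadth-first assignment of t texture coordinates.

    vertices:  list of (x, y, z) positions.
    neighbors: adjacency list; neighbors[i] is a list of vertex indices.
    Returns a list of t values, one per vertex (None if unreachable).
    """
    ys = [v[1] for v in vertices]
    y_range = (max(ys) - min(ys)) or 1.0
    t = [None] * len(vertices)

    start = random.randrange(len(vertices))
    t[start] = random.random()                      # random t for a random vertex
    queue = deque(neighbors[start])
    enqueued = set(neighbors[start]) | {start}

    while queue:
        v = queue.popleft()
        # nearest already-processed neighbor (here: smallest y distance)
        done = [n for n in neighbors[v] if t[n] is not None]
        n = min(done, key=lambda n: abs(vertices[v][1] - vertices[n][1]))
        dy = vertices[v][1] - vertices[n][1]
        # assumed combination of the quantities listed on the slide
        t[v] = t[n] + repeats * dy / y_range + random.uniform(-jitter, jitter)
        for m in neighbors[v]:
            if t[m] is None and m not in enqueued:
                queue.append(m)
                enqueued.add(m)
    return t
```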
43. Object-Based Texture Mapping
44. Object-Based Texture Mapping
45. Object-Based Texture Mapping
Difficulties
Texture quality is unstable under changing
lighting conditions.
Sparsely tessellated meshes
Texture is stretched and distorted.
Subdividing the mesh reduces stretching but does
not eliminate it.
Shading is lost on flat surfaces.
46. Object-Based Texture Mapping
[Figures: shaded scene vs. painted scene]
47. Object-Based Texture Mapping
48. Object-Based Texture Mapping
Summary
Advantages
Runs in real-time.
Zero pre-processing time.
Excellent frame-to-frame coherence (camera
movement).
Disadvantages
Texture quality varies under changing lighting
conditions.
Texture is completely lost on flat surfaces.
49. Image-Based Texture Synthesis
Synthesize paint over the region covered by the model in image space.
[Figures (slides 49-52): synthesis over the model region, shaded from dark to light]
53. Image-Based Texture Synthesis
Generate the color component and the texture
component of the final image separately.
The color component is obtained by using the
shade value at every pixel to index into the
color trajectory.
The texture component is obtained using image
quilting.
Add the resulting color component and the texture component together pixel by pixel.
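A minimal sketch of the per-pixel composition, assuming a per-pixel shade image in [0, 1], the extracted color trajectory as a (W, 3) array, and a texture component produced by image quilting; names are illustrative:

```python
import numpy as np

def compose(shade: np.ndarray, trajectory: np.ndarray, texture: np.ndarray) -> np.ndarray:
    """Color component: index the trajectory by shade. Texture component: the
    quilted difference image. Final image: their per-pixel sum, clipped."""
    idx = np.clip((shade * (len(trajectory) - 1)).astype(int), 0, len(trajectory) - 1)
    color_component = trajectory[idx]           # (H, W, 3)
    return np.clip(color_component + texture, 0.0, 1.0)
```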
54. Image-Based Texture Synthesis
[Figures: color component only vs. texture component only]
55. Image-Based Texture Synthesis
56. Image-Based Texture Synthesis
Creating animations
Naively re-synthesizing each frame from scratch
produces a shower door effect.
Place an additional constraint that each block match the previous frame as much as possible (computed as the squared pixel difference error).
This increases rendering time but reduces the
shower door effect for small camera or light
movements.
Re-synthesize an entirely new image every nth frame and blend between the values of these key-frames while recomputing the shading for every frame.
57. Image-Based Texture Synthesis
58. Image-Based Texture Synthesis
Summary
Advantages
High quality individual frames.
Disadvantages
Slow rendering time: 20 seconds to 1 minute per frame.
Animations suffer from the shower door effect.
59. View-Aligned 3D Texture Projection
Recent advances in graphics hardware allow for the use of volume textures.
A volume texture is a stack of 2D textures.
60. View-Aligned 3D Texture Projection
Texture synthesis is used as a pre-processing
step.
Divide input sample into 8 regions of roughly
constant shade.
Use image quilting to synthesize larger versions
(512 x 512) of each region.
Ensure that each synthesized image is tileable.
Create a 3D texture by stacking the tileable
images in order of increasing shade value.
61. View-Aligned 3D Texture Projection
62. View-Aligned 3D Texture Projection
Texture coordinates s and t are generated by mapping the horizontal and vertical screen coordinates to the interval [0, 511] respectively.
The value for r is generated by mapping the lighting value to the interval [0, 7].
Hardware automatically performs blending for values of r that are not whole numbers.
Stroke density is adjusted by a user-defined scale factor on s and t.
Advantage of having tileable textures: no seams will be visible when the texture repeats over the image.
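A software sketch of the lookup (the graphics hardware performs the equivalent blend); the volume is assumed to be the stack of eight 512 x 512 tileable images from the pre-processing step, and names are illustrative:

```python
import numpy as np

def view_aligned_lookup(volume: np.ndarray, x: int, y: int, shade: float, scale: float = 1.0) -> np.ndarray:
    """volume: (8, 512, 512, 3) stack ordered by increasing shade.
    x, y: screen coordinates; shade: lighting value in [0, 1]."""
    depth, H, W = volume.shape[:3]
    s = int(x * scale) % W                     # tileable, so wrapping causes no seams
    t = int(y * scale) % H
    r = shade * (depth - 1)                    # map lighting value to [0, 7]
    r0 = int(np.floor(r))
    r1 = min(r0 + 1, depth - 1)
    frac = r - np.floor(r)
    # blend between adjacent slices for non-integer r (what the hardware does)
    return (1.0 - frac) * volume[r0, t, s] + frac * volume[r1, t, s]
```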
63. View-Aligned 3D Texture Projection
64. View-Aligned 3D Texture Projection
Difficulties
The texture appears to be fixed to the screen and
the model slides through it.
Keep track of the offset in s and t that is
adjusted as the model moves. Increment this
offset by the average screen space displacement
of the set of vertices most directly facing the
camera.
Gives the illusion that the texture follows the
movement of the camera.
Not perfect but increases visual coherence.
65. View-Aligned 3D Texture Projection
66. View-Aligned 3D Texture Projection
Summary
Advantages
Almost matches the quality of the individual
frames produced by the image-based texture
synthesis technique.
Runs in real-time.
Fair degree of frame-to-frame coherence.
Disadvantages
Lengthy pre-processing time: synthesizing eight 512 x 512 textures and making each tileable may take as long as 15 minutes.
67. View-Dependent Interpolation
Assign specific textures to the important views
of the model.
Perform blending between these textures for all
other views.
User specifies which views are important.
Every face in the model must appear in at least
one of these important views.
This ensures that there are no gaps (unpainted regions) in the resulting image.
Typically 12 to 15 views are sufficient.
68. View-Dependent Interpolation
Use image quilting to generate textures for each of the n views.
Assume v is the first view synthesized.
Some subset of faces in v may also be present in view v+1.
Copy the texture from these faces over to v+1.
Use it as a guide for the synthesis of the remaining faces in v+1.
Repeat for subsequent views.
This improves frame-to-frame coherency.
This approach causes some distortion, as a face in v may not have the same shape or size in v+1 due to the curvature of the model.
Create a 3D texture by stacking the n 2D textures.
69. View-Dependent Interpolation
Rendering a particular view
Obtain list of faces that are present in that
view.
For each face, perform a lookup to determine
which subset of the n textures covers that face.
Weights are assigned to each of these textures
based on how much the viewing direction
associated with that texture differs from the
current viewing direction.
The highest weight is assigned to the texture
that most closely matches the current viewing
direction.
Use the weights to blend between the textures
associated with the face.
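A sketch of the per-face blend, assuming each important view is described by a unit viewing direction and the textures covering the face are known; weighting by angular closeness is stated on the slide, but the exact normalization below is an assumption:

```python
import numpy as np

def blend_face_textures(view_dir, tex_dirs, tex_colors):
    """view_dir: (3,) current unit viewing direction.
    tex_dirs:   (k, 3) unit viewing directions of the textures covering this face.
    tex_colors: (k, H, W, 3) texture patches for this face.
    Returns the blended patch."""
    # closeness to the current view: larger dot product = smaller difference
    closeness = np.clip(tex_dirs @ view_dir, 0.0, None)
    weights = closeness / closeness.sum()            # highest weight to the closest view
    return np.tensordot(weights, tex_colors, axes=1) # weighted per-pixel blend
```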
70. View-Dependent Interpolation
71. View-Dependent Interpolation
Summary
Advantages
Runs in real-time.
Good frame-to-frame coherence.
Disadvantages
Lengthy pre-processing time: depends on how many views the user specifies as being important (20 seconds to 1 minute per view).
Loss of texture quality due to distortion to fit
the contours of the mesh.
72. Metrics
Goals:
Allow a quantitative comparison of the rendering techniques.
Provide insight that can be used to improve our techniques.
Three metrics
Texture similarity metric.
Shading error metric.
Frame-to-frame coherency metric.
73. Texture Similarity Metric
Types of texture distortion
Stretch.
Rotation or shearing effects.
Discontinuities or poor texture sampling.
Building blocks
Difference in color histograms of the two images.
Perform edge detection to locate vertical,
horizontal, and diagonal edges in both images.
Difference of the corresponding edge histograms
of the two images.
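A sketch of the color-histogram building block, assuming 3D RGB histograms compared with an L1 difference; the bin count and distance are assumptions, and the edge histograms would be compared the same way:

```python
import numpy as np

def color_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalized 3D RGB histogram of an (H, W, 3) image with values in [0, 1]."""
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
    return hist / hist.sum()

def histogram_difference(a: np.ndarray, b: np.ndarray) -> float:
    """L1 difference between the color histograms of two images."""
    return float(np.abs(color_histogram(a) - color_histogram(b)).sum())
```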
74. Shading Error Metric
Measures how close the shade value at a pixel in
the resulting image is to the desired shading
value.
Find the k pixels in the input texture that are
closest to a given pixel in the resulting image.
Average the shade values associated with these k
pixels (s1).
Compare with the correct shade value s2, obtained from the lighting equation.
The error is |s1 - s2|.
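A sketch of the metric at a single pixel, assuming that "closest" means closest in RGB color (the slides do not state the distance) and that each input-texture pixel has a known shade value:

```python
import numpy as np

def shading_error(result_pixel, input_colors, input_shades, correct_shade, k=5):
    """result_pixel: (3,) color; input_colors: (N, 3); input_shades: (N,).
    Returns |s1 - s2|, where s1 averages the shades of the k nearest input pixels."""
    dists = np.linalg.norm(input_colors - result_pixel, axis=1)
    nearest = np.argsort(dists)[:k]        # k closest input-texture pixels
    s1 = input_shades[nearest].mean()
    return abs(s1 - correct_shade)
```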
75. Frame-to-Frame Coherency Metric
Image space:
Compare the block around a pixel at a given location in frame v with the block around the pixel at the same location in frame v+1.
Object space
Select a point on the model that is visible in
both frames. Compare the blocks around that point.
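An image-space sketch, assuming square blocks and a mean squared difference; the block size and error measure are assumptions:

```python
import numpy as np

def block_difference(frame_a: np.ndarray, frame_b: np.ndarray, x: int, y: int, half: int = 4) -> float:
    """Mean squared difference between the blocks around (x, y) in two frames.
    Assumes the block lies fully inside both frames."""
    a = frame_a[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    b = frame_b[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    return float(np.mean((a - b) ** 2))
```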
76. Results
77. Conclusion
Paint processing
Separating color trajectory from brush texture.
Manipulate color trajectory while keeping brush
texture constant.
Techniques for shading 3D models using scanned images of real paint samples:
Object-based texture mapping.
Image-based texture synthesis.
View-aligned 3D texture projection.
View-dependent interpolation.
Metrics used to evaluate rendering techniques
Texture similarity.
Shading correctness.
Frame-to-frame coherence.
78. Future Work
Improving the rendering techniques
The object-based texture mapping technique and
the image-based texture synthesis technique have
complementary properties.
Explore techniques that combine the efficiency of object-based texture mapping with the quality of image-based texture synthesis.
Recent work on performing synthesis directly on
models coupled with 3D texturing could be used.
Extend each of our techniques to capture artistic
silhouettes in styles provided by the user.
Automatic scene composition
Traditional approaches are time consuming.
Need an intuitive method to specify what a scene
should look like.
Build on recent work by Dani Lischinski.
79. Questions?
80. Acknowledgements
Editors: Dr. Randy Glean, Martin Hassett, Ronelle Bailey, Eucelia Hill, Delvin Defoe.
Special thanks: Chancellors Graduate Fellowship Program.