Title: Computer Graphics
1. Computer Graphics
- Classic Rendering Pipeline Overview
2. What is Rendering?
- Rendering is the process of taking 3D models and producing a single 2D picture
- The classic rendering approaches all work with polygons, triangles in particular
- Does not include creation of the 3D models
- Modeling
- Does not include movement of the models
- Animation/physics/AI
3. What is a Pipeline?
- Classic rendering is done as a pipeline
- Triangles travel from stage to stage
- At any point in time, there are triangles at all stages of processing
- Can obtain better throughput with a pipeline
- Much of the pipeline is now in hardware
4. Classic Rendering Pipeline
- Model Space -> (Model-View Transformations) -> View Space -> (Projection) -> Normalized Device Space -> (Viewport Mapping) -> Screen Space
5. Model Space
- Model Space is the coordinate system attached to the specific model that contains the triangle
- It is easiest to define models in this local coordinate system
- Separation of object design from world location
- Multiple instances of the object
6. Model View Transformations
- These are 3D transformations that simply change the coordinate system with which the triangles are defined
- The triangles are not actually moved
- Model Coordinate Space -> World Coordinate Space -> View Coordinate Space
7. Model to World Transformation
- Each object is defined w.r.t. its own local model coordinate system
- There is one world coordinate system for the entire scene
8. Model to World Transformation
- Transformation can be performed if one knows the position and orientation of the model coordinate system relative to the world coordinate system
- Transformations place all objects into the same coordinate system (the world coordinate system)
- There is a different transformation for each object
- An object can consist of many triangles
9. World to View Transformation
- Once all the triangles are defined w.r.t. the world coordinate system, we need to transform them to view space
- View space is defined by the coordinate system of the virtual camera
- The camera is placed using world space coordinates
- There is one transformation from world to view space for all triangles (if only one camera)
10. World to View Transformation
- The camera's film is parallel to the view xy plane
- The camera points down the negative view z axis
- At least for the right-handed OpenGL coordinate system
- Things are opposite for the left-handed DirectX system
11. Placing the Camera
- In OpenGL the default view coordinate system is identical to the world coordinate system
- The camera's lens points down the negative z axis
- There are several ways to move the view from its default position
12. Placing the Camera
- Rotations and translations can be performed to place the view coordinate system anywhere in the world
- Higher-level functions can be used to place the camera at an exact position
- gluLookAt(eye point, center point, up vector)
- Similar function in DirectX
13. Transformation Order
- Note that order of transformations is important
- Points move from model space to world space
- Then from world space to view (camera) space
- This implies an order of
- Pview = (Tworld2view) (Tmodel2world) (Pmodel)
- That is, the model to world transform needs to be applied first to the point
14. World to View Details
- Just to give you a taste of what goes on behind the scenes with gluLookAt
- It needs to form a 4x4 matrix that transforms world coordinate points into view coordinate points
- To do this it simply forms the matrix that represents the series of transformation steps that get the camera coordinate system to line up with the world coordinate system
- How does it do that? What would the steps be if you had to implement the function in the API? (one possible sketch follows)
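- A rough sketch of one way such a function could be written; the helper names (Vec3, myLookAt) and the row-major layout are made up for illustration and are not the actual GLU source:

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    static Vec3 vsub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
    static double vdot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 vcross(Vec3 a, Vec3 b) {
        Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        return r;
    }
    static Vec3 vnorm(Vec3 v) {
        double len = sqrt(vdot(v, v));
        Vec3 r = { v.x/len, v.y/len, v.z/len };
        return r;
    }

    /* Build a 4x4 world-to-view matrix (row-major here for readability). */
    void myLookAt(Vec3 eye, Vec3 center, Vec3 up, double m[4][4])
    {
        Vec3 f = vnorm(vsub(center, eye));   /* forward: camera looks down -z       */
        Vec3 s = vnorm(vcross(f, up));       /* side: the view x axis               */
        Vec3 u = vcross(s, f);               /* recomputed up: the view y axis      */

        /* Rotate the world axes onto the camera axes, then translate by -eye. */
        m[0][0] =  s.x; m[0][1] =  s.y; m[0][2] =  s.z; m[0][3] = -vdot(s, eye);
        m[1][0] =  u.x; m[1][1] =  u.y; m[1][2] =  u.z; m[1][3] = -vdot(u, eye);
        m[2][0] = -f.x; m[2][1] = -f.y; m[2][2] = -f.z; m[2][3] =  vdot(f, eye);
        m[3][0] = 0.0;  m[3][1] = 0.0;  m[3][2] = 0.0;  m[3][3] = 1.0;
    }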
15. View Space
- There are several operations that take place in view space coordinates
- Back-face culling
- View Volume clipping
- Lighting
- Note that view space is still a 3D coordinate system
16. Back-face Culling
- Back-face culling removes triangles that are not facing the viewer
- Back-face is towards the camera
- Normal extends off the front-face
- Default is to assume triangles are defined counter clock-wise (ccw)
- At least this is the default for a right-handed coordinate system (OpenGL)
- DirectX's left-handed coordinate system is backwards (cw is front facing)
[Figure: polygon normal Np and view vector V]
17. Surface Normal
- Each triangle has a single surface normal
- The normal is perpendicular to the plane of the triangle
- Easy way to define the orientation of the surface
- Again, the normal is just a vector (no position)
[Figure: triangle ABC with surface normal N]
18. Computing the Surface Normal
- Let V1 be the vector from point A to point B
- Let V2 be the vector from point A to point C
- N = V1 x V2
- N is often normalized
- Note that order of vertices becomes important
- Triangle ABC has an outward facing normal
- Triangle ACB has an inward facing normal
[Figure: triangle ABC with surface normal N]
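- A small sketch of this computation in C (the function name and the float[3] representation are just for illustration):

    #include <math.h>

    /* N = (B - A) x (C - A), then normalized */
    void surfaceNormal(const float A[3], const float B[3], const float C[3], float N[3])
    {
        float v1[3] = { B[0]-A[0], B[1]-A[1], B[2]-A[2] };   /* V1 = B - A */
        float v2[3] = { C[0]-A[0], C[1]-A[1], C[2]-A[2] };   /* V2 = C - A */
        float len;
        N[0] = v1[1]*v2[2] - v1[2]*v2[1];                    /* cross product */
        N[1] = v1[2]*v2[0] - v1[0]*v2[2];
        N[2] = v1[0]*v2[1] - v1[1]*v2[0];
        len = sqrtf(N[0]*N[0] + N[1]*N[1] + N[2]*N[2]);
        N[0] /= len;  N[1] /= len;  N[2] /= len;             /* normalize */
    }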
19. Back-face Culling
- Recall that V1 . V2 = |V1| |V2| cos(θ)
- If both vectors are unit vectors this simplifies to V1 . V2 = cos(θ)
- Recall that cos(θ) is positive if θ is between -90 and 90 degrees
- Thus, if the dot product of the View vector (V) and the Polygon Normal vector (Np) is positive we can cull (remove) it
[Figure: view vector V and polygon normal Np, with the angle θ between them measured against the -90 to 90 degree range]
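- In code the test only needs the sign of that dot product; a sketch following the slide's convention, where V points from the camera toward the triangle:

    /* Returns 1 if the triangle can be culled (back-facing), 0 otherwise. */
    int backFacing(const float V[3], const float Np[3])
    {
        /* V . Np > 0 means the normal points away from the camera */
        return (V[0]*Np[0] + V[1]*Np[1] + V[2]*Np[2]) > 0.0f;
    }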
20. Back-face Culling
- This technique should remove approximately half the triangles in a typical scene at a very early stage in the pipeline
- We always want to dump data as early as possible
- Dot products are really fast to compute
- Can be optimized further because all that is necessary is the sign of the dot product
21. Back-face Culling
- When using an API such as OpenGL or DirectX there is a toggle to turn on/off back-face culling
- There is also a toggle to select which side is considered the front side of the triangle (the side with the normal or the other side)
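- In OpenGL the toggles look like this (one typical setup):

    glEnable(GL_CULL_FACE);   /* turn back-face culling on (glDisable to turn it off) */
    glFrontFace(GL_CCW);      /* counter-clockwise triangles are front facing         */
    glCullFace(GL_BACK);      /* discard the back-facing triangles                    */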
22. View Volume Clipping
- View Volume Clipping removes triangles that are not in the camera's sight
- The View Volume of a perspective camera is a 3D shape that looks like a pyramid with its top cut off
- Called a Frustum
- Thus, this step is sometimes called Frustum clipping
- The Frustum is defined by near and far clipping planes as well as the field of view
- More info later when talking about projections
23. View Volume Clipping
24. View Volume Clipping
- View Volume Clipping happens automatically in OpenGL and DirectX
- You need to be aware of it because it is easy to get black screens if you set your view volume to be the wrong size
- Also, for some of the game speed-up techniques we will need to perform some view volume clipping by hand in software
25. Lighting
- The easiest form of lighting is to just assign a color to each vertex
- Again, color is a state-machine type of thing
- More realistic forms of lighting involve calculating the color value based on simulated physics
26. Real-world Lighting
- Photons emanate from light sources
- Photons collide with surfaces and are
- Absorbed
- Reflected
- Transmitted
- Eventually some of the photons make it to your eyes, enabling you to see
27. Lighting Models
- There are different ways to model real-world lighting inside a computer
- Local reflection models
- OpenGL
- Direct3D
- Global illumination models
- Raytracing
- Radiosity
28. Local Reflection Models
- Calculates the reflected light intensity from a point on the surface of an object using only direct illumination
- As if the object were alone in the scene
- Some important artifacts not taken into account by local reflection models are
- Shadows from other objects
- Inter-object reflection
- Refraction
29. Phong Local Reflection Model
- 3 types of lighting are considered in the Phong model
- Diffuse
- Specular
- Ambient
- These 3 types of light are then combined into a color for the surface at the point in question
30. Diffuse
- Diffuse reflection is what happens when light bounces off a matte surface
- Perfect diffuse reflection is when light reflects equally in all directions
31. Diffuse
- We don't actually cast rays from the light source and scatter them in all directions, hoping one of them will hit the camera
- This technique is not very efficient!
- Even offline techniques such as radiosity, which try to simulate diffuse lighting, don't go this far!
- We just need to know the amount of light falling on a particular surface point
32. Diffuse
[Figure: surface normal N, light direction L, and the angle θ between them]
- The amount of light reflected (the brightness) of the surface at a point is proportional to the cosine of the angle between the surface normal, N, and the direction of the light, L.
- In particular: Id = Ii cos(θ) = Ii (N . L)
- Where Id is the resulting diffuse intensity, Ii is the incident intensity, and N and L are unit vectors
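- A minimal sketch of the diffuse term in C (N and L are assumed to be unit vectors; the function name is illustrative):

    /* Id = Ii * max(N . L, 0) -- clamp so surfaces facing away from the light get no diffuse light */
    float diffuseIntensity(float Ii, const float N[3], const float L[3])
    {
        float nDotL = N[0]*L[0] + N[1]*L[1] + N[2]*L[2];
        if (nDotL < 0.0f) nDotL = 0.0f;
        return Ii * nDotL;
    }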
33. Diffuse
- A couple of examples
- Ii = 0.8, θ = 0 degrees -> Id = 0.8
- The full amount is reflected
- Ii = 0.8, θ = 45 degrees -> Id = 0.57
- 71% is reflected
[Figure: the normal N and light direction L at θ = 0 degrees and at θ = 45 degrees]
34. Diffuse
- Diffuse reflection only depends on
- Orientation of the surface
- Position of the light
- Does not depend on
- Viewing position
- Bottom sphere is viewed from a slightly lower position than the top sphere
35. Specular
- Specular highlights are the mirror-like reflections found on shiny metals and plastics
36. Specular
[Figure: surface normal N, light direction L, reflection vector R, view direction V, the angle θ on either side of N, and the angle W between R and V]
- N is again the normal of the surface at the point we are lighting
- L is again the direction to the light source
- R is the reflection vector
- V is the direction to the viewer (camera)
37. Specular
[Figure: the same configuration of N, L, R, V, θ, and W]
- We want the intensity to be greatest in the direction of the reflection vector and fall off quite fast around the reflection vector
- In particular: Is = Ii cos^n(W) = Ii (R . V)^n
- Where Is is the resulting specular intensity, Ii is the incident intensity, R and V are unit vectors, and n is an index that simulates the degree of surface imperfection
38. Specular
- As n gets bigger the drop-off around R is faster
- As n -> infinity, the surface is a perfect mirror (all reflection is directly along R)
- cos(0) = 1 and 1^infinity = 1
- cos(anything bigger than 0) = a number < 1, and (a number < 1)^infinity -> 0
39. Specular
- Examples of various values of n
- Left: diffuse only
- Middle: low n specular added to diffuse
- Right: high n specular added to diffuse
40. Specular
- Calculation of N, V and L is easy
- N with a cross product on the triangle vertices
- V and L with the surface point and the camera or light position, respectively
- Calculation of R requires mirroring L about N, which requires a bit of geometry
- R = 2 N ( N . L ) - L
- Note: Foley p. 730 has a good explanation of this geometry
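- A sketch of the reflection vector and the specular term from the two formulas above (N, L, and V are assumed to be unit vectors; the function name is illustrative):

    #include <math.h>

    /* Is = Ii * (R . V)^n  with  R = 2 N (N . L) - L */
    float specularIntensity(float Ii, const float N[3], const float L[3],
                            const float V[3], float n)
    {
        float nDotL = N[0]*L[0] + N[1]*L[1] + N[2]*L[2];
        float R[3] = { 2.0f*N[0]*nDotL - L[0],
                       2.0f*N[1]*nDotL - L[1],
                       2.0f*N[2]*nDotL - L[2] };
        float rDotV = R[0]*V[0] + R[1]*V[1] + R[2]*V[2];
        if (rDotV < 0.0f) rDotV = 0.0f;    /* no highlight if R points away from the viewer */
        return Ii * powf(rDotV, n);
    }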
41. Specular
- The reflection vector, R, is time consuming to compute, so often it is approximated with the halfway vector, H, which is halfway between the light direction and the viewing direction: H = (L + V) / 2
- Then the equation is
- Is = Ii (H . N)^n
[Figure: normal N, halfway vector H, light direction L, view direction V, with equal angles a on either side of H]
42. Specular
- Specular reflection depends on
- Orientation of the surface
- Position of the light
- Viewing position
- The bottom picture was taken with a slightly lower viewing position
- The specular highlights change when the camera moves
43. Ambient
- Note in the previous examples that the part of the sphere not facing the light is completely black
- In the real world, light would bounce off other objects (like floors and walls) and eventually some light would get to the back of the sphere
- This global bouncing is what the ambient component models
- And "models" is a very loose term here because it isn't at all close to what happens in the real world
44. Ambient
- The amount of ambient light added to the point being lit is simply Ia
- Note that this doesn't depend on
- surface orientation
- light position
- viewing direction
45. Phong Local Illumination Model
- The 3 components of reflected light are combined to form the total reflected light
- I = Ka Ia + Kd Id + Ks Is
- Where Ia, Id and Is are as computed previously and Ka, Kd and Ks are 3 constants that control how to mix the components
- Additionally, Ka + Kd + Ks = 1
- The OpenGL and DirectX models are both based on the Phong local illumination model
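- Combining the three terms is then a one-liner per color channel (a sketch; Ka + Kd + Ks is assumed to sum to 1 as above):

    /* I = Ka*Ia + Kd*Id + Ks*Is */
    float phongIntensity(float Ka, float Ia, float Kd, float Id, float Ks, float Is)
    {
        return Ka * Ia + Kd * Id + Ks * Is;
    }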
46. OpenGL Model - Light Color
- Incident light (Ii)
- Represents the color of the light source
- We need 3 values (Iir, Iig, Iib)
- Example: (1.0, 0.0, 0.0) is a red light
- Lighting calculations to determine Ia, Id, and Is now must be done 3 times each
- Each color channel is calculated independently
- Further control is gained by defining separate (Iir, Iig, Iib) values for ambient, diffuse, and specular
47. OpenGL Model - Light Color
- So for each light in the scene you need to define the following colors
- Ambient (r, g, b)
- Diffuse (r, g, b)
- Specular (r, g, b)
- The ambient Ii's are used in the Ia equation
- The diffuse Ii's are used in the Id equation
- The specular Ii's are used in the Is equation
48. OpenGL Model - Material Color
- Material properties (K values)
- The equations to compute Ia, Id and Is just compute how much light from the light source is reflected off the object
- We must also define the color of the object
- Ambient color (r, g, b)
- Diffuse color (r, g, b)
- Specular color (r, g, b)
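- A typical OpenGL setup of one light's colors and one material's colors (the RGBA values below are arbitrary example numbers):

    GLfloat lightAmbient[]  = { 0.2f, 0.2f, 0.2f, 1.0f };
    GLfloat lightDiffuse[]  = { 1.0f, 1.0f, 1.0f, 1.0f };
    GLfloat lightSpecular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_AMBIENT,  lightAmbient);
    glLightfv(GL_LIGHT0, GL_DIFFUSE,  lightDiffuse);
    glLightfv(GL_LIGHT0, GL_SPECULAR, lightSpecular);

    GLfloat matAmbient[]  = { 0.3f, 0.0f, 0.0f, 1.0f };   /* a red material */
    GLfloat matDiffuse[]  = { 0.8f, 0.0f, 0.0f, 1.0f };
    GLfloat matSpecular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    glMaterialfv(GL_FRONT, GL_AMBIENT,  matAmbient);
    glMaterialfv(GL_FRONT, GL_DIFFUSE,  matDiffuse);
    glMaterialfv(GL_FRONT, GL_SPECULAR, matSpecular);
    glMaterialf(GL_FRONT, GL_SHININESS, 50.0f);           /* the n value in the specular equation */

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);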
49. OpenGL Model - Color
- The ambient material color is multiplied by the amount of reflected ambient light
- Ka Ia
- Similar process for diffuse and specular
- Then, just like in the Phong model, they are all added together to produce the final color
- Note that each K and I are vectors of 3 color values that are all computed independently
- Also need to define a "shininess" material value to be used as the n value in the specular equation
50. OpenGL Model - Color
- By mixing the material color with the lighting color, one can get realistic light
- White light, red material
- Green light, same red material
51. OpenGL Model - Emissive
- The OpenGL model also allows one to make objects emissive
- They look like they produce light (glow)
- The extra light they produce isn't counted as an actual light as far as the lighting equations are concerned
- The emissive light values (Ke) are simply added to the resulting reflected values
52. OpenGL Model - Attenuation
- One can also specify how fast the light will fade as it travels away from the light source
- Controlled by an attenuation equation
- A = 1 / (kc + kl*d + kq*d^2)
- Where the 3 k's can be set by the programmer and d represents the distance between the light source and the vertex in question
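- In OpenGL the three constants map to (the numbers here are just example values):

    glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION,  1.0f);    /* kc */
    glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION,    0.05f);   /* kl */
    glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.001f);  /* kq */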
53. OpenGL Model - Equation
- So the total equation is
- Vertex Color = Ke + A * ( (Ka La) + (Kd Ld (L . N)) + (Ks Ls (((L+V)/2) . N)^shininess) )
- For each of the 3 colors (R, G, B) independently
- For each light turned on in the scene
- For each vertex in the scene
- Note that the above equation is slightly simplified
- If either of the dot products is negative, use 0
- Spotlight effect is not included
- Global ambient light is not included
54. Light Sources
- There are several classifications of lights
- Point lights
- Directional lights
- Spot lights
- Extended lights
55. Projection
- Projection is what takes the scene from 3D down to 2D
- There are several types of projection
- Orthographic
- CAD
- Perspective
- Normal camera
- Stereographic
- Fish-eye lens
56. Orthographic Projections
- Equations
- x' = x
- y' = y
- Main property that is preserved is that parallel lines in 3D remain parallel in 2D
[Figure: point (x, y, z) projected straight onto (x', y') on the image plane]
57. Perspective Projections
- Equations
- x' = f * x / z
- y' = f * y / z
- Creates a foreshortening effect
- Main property that is preserved is that straight lines in 3D remain straight in 2D
[Figure: point (x, y, z) projected through the center of projection onto (x', y') on the image plane at distance f]
58. Projection in OpenGL
- Set the projection matrix instead of the modelview matrix
- The equations given previously can be turned into 4x4 matrix form (what else would you expect!)
- Orthographic (view volume is a rectangular box)
- glOrtho(left, right, bottom, top, near, far)
- Perspective (view volume is a frustum)
- gluPerspective(vertFOV, aspectRatio, nearClipPlane, farClipPlane)
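- A typical way to set up a perspective projection (a sketch with example values; note that gluPerspective's FOV argument is the vertical field of view in degrees):

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0,                /* vertical FOV in degrees                 */
                   800.0 / 600.0,       /* aspect ratio (width / height)           */
                   1.0, 1000.0);        /* near and far clipping planes            */
    glMatrixMode(GL_MODELVIEW);         /* switch back for camera/model transforms */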
59. FOV Calculation
- It is important to pick a good FOV
- If the image on the screen stays the same size
- The bigger the FOV, the closer the center of projection is to the image plane
60. FOV Calculation
- This implies that the human viewer needs to move their eye closer to the actual screen to keep the scene from being distorted as the FOV increases
- To pick a good FOV (a small formula sketch follows this list)
- Put the actual size window on the screen
- Sit at a comfortable viewing distance
- Determine how much of your eye's viewing angle that window subtends
- This method effectively places your eye at the center of projection and will create the least distortion
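- As a sketch, the "comfortable distance" rule boils down to one formula; the function name and the example numbers (a 30 cm tall window viewed from 60 cm away) are made up for illustration:

    #include <math.h>

    /* FOV (in degrees) subtended by a window of the given height at the given viewing distance */
    double pickFOV(double windowHeight, double viewingDistance)
    {
        const double RAD_TO_DEG = 57.29577951308232;
        return 2.0 * atan2(windowHeight * 0.5, viewingDistance) * RAD_TO_DEG;
    }

    /* pickFOV(30.0, 60.0) is roughly 28 degrees */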
61. Normalized Device Space
- This is our first 2D space
- Although some 3D information is often kept
- The major operations that happen in this space are
- 2D clipping
- Pixel Shading
- Hidden surface removal
- Texture mapping
62. 2D Clipping
- When 3D objects were clipped against the view volume, triangles that were partially inside the volume were kept
- When these triangles make it to this stage they have parts that hang outside the window; these parts are clipped
63. Pixel Shading
- The lighting equations we have seen are all about obtaining a color at a particular vertex
- Pixel shading is all about taking those colors and coloring each pixel in the triangle
- There are 3 main methods
- Flat shading
- Gouraud shading
- Phong shading (not to be confused with the Phong local reflectance model previously discussed)
64. Flat Shading
- A single color is computed for the triangle at
- The center of the triangle
- Using the normal of the triangle surface as N
- The computed color is used to shade every pixel in the triangle uniformly
- Produces images that clearly show the underlying polygons
- OpenGL: glShadeModel(GL_FLAT)
65. Flat Shading Example
66. Gouraud Shading
- 3 colors are computed for the triangle at
- Each vertex
- Using neighbor averaged normals as each N
- What is a neighbor averaged normal?
- The average of the surface normals of all triangles that share this vertex (a sketch follows below)
- If the triangle model is approximating an analytical surface then normals could be computed directly from the surface description
- I did this in my sphere examples for the lighting model
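- A sketch of computing neighbor averaged normals, assuming per-face normals are already available; the data layout and function name are made up for illustration:

    #include <math.h>

    /* vertexNormals[v] = normalized sum of faceNormals[f] over every face f that uses vertex v */
    void averageNormals(int numFaces, const int faces[][3], const float faceNormals[][3],
                        int numVertices, float vertexNormals[][3])
    {
        int f, v, i;
        for (v = 0; v < numVertices; v++)
            vertexNormals[v][0] = vertexNormals[v][1] = vertexNormals[v][2] = 0.0f;

        for (f = 0; f < numFaces; f++)             /* accumulate each face normal */
            for (i = 0; i < 3; i++) {
                v = faces[f][i];
                vertexNormals[v][0] += faceNormals[f][0];
                vertexNormals[v][1] += faceNormals[f][1];
                vertexNormals[v][2] += faceNormals[f][2];
            }

        for (v = 0; v < numVertices; v++) {        /* normalize the sums */
            float *n = vertexNormals[v];
            float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
            if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
        }
    }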
67. Gouraud Shading
- Bi-linear interpolation is used to shade the pixels from the 3 vertex colors
- Interpolation happens in 2D
- The advantage Gouraud shading has over flat shading is that the underlying polygon structure can't be seen
- OpenGL: glShadeModel(GL_SMOOTH)
68. Gouraud Shading Example
- The problem with Gouraud shading is that specular highlights don't interpolate correctly
- If the object isn't constructed with enough triangles, artifacts can be seen (left: 320 triangles, right: 5120 triangles)
69. Phong Shading
- Many colors are computed for the triangle at
- The back projection of each pixel onto the 3D triangle
- Using normals that have been bi-linearly interpolated (in 2D) from the normals at the 3 vertices
- The normals at the 3 vertices are still computed as neighbor averaged normals
- Each pixel gets its own computed color
70. Phong Shading
- The advantage Phong shading has over Gouraud shading is that it allows the interior of a triangle to contain specular highlights
- The disadvantage is that it is easily 4-5 times more expensive
- OpenGL does not support Phong shading
71. Phong Shading Example
72. Shading Comparison Examples
- Wireframe
- Flat
- Gouraud
- Phong
73. Hidden Surface Removal
- The problem is that we have polygons that overlap in the image and we want to make sure that the one in front shows up in front
- There are several ways to solve this problem
- Painter's algorithm
- Z-buffer algorithm
74. Painter's Algorithm
- Sort the polygons by depth from the camera
- Paint the polygons in order from farthest to nearest
75. Painter's Algorithm
- There are two major problems with the painter's algorithm
- Wasteful of time because every polygon gets drawn on the screen even if it is entirely hidden
- Only handles polygons that don't overlap in the z-coordinate
76. Z-buffer Algorithm
- The Z-buffer algorithm is pixel based
- The painter's algorithm is object based
- A buffer of identical size to the color buffer is created, called the Z-buffer (or depth buffer)
- Recall that the color buffer is where the resulting colors are placed (2 color buffers when double-buffering)
- The values in the Z-buffer are all set to the max
- The range of depth values in the view volume is mapped to 0.0 to 1.0
- OpenGL
- glClearDepth(1.0f)
- glClear(GL_DEPTH_BUFFER_BIT)
77. Z-buffer Algorithm
- For each object
- For each projected pixel in the object (x, y)
- If the z value of the current pixel is less than the Z-buffer's value at (x, y) then
- Color the pixel at (x, y) in the color buffer
- Replace the value at (x, y) in the Z-buffer with the current z value
- Else don't do anything because a previously rendered object is closer to the viewer at the projected (x, y) location
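- The same test as straight-line C (a sketch; the buffer layout and the function name are made up for illustration):

    /* One depth-tested write. zBuffer was cleared to 1.0 (the far value) before drawing. */
    void writePixel(int x, int y, float z, unsigned int color,
                    float *zBuffer, unsigned int *colorBuffer, int width)
    {
        int index = y * width + x;
        if (z < zBuffer[index]) {         /* current fragment is closer than what is stored */
            colorBuffer[index] = color;   /* color the pixel                                */
            zBuffer[index] = z;           /* remember the new, closer depth                 */
        }
        /* else: something already drawn is closer, so do nothing */
    }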
78. Z-buffer Algorithm
- Pros
- Objects can be drawn in any order
- Objects can overlap in depth
- Hardware supported in almost every graphics card
- Cons
- Memory cost (1024x768 is over 786K pixels)
- At 4 bytes per pixel, that is over 3 MB
- 4 bytes are often necessary to get the resolution we want in depth
- Some pixels are still drawn and then replaced
- Problems with transparent objects
79. Z-buffer Algorithm and Transparency
- Transparent colors need to be blended with the colors of the opaque objects behind them
- There are different blending functions (more later)
- To make blending work, the correct opaque color needs to be known, so the opaque objects need to be drawn before the transparent ones
- However, we still have the following problem
- Black opaque (drawn first)
- Blue transparent (drawn second)
- Red transparent (drawn third)
80. Z-buffer Algorithm and Transparency
- The problem
- If blue sets the Z-buffer to its depth value then the red is assumed to be blocked by the blue and won't get its color blended properly
- Solutions
- Order the transparent objects from back to front
- Fails: transparent objects can overlap in depth
- Turn off the Z-buffer test for transparent objects
- Fails: transparent objects won't be blocked by opaque objects
81. Z-buffer Algorithm and Transparency
- Correct solution
- Make the Z-buffer read-only during the drawing of the transparent objects
- Z-buffer tests are still done, so opaque objects block transparent objects that are behind them
- But Z-buffer values are not changed, so transparent objects don't block other transparent objects that are behind them
- OpenGL
- glDepthMask(GL_TRUE) // read/write
- glDepthMask(GL_FALSE) // read-only
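- A sketch of the resulting drawing order in OpenGL (drawOpaqueObjects and drawTransparentObjects are placeholder names for your own geometry code):

    glEnable(GL_DEPTH_TEST);
    drawOpaqueObjects();                  /* placeholder: draw all opaque geometry        */

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);                /* Z-buffer becomes read-only                   */
    drawTransparentObjects();             /* placeholder: draw all transparent geometry   */
    glDepthMask(GL_TRUE);                 /* restore read/write for the next frame        */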
82. Texture Mapping
- Real objects contain subtle changes in both color and orientation
- We can't model these objects with tons of little triangles to capture these changes
- Modeling would be too hard
- Rendering would be too time consuming
- Use mapping techniques to simulate it
- Texture mapping handles changes in color
83. Texture Mapping
- Model objects with normal sized polygons
- Map 2D images onto the polygons
84. The Stages of Texture Mapping
- There are 4 major stages to texture mapping
- Obtaining texture parameter space coordinates from 3D coordinates using a projector function
- Mapping the texture parameter space coordinates into texture image space coordinates using a corresponder function
- Sampling the texture image at the computed texture space coordinates
- Blending the texture value with the object color
85. Projector Functions
- A projector function is simply a way to get from a 3D point on the object to a 2D point in texture parameter space
- Texture parameter space is represented by two coordinates (u, v), both in the range 0..1
[Figure: the unit square in (U, V) texture parameter space]
86. Projector Functions
- Projector functions can be computed automatically during the rendering process by using an intermediate object
- Spherical mapping
- Cylindrical mapping
- Planar mapping
- Or projector functions can be pre-computed during the modeling stage and their results stored with each vertex
87. Intermediate Objects in Projector Functions
- An imaginary "intermediate object" is placed around the modeled object being textured
- Points on the modeled object are projected onto the intermediate object
- The texture parameter space is wrapped onto the intermediate object in a known way
88. Intermediate Objects in Projector Functions
- Also see p. 121, Real-Time Rendering
89. Intermediate Objects in OpenGL
- OpenGL has a glTexGen function that allows one to specify the type of projector function used
- Often this is used to do special types of mapping, such as environment mapping (later)
- The quadric objects, which are basically the same shape as the intermediate objects, have their texture coordinates generated in this way
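- For example, sphere-map texture coordinate generation is one common use of glTexGen:

    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
    glEnable(GL_TEXTURE_GEN_S);    /* generate the u (s) coordinate automatically */
    glEnable(GL_TEXTURE_GEN_T);    /* generate the v (t) coordinate automatically */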
90. Pre-computing Projector Functions
- Texture coordinates are simply defined at each vertex that directly map the 3D vertex into 2D parameter space
- In OpenGL
- glBegin(GL_QUADS);
- glTexCoord2f(0, 1); glVertex3f(20, 20, 2); // A
- glTexCoord2f(0, 0); glVertex3f(20, 10, 2); // B
- glTexCoord2f(1, 0); glVertex3f(30, 10, 2); // C
- glTexCoord2f(1, 1); glVertex3f(30, 20, 2); // D
- glEnd();
[Figure: the textured quad with vertices A, B, C, D]
91. Texture Editors
- Texture editors can be used to help in the manual placement of texture coordinates
92. Interpolating Texture Coordinates
- Texture coordinates only provide (u, v) values at the vertices of the polygon
- We still need to fill in each pixel location in the interior of the polygon
- These are filled by bi-linearly interpolating the texture parameter space coordinates in 2D space
- This can be done at the same time as we do the interpolation for lighting and depth calculations
93. Corresponder Functions
- The corresponder function takes the (u, v) values and maps them into texture image space (e.g. 128 pixels by 64 pixels)
[Figure: the 0..1 (U, V) parameter space mapped onto a 128 x 64 texture image in (X, Y)]
94. Corresponder Functions
- Corresponder functions allow us to
- Change the size of the image used without having to redefine our projector functions (or redefine all our texture coordinates)
- Map to subsections of the image
- Specify what happens outside the range 0..1
95. Mapping to a Subsection
- Allows you to store several small texture images in a single large texture image
- By default it maps to the entire texture image
[Figure: the (U, V) parameter space mapped onto a subsection of the 128 x 64 texture image]
96. What Happens Outside 0..1
- The 3 main approaches are
- Repeat/Tile: the image is repeated multiple times by simply dropping the integer part of the value
- Clamp: values are clamped to the range, resulting in the edge values being repeated
- Border: values outside the range are displayed in a given border color
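- In OpenGL these are selected per texture axis with glTexParameter (GL_REPEAT is the default):

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);   /* tile in u  */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);    /* clamp in v */
    /* GL_CLAMP_TO_BORDER (with GL_TEXTURE_BORDER_COLOR) gives the border behavior */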
97. Sampling
- In general, the size of the texture image and the size of the projected surface are not the same
- If the size of the projected surface is larger than the size of the texture image then the image will need to be blown up to fit on the surface
- This process is called Magnification
- If the size of the projected surface is smaller than the size of the texture image then the image will need to be shrunk to fit on the surface
- This process is called Minification
98. Magnification
- Recall that there are more pixels than texels
- Thus, we need to sample the texels for each pixel to determine the pixel's texture color
- That is, there is no 1-1 correlation
- There are 2 main ways to sample texels
- Nearest neighbor
- Bi-linear interpolation
99. Magnification
- Nearest neighbor sampling simply picks the texel closest to the projected pixel
- Bi-linear interpolation samples the 4 texels closest to the projected pixel and linearly interpolates their values in both the horizontal and vertical directions
100. Magnification
- Nearest neighbor can give a crisper feel when little magnification is occurring, but bi-linear is usually the safer choice
- Bi-linear also takes 4 times as long
- Also see p. 130, Real-Time Rendering
101. Minification
- Recall that there are more texels than pixels
- Thus, we need to integrate the colors from many texels to form a pixel's texture color
- However, integration of all the associated texels is nearly impossible in real-time
- We need to use a sampling technique
102. Minification
- 2 common sampling techniques are
- Nearest neighbor: samples the texel value at the center of the group of associated texels
- Bi-linear interpolation: samples 4 texel values in the group of associated texels and bi-linearly interpolates them
- Note that the sampling techniques are the same as in Magnification, but the results are quite different
103. Minification
- For nearest neighbor, severe aliasing artifacts can be seen
- They are even more noticeable as the surface moves with respect to the viewer
- Temporal aliasing
- See NeHe Lesson 7 ('f' to cycle through the filtering modes, page up/down to go forward and back)
- See p. 132, Real-Time Rendering
104. Minification
- Bi-linear interpolation is only slightly better than nearest neighbor for minification
- When more than 4 texels need to be integrated together, this filter shows the same aliasing artifacts as nearest neighbor
- See NeHe Lesson 7 (second filter in the cycle)
105. Mipmaps
- "Mip" stands for multum in parvo, which is Latin for "many things in a small place"
- The basic idea is to improve minification by providing down-sampled versions of the original texture image (a pyramid of texture images)
106. Mipmaps
- When minification would normally occur, instead use the mipmap image that most closely matches the size of the projected surface
- If the projected surface falls in between mipmap images
- Use nearest neighbor to pick the mipmap image closest to the projected surface size
- Or use linear interpolation to combine values from the 2 closest mipmap images
107. Sampling in OpenGL
- OpenGL allows you to select a Magnification filter from
- Nearest or Linear
- OpenGL allows you to select a Minification filter from
- Nearest or Linear (without mipmaps)
- Nearest or Linear texel sampling with nearest or linear mipmap selection (4 distinct choices)
- (Bi-)linear texel sampling with linear mipmap selection is often called tri-linear filtering
- See NeHe Lesson 7 (adjust choice in code)
108. Blending the Texture Value
- Once a sampled texture value has been obtained, we need to blend it with the computed color value
- There are 3 main ways to perform blending
- Replace: replace the computed color with the texture color, effectively removing all lighting
- Decal: like replace, but transparent parts of the texture are blended with the underlying computed color
- Modulate: multiply the texture color by the computed color, producing a shaded and textured surface
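- The blending mode is picked with the texture environment setting, for example:

    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);   /* or GL_REPLACE, GL_DECAL */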
109. Blending Restrictions
- The main problem with this simple form of texture map blending is that we can only blend with the final computed color
- Thus, the texture will dim both the diffuse and specular terms, which can look unnatural
- A dark object may still have a bright highlight
- If the diffuse and specular components could be interpolated across the pixels independently, then we could blend the texture with just the diffuse
- This is not part of the Classic Rendering Pipeline, but several vendors have tried to add implementations
110. Texture Set Management
- Each graphics card can handle a certain number of textures in memory at once
- Even though memory in 3D cards has increased dramatically recently, the general rule of thumb is that you never have enough texture memory
- The card usually has a built-in strategy, like LRU, to manage the working set
- OpenGL allows you to set priorities on the textures to enable you to adjust this process
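- A sketch of nudging that working-set management for the currently bound texture (0.0 is low priority, 1.0 is high):

    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_PRIORITY, 1.0f);   /* ask the driver to keep this texture resident if possible */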
111. Viewport Mapping
- This is the final transformation that occurs in the Classic Rendering Pipeline
- The viewport transformation simply maps the generated 2D image to a portion of the 2D window used to display it
- By default the entire window is used
- This is useful if you want several views of a scene in the same window
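- In OpenGL the mapping is set with glViewport; for example, to render into the lower-left quarter of an 800x600 window:

    glViewport(0, 0, 400, 300);   /* x, y of the lower-left corner, then width and height in pixels */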