Title: Visible Surface Detection
1 UBI 516 Advanced Computer Graphics
- Visible Surface Detection
- Aydin Öztürk, ozturk_at_ube.ege.edu.tr, http://www.ube.ege.edu.tr/ozturk
2 Review: Rendering Pipeline
- Almost finished with the rendering pipeline
- Modeling transformations
- Viewing transformations
- Projection transformations
- Clipping
- Scan conversion
- We now know everything about how to draw a
polygon on the screen, except visible surface
detection.
3 Invisible Primitives
- Why might a polygon be invisible?
- Polygon outside the field of view
- Polygon is backfacing
- Polygon is occluded by object(s) nearer the viewpoint
- For efficiency reasons, we want to avoid spending work on polygons outside the field of view or backfacing
- For efficiency and correctness reasons, we need to know when polygons are occluded
4 View Frustum Clipping
- Remove polygons entirely outside frustum
- Note that this includes polygons behind the eye (actually behind the near plane)
- Pass through polygons entirely inside the frustum
- Modify remaining polygons to pass through the portions intersecting the view frustum
5 View Frustum Clipping
- Canonical View Volumes
- Remember how we defined cameras
- Eye point, lookat point, v-up
- Orthographic or perspective
- Remember how we define the viewport
- Width, height (or field of view, aspect ratio)
- These two things define the rendered volume of space
- Standardize the height, length, and width of view volumes
6 View Frustum Clipping
7 Review: Rendering Pipeline
- Clipping equations are simplified
- Perspective and orthographic (parallel) projections have consistent representations
8 Perspective Viewing Transformation
- Remember the viewing transformation for perspective projection (a small sketch follows below)
- Translate the eye point to the origin
- Rotate so that the projection vector matches the z axis
- Rotate so that the up vector matches the y axis
- Add to this a final step where we scale the volume
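- A minimal C sketch of these steps (translate, then rotate); the vec3 type and helper functions are illustrative assumptions, not course code, and the final scaling to the canonical volume is omitted:

    #include <math.h>

    typedef struct { double x, y, z; } vec3;

    static vec3 v3(double x, double y, double z) { vec3 r = { x, y, z }; return r; }
    static vec3 sub(vec3 a, vec3 b)   { return v3(a.x - b.x, a.y - b.y, a.z - b.z); }
    static double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static vec3 cross(vec3 a, vec3 b) { return v3(a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x); }
    static vec3 normalize(vec3 a)     { double l = sqrt(dot(a, a)); return v3(a.x / l, a.y / l, a.z / l); }

    /* Build the viewing matrix: after translating the eye to the origin,
     * rotate so the view direction lies along the z axis and up along y. */
    void buildViewMatrix(vec3 eye, vec3 lookat, vec3 vup, double M[4][4])
    {
        vec3 n = normalize(sub(eye, lookat));   /* viewing z axis               */
        vec3 u = normalize(cross(vup, n));      /* viewing x axis               */
        vec3 v = cross(n, u);                   /* viewing y axis               */
        double R[4][4] = {                      /* rotation rows u, v, n plus   */
            { u.x, u.y, u.z, -dot(u, eye) },    /* the translation that moves   */
            { v.x, v.y, v.z, -dot(v, eye) },    /* the eye point to the origin  */
            { n.x, n.y, n.z, -dot(n, eye) },
            { 0.0, 0.0, 0.0,  1.0         }
        };
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                M[i][j] = R[i][j];
    }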
9 Canonical Perspective Volume
10 Clipping
- Because both camera types are represented by the same viewing volume
- Clipping is simplified even further
11 Visible Surface Detection
- There are many algorithms developed for visible surface detection
- Some methods involve more processing time.
- Some methods require more memory.
- Some others apply only to special types of objects.
12 Classification of Visible-Surface Detection Algorithms
- They are classified according to whether they deal with object definitions or with their projected images:
- Object-space methods
- Image-space methods
- Most visible-surface algorithms use the image-space method.
13 Back-Face Detection
- Most objects in the scene are typically solid
14 Back-Face Detection (cont.)
- Polygons whose surface normals point away from the camera are always occluded
15 Back-Face Detection
[Figure: a polygon surface with normal vector N = (A, B, C) in viewing coordinates (xv, yv, zv), together with the viewing vector V.]
- This test is based on the inside-outside test: a point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, D if
  Ax + By + Cz + D < 0
- We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C).
- If V is a vector in the viewing direction from the eye, then this polygon is a back face if V · N > 0 (a small code sketch follows below).
- If the object descriptions have been converted to projection coordinates and the viewing direction is parallel to the zv axis, then V = (0, 0, Vz) and V · N = Vz C, so we only need to consider the sign of C.
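- A small C sketch of this test; the Vec3 type and function names are illustrative:

    typedef struct { double x, y, z; } Vec3;

    /* N = (A, B, C) is the polygon's plane normal, V the viewing direction.
     * The polygon is a back face if V . N > 0.                               */
    int isBackFace(Vec3 N, Vec3 V)
    {
        double d = V.x * N.x + V.y * N.y + V.z * N.z;   /* V . N */
        return d > 0.0;
    }

    /* When the viewing direction is parallel to the zv axis, V = (0, 0, Vz),
     * so V . N = Vz * C and only the sign of C matters; for example, with the
     * viewer looking along -zv, faces with C < 0 point away from the viewer. */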
16 Depth-Buffer (z-Buffer) Method
- This method compares surface depths at each pixel position on the projection plane.
- Each surface is processed separately, one point at a time across the surface.
- Surface S1 is closest to the view plane, so its surface intensity value at (x, y) is saved.
[Figure: three surfaces S1, S2, S3 overlapping the pixel position (x, y), seen along the zv axis of the viewing coordinates (xv, yv, zv).]
17 Steps for Depth-Buffer (z-Buffer) Method (Cont.)
- Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y):
- depth(x, y) = 0
- refresh(x, y) = Ibackground
18 Steps for Depth-Buffer (z-Buffer) Method (Cont.)
- For each position on each polygon surface, compare depth values to previously stored values in the depth buffer to determine visibility (a code sketch follows below).
- Calculate the depth z for each (x, y) position on the polygon.
- If z > depth(x, y), then
- depth(x, y) = z
- refresh(x, y) = Isurf(x, y)
- where Ibackground is the value for the background intensity and Isurf(x, y) is the projected intensity value for the surface at (x, y).
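- A small C sketch of these two steps, following the slide's conventions (depth initialized to 0, larger z means closer); the buffer sizes, pixel type, and function names are assumptions made for this example:

    #define WIDTH  640
    #define HEIGHT 480

    static double       depthBuf[HEIGHT][WIDTH];   /* depth buffer   */
    static unsigned int frameBuf[HEIGHT][WIDTH];   /* refresh buffer */

    /* Step 1: depth(x, y) = 0 and refresh(x, y) = Ibackground everywhere. */
    void clearBuffers(unsigned int backgroundColor)
    {
        for (int y = 0; y < HEIGHT; y++)
            for (int x = 0; x < WIDTH; x++) {
                depthBuf[y][x] = 0.0;
                frameBuf[y][x] = backgroundColor;
            }
    }

    /* Step 2: called for every (x, y) sample produced while scan converting
     * a surface, with z computed from the surface's plane equation.         */
    void plotIfVisible(int x, int y, double z, unsigned int surfaceColor)
    {
        if (z > depthBuf[y][x]) {          /* closer than what is stored */
            depthBuf[y][x] = z;            /* depth(x, y) = z            */
            frameBuf[y][x] = surfaceColor; /* refresh(x, y) = Isurf(x,y) */
        }
    }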
19 Depth-Buffer (z-Buffer) Calculations
- Depth values for a surface position (x, y) are calculated from the plane equation (an incremental code sketch follows below):
  z = (-Ax - By - D) / C
- z-value for the next horizontal position (x + 1, y):
  z' = z - A/C
- z-value down the edge, starting at the top vertex (the left-edge intersection moves to x - 1/m on the next scan line down):
  z' = z + (A/m + B) / C
[Figure: scan lines from the top scan line to the bottom scan line; the left-edge intersection steps from x to x - 1/m as y decreases to y - 1, and from x to x + 1 along a scan line.]
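- A short C sketch of the incremental evaluation, assuming the plane coefficients A, B, C, D are known and C != 0 (the names are illustrative):

    typedef struct { double A, B, C, D; } Plane;

    /* Depth at an arbitrary position (x, y) from A x + B y + C z + D = 0. */
    double depthAt(Plane p, double x, double y)
    {
        return (-p.A * x - p.B * y - p.D) / p.C;
    }

    /* Walk one scan line: after the first pixel, each step in x costs only
     * one addition, since z(x + 1, y) = z(x, y) - A / C.                   */
    void depthAcrossSpan(Plane p, int xStart, int xEnd, double y, double *zOut)
    {
        double z    = depthAt(p, xStart, y);
        double dzdx = -p.A / p.C;
        for (int x = xStart; x <= xEnd; x++) {
            zOut[x - xStart] = z;
            z += dzdx;
        }
    }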
20 Scan-Line Method
[Figure: two overlapping polygon surfaces S1 and S2 with vertices labelled A through H in the (xv, yv) plane, crossed by Scan Line 1, Scan Line 2, and Scan Line 3.]
21 Depth-Sorting Algorithm (Painter's Algorithm)
- This method performs the following basic functions:
- Surfaces are sorted in order of decreasing depth.
- Surfaces are scan converted in order, starting with the surface of greatest depth.
22 Depth-Sorting Algorithm (Painter's Algorithm)
- Simple approach: render the polygons from back to front, painting over previous polygons (a small sketch follows below)
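- A minimal C sketch of this approach: sort by decreasing depth, then draw back to front. The Surface type, its maxDepth field, and the drawSurface callback are assumptions for this example, and the overlap tests needed for a fully correct depth sort are omitted:

    #include <stdlib.h>

    typedef struct {
        double maxDepth;          /* greatest (farthest) depth of the surface */
        /* ... vertex and intensity data ... */
    } Surface;

    static int byDecreasingDepth(const void *a, const void *b)
    {
        double da = ((const Surface *)a)->maxDepth;
        double db = ((const Surface *)b)->maxDepth;
        return (da < db) - (da > db);          /* farthest surfaces first */
    }

    void painterRender(Surface *surf, size_t n, void (*drawSurface)(const Surface *))
    {
        qsort(surf, n, sizeof(Surface), byDecreasingDepth);
        for (size_t i = 0; i < n; i++)         /* back to front: later surfaces */
            drawSurface(&surf[i]);             /* paint over earlier ones       */
    }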
23 Depth-Sorting Algorithm (Painter's Algorithm)
24 Depth-Sorting Algorithm (Painter's Algorithm)
25 Painter's Algorithm Problems
- Intersecting polygons present a problem
- Even non-intersecting polygons can form a cycle
with no valid visibility order
26 Analytic Visibility Algorithms
- Early visibility algorithms computed the set of visible polygon fragments directly, then rendered the fragments to a display
- Now known as analytic visibility algorithms
27 Analytic Visibility Algorithms
- What is the minimum worst-case cost of computing the fragments for a scene composed of n polygons?
- Answer: O(n²)
28 Analytic Visibility Algorithms
- So, for about a decade (late 60s to late 70s) there was intense interest in finding efficient algorithms for hidden surface removal
- We'll talk about two
- Binary Space-Partition (BSP) Trees
29 Binary Space Partition Trees (1979)
- BSP trees organize all of space (hence partition) into a binary tree
- Preprocess: overlay a binary tree on objects in the scene
- Runtime: correctly traversing this tree enumerates objects from back to front
- Idea: divide space recursively into half-spaces by choosing splitting planes
- Splitting planes can be arbitrarily oriented
30 BSP Trees: Objects
31 BSP Trees: Objects
32 BSP Trees: Objects
33 BSP Trees: Objects
34 BSP Trees: Objects
35 Rendering BSP Trees
  renderBSP(BSPtree T)
      BSPtree near, far
      if (eye is on the left side of T->plane)
          near = T->left;  far = T->right
      else
          near = T->right; far = T->left
      renderBSP(far)                  // draw the far half-space first
      if (T is a leaf node)
          renderObject(T)
      renderBSP(near)                 // draw the near half-space last
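- A more concrete C rendition of the same traversal; the BSPNode layout and the eyeInFrontOf helper are assumptions made for illustration, and renderObject is a placeholder for drawing a leaf's geometry:

    #include <stddef.h>

    typedef struct { double A, B, C, D; } Plane;

    typedef struct BSPNode {
        Plane           plane;      /* splitting plane of this node         */
        struct BSPNode *left;       /* subtree on the plane's positive side */
        struct BSPNode *right;      /* subtree on the plane's negative side */
        void           *objects;    /* geometry stored at a leaf            */
    } BSPNode;

    static int eyeInFrontOf(Plane p, double ex, double ey, double ez)
    {
        return p.A * ex + p.B * ey + p.C * ez + p.D > 0.0;
    }

    static void renderObject(void *objects) { (void)objects; /* draw the leaf's geometry */ }

    void renderBSP(const BSPNode *T, double ex, double ey, double ez)
    {
        if (T == NULL) return;
        const BSPNode *nearSide, *farSide;
        if (eyeInFrontOf(T->plane, ex, ey, ez)) { nearSide = T->left;  farSide = T->right; }
        else                                    { nearSide = T->right; farSide = T->left;  }

        renderBSP(farSide, ex, ey, ez);              /* far half-space first   */
        if (T->left == NULL && T->right == NULL)     /* leaf: draw its objects */
            renderObject(T->objects);
        renderBSP(nearSide, ex, ey, ez);             /* near half-space last   */
    }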
36 Rendering BSP Trees
37 Polygons: BSP Tree Construction
- Split along the plane containing any polygon
- Classify all polygons into the positive or negative half-space of the plane
- If a polygon intersects the plane, split it into two
- Recurse down the negative half-space
- Recurse down the positive half-space (a construction sketch follows below)
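- A C construction sketch following the same steps; Polygon is left opaque, and classifyPolygon / splitPolygon are assumed helpers, since robust polygon-plane splitting is beyond a slide-sized example:

    #include <stdlib.h>

    typedef struct Polygon Polygon;
    typedef struct PolygonList { Polygon *poly; struct PolygonList *next; } PolygonList;

    typedef struct BSPTreeNode {
        Polygon            *splitter;   /* polygon whose plane splits this node */
        struct BSPTreeNode *front;      /* positive half-space                  */
        struct BSPTreeNode *back;       /* negative half-space                  */
    } BSPTreeNode;

    /* Assumed helpers: classify a polygon against the splitter's plane and
     * split a spanning polygon into a front piece and a back piece.        */
    enum Side { SIDE_FRONT, SIDE_BACK, SIDE_SPANNING };
    enum Side classifyPolygon(const Polygon *p, const Polygon *splitter);
    void splitPolygon(const Polygon *p, const Polygon *splitter,
                      Polygon **frontPart, Polygon **backPart);

    static PolygonList *push(PolygonList *list, Polygon *p)
    {
        PolygonList *n = malloc(sizeof *n);
        n->poly = p;  n->next = list;
        return n;
    }

    BSPTreeNode *buildBSP(PolygonList *polys)
    {
        if (polys == NULL) return NULL;

        BSPTreeNode *node = calloc(1, sizeof *node);
        node->splitter = polys->poly;               /* split along any polygon's plane */

        PolygonList *frontList = NULL, *backList = NULL;
        for (PolygonList *it = polys->next; it != NULL; it = it->next) {
            switch (classifyPolygon(it->poly, node->splitter)) {
            case SIDE_FRONT:  frontList = push(frontList, it->poly); break;
            case SIDE_BACK:   backList  = push(backList,  it->poly); break;
            case SIDE_SPANNING: {                   /* intersects the plane: split in two */
                Polygon *f, *b;
                splitPolygon(it->poly, node->splitter, &f, &b);
                frontList = push(frontList, f);
                backList  = push(backList,  b);
                break;
            }
            }
        }
        node->front = buildBSP(frontList);          /* recurse down both half-spaces */
        node->back  = buildBSP(backList);
        return node;
    }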
38 Notes About BSP Trees
- No bunnies were harmed in our example
- But what if a splitting plane passes through an object?
- Split the object: give half to each node. Ouch!
39 BSP Demo
- Nice demo
- http://symbolcraft.com/graphics/bsp/
40 Summary: BSP Trees
- Advantages
- Simple, elegant scheme
- Only writes to the framebuffer (i.e., painter's algorithm)
- Thus very popular for video games (but getting less so)
- Disadvantages
- Computationally intense preprocess stage restricts the algorithm to static scenes
- Worst-case time to construct the tree: O(n³)
- Splitting increases polygon count
- Again, O(n³) worst case
41 UBI 516 Advanced Computer Graphics
- OpenGL Visibility Detection Functions
42 OpenGL Backface Culling
- glEnable( GL_CULL_FACE );
- glCullFace( mode );   // mode: GL_BACK, GL_FRONT, or GL_FRONT_AND_BACK
- glDisable( GL_CULL_FACE );
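- A typical usage fragment (assumes an initialized GL context; glFrontFace is shown only to make the winding convention explicit, GL_CCW being the default):

    glEnable(GL_CULL_FACE);
    glFrontFace(GL_CCW);     /* counter-clockwise polygons are front-facing */
    glCullFace(GL_BACK);     /* discard back-facing polygons                */
    /* ... draw the scene ... */
    glDisable(GL_CULL_FACE);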
43 OpenGL Depth Buffer Functions
- Set the display mode: glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH );
- Clear the screen and depth buffer every time in the display function: glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
- Enable/disable the depth-buffer test: glEnable( GL_DEPTH_TEST );  glDisable( GL_DEPTH_TEST );
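- A minimal classic-GLUT program putting these calls together; drawScene() is a placeholder for the application's geometry:

    #include <GL/glut.h>

    void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  /* reset color and depth */
        /* drawScene(); */                                   /* application geometry  */
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);  /* request a depth buffer */
        glutInitWindowSize(640, 480);
        glutCreateWindow("z-buffer example");
        glEnable(GL_DEPTH_TEST);                 /* enable the depth (visibility) test */
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }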
44 OpenGL Depth-Cueing Function
- We can vary the brightness of an object with depth:
  glEnable( GL_FOG );
  glFogi( GL_FOG_MODE, mode );   // mode: GL_LINEAR, GL_EXP, or GL_EXP2
  . . .
  glDisable( GL_FOG );
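- A small usage fragment for linear depth cueing (assumes an initialized GL context; the fog color and start/end distances are arbitrary example values):

    GLfloat fogColor[4] = { 0.5f, 0.5f, 0.5f, 1.0f };

    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_LINEAR);      /* blend linearly between start and end */
    glFogfv(GL_FOG_COLOR, fogColor);
    glFogf(GL_FOG_START, 2.0f);          /* distances in eye coordinates */
    glFogf(GL_FOG_END,   10.0f);
    /* ... draw the scene ... */
    glDisable(GL_FOG);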