Rendering and Rasterization - Presentation Transcript

1
Rendering and Rasterization
  • Lars M. Bishop (lars@essentialmath.com)

2
Rendering Overview
  • Owing to time pressures, we will only cover the
    basic concepts of rendering
  • Designed to be a high-level overview to help you
    understand other references
  • Details of each stage may be found in the
    references at the end of the section
  • No OpenGL/D3D specifics will be given

3
Rendering: The Last Step
  • The final goal is drawing to the screen
  • Drawing involves creating a digital image of the
    projected scene
  • Involves assigning colors to every pixel on the
    screen
  • This may be done by:
    • Dedicated hardware (console, modern PC)
    • Hand-optimized software (cell phone, old PC)

4
Destination: The Framebuffer
  • We store the rendered image in the framebuffer
  • 2D digital image (grid of pixels)
  • Generally RGB(A)
  • The graphics hardware reads these color values
    out to the screen and displays them to the user

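For concreteness, a minimal C++ sketch of such a framebuffer (the Pixel
and Framebuffer names are illustrative, not from any particular API):

    #include <cstdint>
    #include <vector>

    // A minimal RGBA framebuffer: a width x height grid of pixels.
    struct Pixel { uint8_t r, g, b, a; };

    struct Framebuffer {
        int width, height;
        std::vector<Pixel> pixels;  // row-major: pixel (x, y) at y * width + x

        Framebuffer(int w, int h) : width(w), height(h), pixels(w * h) {}

        // Assign a color to one pixel; the display hardware later reads
        // these values out to the screen.
        void setPixel(int x, int y, Pixel color) {
            pixels[y * width + x] = color;
        }
    };
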
5
Top-level Steps
  • We have a few steps remaining:
    • Tessellation: How do we represent the surfaces of
      objects?
    • Shading: How do we assign colors to points on
      these surfaces?
    • Rasterization: How do we color pixels based on
      the shaded geometry?

6
Representing Surfaces: Points
  • The geometry pipeline leaves us with:
    • Points in 2D screen (pixel) space (xs, ys)
    • Per-point depth value (zndc)
  • We could render these points directly by coloring
    the pixel containing each point
  • If we draw enough of these points, maybe we could
    represent objects with them
  • But that could take millions of points (or more)

7
Representing Surfaces: Line Segments
  • Could connect pairs of points with line segments
  • Draw lines in the framebuffer, just like a 2D
    drafting program
  • This is called wireframe rendering
  • Useful in engineering and app debugging
  • Not very realistic
  • But the idea is good: use sets of vertices to
    define higher-level primitives

8
Representing Surfaces: Triangles
  • We saw earlier that three points define a
    triangle
  • Triangles are planar surfaces, bounded by three
    edges
  • In real-time 3D, we use triangles to join
    together discrete points into surfaces

9
A Simple Cube
Cube represented by 8 points (trust us)
Cube represented by triangles (defined by the same 8 points)
10
The Power of Triangles
  • Flexible
    • We can approximate a wide range of surfaces using
      sets of triangles
  • Tunable
    • A smooth surface can be approximated by more or
      fewer triangles
    • Allows us to trade off speed and accuracy
  • Simple but Effective
    • We already know how to transform and project the
      three vertices that define a triangle

11
Vertices: Heavy Points
  • As we will see, we often need to store additional
    data at each distinct point that defines our
    triangles
  • We call these data structures vertices
  • Vertices provide a way of storing the added
    values we need to compute the color of the
    surface while rendering

12
Common Per-vertex Values
  • Position
    • The points we've been transforming
  • Color
    • The color of the surface at/near the point
  • Normal vector
    • A vector perpendicular to the surface at/near the
      point
  • Texture coordinates
    • A value used to apply a digital image (akin to a
      decal on a plastic model) to the surface

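For concreteness, a C++ sketch of a vertex holding these values (field
names are illustrative; real layouts vary by engine and API):

    #include <cstdint>

    struct Vector3 { float x, y, z; };

    // One vertex: a "heavy point" carrying the data needed for shading.
    struct Vertex {
        Vector3 position;   // the point we've been transforming
        Vector3 normal;     // perpendicular to the surface at/near the point
        uint8_t color[4];   // RGBA surface color
        float   u, v;       // texture coordinates
    };
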
13
Efficient Rendering of Triangle Sets
  • Indexed geometry allows vertices to be shared by
    several triangles
  • Uses an array of indices into the array of
    vertices
  • Each set of three indices determines a triangle
  • Uses far fewer vertices than (3 x Tris)

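A minimal C++ sketch of drawing an indexed triangle list (drawTriangle
stands in for the actual rasterizer and is hypothetical):

    #include <cstddef>
    #include <vector>

    struct Vertex { float x, y, z; };
    void drawTriangle(const Vertex&, const Vertex&, const Vertex&);  // hypothetical

    // Every 3 indices select one triangle's vertices from the shared
    // vertex array, so a vertex used by several triangles is stored once.
    void drawIndexed(const std::vector<Vertex>& verts,
                     const std::vector<unsigned>& indices) {
        for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
            drawTriangle(verts[indices[i]],
                         verts[indices[i + 1]],
                         verts[indices[i + 2]]);
        }
    }
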
14
Indexed Geometry Example
[Figure: a fan of six triangles; the exploded view uses 18 individual
vertices, the shared view only 7 vertices]
Index list for shared vertices:
(0,1,2), (0,2,3), (0,3,4), (0,4,5), (0,5,6), (0,6,1)
15
Even More Efficient
  • Triangle Strips (tristrips)
  • Every vertex after the first two defines a
    triangle (i-2, i-1, i)
  • Tristrips use much shorter index lists
    • (2 + Tris) versus (3 x Tris)
  • You may not quite get this stated efficiency
  • You may have to include degenerate triangles to
    fit some topologies
  • For example, try to build our earlier 6-tri fan

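A sketch of expanding a tristrip's index list into triangles (same
illustrative C++ helpers as before); note the winding flip on every
other triangle so all faces keep a consistent orientation:

    #include <cstddef>
    #include <vector>

    struct Vertex { float x, y, z; };
    void drawTriangle(const Vertex&, const Vertex&, const Vertex&);  // hypothetical

    // Vertices (i-2, i-1, i) form a triangle for every i >= 2, so a
    // strip of N triangles needs only N + 2 vertices.
    void drawStrip(const std::vector<Vertex>& v) {
        for (std::size_t i = 2; i < v.size(); ++i) {
            if ((i - 2) % 2 == 0)
                drawTriangle(v[i - 2], v[i - 1], v[i]);
            else
                drawTriangle(v[i - 1], v[i - 2], v[i]);  // flip winding
        }
    }
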
16
Tristrip Example
[Figure: a zig-zag strip of six triangles over vertices 0 through 7]
Index list for shared vertices: (0, 1, 2, 3, 4, 5, 6, 7)
17
Triangle Set Implementation
  • Most 3D hardware is optimized for tristrips
  • However, transformed vertices are often cached
    in a limited-size cache
  • Indexed geometry is not perfect: using a
    vertex in two triangles may not gain much
    performance if the vertex is kicked out of the
    cache between them
  • Lots of papers available from the hardware
    vendors on how to optimize for their caches

18
Geometry Representation Summary
  • Generally, we draw triangles
  • Triangles are defined by three screen-space
    vertices
  • Vertices include additional information required
    to store or compute colors
  • Indexed primitives allow us to represent
    triangles more efficiently

19
Shading - Assigning Colors
  • The next step is known as shading, and involves
    assigning colors to any and all points on the
    surface of each triangle
  • There are several common shading techniques
  • We'll present them from least complex (and
    expensive) to most complex

20
Triangle Shading Methods
  • Per-triangle
    • Called Flat shading
    • Looks faceted
  • Per-vertex
    • Called Smooth or Gouraud shading
    • Looks smooth, but still low-detail
  • Per-pixel
    • Image-based texturing is one example
    • Programmable shaders are more general

21
Flat Shading
  • Uses colors defined per-triangle directly as the
    color for the entire triangle

22
Gouraud (Smooth) Shading
  • Gouraud shading defines a smooth interpolation of
    the colors at the three vertices of a triangle
  • The colors at the vertices (C0,C1,C2) define an
    affine mapping from barycentric coordinates (s,t)
    on a triangle to a color at that point

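In code, that affine mapping is just a weighted sum of the three vertex
colors; a C++ sketch, with (s, t) the barycentric coordinates and
(1 - s - t) weighting vertex 0:

    struct Color { float r, g, b; };

    // Gouraud shading: affine interpolation of vertex colors over the
    // triangle, evaluated at barycentric coordinates (s, t).
    Color gouraudShade(Color c0, Color c1, Color c2, float s, float t) {
        float w0 = 1.0f - s - t;
        return { w0 * c0.r + s * c1.r + t * c2.r,
                 w0 * c0.g + s * c1.g + t * c2.g,
                 w0 * c0.b + s * c1.b + t * c2.b };
    }
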
23
Gouraud Shading
24
Generating Source Colors
  • Both Flat and Gouraud shading require source
    colors to interpolate
  • There are several common sources:
    • Artist-supplied colors (modeling package)
    • Dynamic lighting

25
Dynamic Lighting
  • Dynamic lighting assigns colors on a per-frame
    basis by computing an approximation of the light
    incident upon the point to be colored
  • Uses the vertex position, normal, and some
    per-object material color information
  • Dynamic lighting is detailed in most basic
    rendering texts, including ours

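As one common example of such an approximation (a Lambertian diffuse
term; the slide does not name a specific model), the incident light
scales with the cosine of the angle between the surface normal and the
direction to the light:

    #include <algorithm>

    struct Vec3 { float x, y, z; };

    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Diffuse lighting factor: n is the unit surface normal, l the unit
    // direction from the point to the light. The vertex color would be
    // the material color scaled by this factor and the light's color.
    float diffuse(Vec3 n, Vec3 l) {
        return std::max(0.0f, dot(n, l));
    }
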
26
Image-based Texturing
  • Extremely powerful shading method
  • Unlike Flat and Gouraud shading, texturing allows
    for sub-triangle, sub-vertex detail
  • Per-vertex values specify how to map an image
    onto the surface
  • Visual result is as if a digital image were
    pasted onto the surface

27
Image-based Texturing
  • Per-vertex texture coordinates (or UVs) define an
    affine mapping from barycentric coords to a point
    in R2.
  • Resulting point (u,v) is a coordinate in a
    texture image

[Figure: a square texture image; the U axis runs horizontally from
(0,0) to (1,0), the V axis vertically from (0,0) to (0,1)]
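A sketch of turning an interpolated (u, v) into a texel fetch, assuming
coordinates in [0, 1] and nearest-neighbor sampling (real systems also
offer filtering and wrap modes):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Texel { uint8_t r, g, b, a; };

    struct Texture {
        int width, height;
        std::vector<Texel> texels;  // row-major image data

        // Nearest-neighbor sample: scale (u, v) in [0, 1] to texel indices.
        Texel sample(float u, float v) const {
            int x = std::min(int(u * width),  width  - 1);
            int y = std::min(int(v * height), height - 1);
            return texels[y * width + x];
        }
    };
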
28
Texture Applied to Surface
  • The vertices of this cylinder include UVs that
    wrap the texture around it

[Figure: the texture wrapped around a cylinder, with labels V = 0.0 and
V = 1.0 marking the ends of the V range]
29
Rasterization: Coloring the Pixels
  • The final step in rendering is called
    rasterization, and involves:
    • Determining which parts of a triangle are visible
      (Visible Surface Determination)
    • Determining which screen pixels are covered by
      each triangle
    • Computing the color at each pixel
    • Writing the pixel color to the framebuffer

30
Computing Per-pixel Colors
  • Even though rasterization is almost universally
    done in hardware today, it involves some
    interesting and instructive mathematical concepts
  • As a result, we'll cover this topic in greater
    detail
  • Specifically, we'll look at perspective
    projections and texturing

31
Conceptual Rasterization Order
  • Conceptually, we draw triangles to the
    framebuffer by rendering adjacent pixels in
    screen space one after the other
  • Rasterization draws pixel (x, y), then (x+1, y),
    then (x+2, y), etc. across each horizontal span
    covered by a triangle
  • Then, it draws the next horizontal span

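A sketch of that conceptual loop (illustrative C++; the per-span bounds
would come from stepping along the triangle's edges, which is not shown):

    void shadeAndWritePixel(int x, int y);  // hypothetical per-pixel work

    // Walk each horizontal span covered by the triangle, coloring
    // adjacent pixels (x, y), (x+1, y), ... before moving down a row.
    void rasterizeSpans(int yTop, int yBottom,
                        const int* xLeft, const int* xRight) {
        for (int y = yTop; y <= yBottom; ++y) {
            for (int x = xLeft[y]; x <= xRight[y]; ++x) {
                shadeAndWritePixel(x, y);
            }
        }
    }
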
32
Stepping in Screen Space
  • If we have a fast way to compute the color of a
    triangle at pixel (x+1, y) from the color of pixel
    (x, y), we can draw a triangle quite quickly
  • The same is true for computing texture
    coordinates
  • Affine mappings are perfect for this

33
Affine Mappings on Triangles
  • Gouraud shading defines an affine mapping from
    world-space points in a triangle to colors
  • Texturing defines an affine mapping from
    world-space points on a triangle to texture
    coordinates (UVs)
  • Note, however, these are mappings from world
    space to colors and UVs

34
Affine in Screen Space
  • For the moment, assume that we have an affine
    mapping from screen space (xs, ys) to color
  • Note that the depth value is ignored

35
Forward Differences
  • Note the difference in colors between a pixel and
    the pixel directly to the right: for an affine
    color mapping C(x, y), the difference
    C(x+1, y) - C(x, y) is a constant, independent of
    the pixel position

Wow, that's nice!
36
Forward Differencing
  • This simple difference leads to a useful trick
  • Given the color C(x0, y0) for a base pixel in a
    triangle, the colors of the other pixels are
    C(x0 + i, y0 + j) = C(x0, y0) + i*dCx + j*dCy,
    where dCx and dCy are the constant per-pixel
    differences
  • To step from pixel to adjacent pixel (i+1 and/or
    j+1) is just an addition!
  • This is called forward differencing

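In code, forward differencing turns per-pixel shading into one addition
per pixel; a sketch for a single span and a single color channel, with
the starting color and the per-pixel delta dCx assumed precomputed from
the triangle's vertex colors:

    void writePixel(int x, int y, float c);  // hypothetical framebuffer write

    // Shade one horizontal span by forward differencing: instead of
    // re-evaluating the affine color mapping at each pixel, add the
    // constant difference dCx when stepping one pixel to the right.
    void shadeSpan(float c, float dCx, int xLeft, int xRight, int y) {
        for (int x = xLeft; x <= xRight; ++x) {
            writePixel(x, y, c);
            c += dCx;
        }
    }
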
37
Perspective
  • Perspective projection is by far the most popular
    projection method in games
  • It looks the most like reality to us
  • Perspective rendering does involve some
    interesting math and surprising visual results

38
Affine Mapping of Texture UVs
  • Over the next few slides, we'll apply a texture
    to a pair of triangles
  • We'll use screen-space affine mappings to compute
    the texture coordinates for each pixel
  • First, let's look at a special case: a pair of
    triangles parallel to the view plane

39
Polygon Parallel to View: Affine Interpolation
Textured view: CORRECT
Wire-frame view
40
Looking good so far
  • That worked correctly; maybe we can get away
    with using affine mappings (and thus forward
    differencing) for all of our texturing
  • Not so fast: next, we'll tilt the top of the
    square away from the camera

41
Polygon Tilted in Depth: Affine Interpolation
Textured view: WRONG!
Wire-frame view
42
What Happened?
  • We were interpolating using an affine mapping in
    (2D) screen space.
  • Perspective doesn't preserve affine maps
  • Let's look at the inverse mapping: namely, we'll
    derive the mapping from world space to screen
    space
  • If this mapping isn't affine, then the inverse
    (screen to triangle) can't be affine, either

43
Example: Projecting a Line
  • We'll project a 2D line segment (y, z) into 1D
    using perspective

Projection of any point (y, z) (not necessarily on the line) to 1D:
ys = d*y / z, where d is the distance to the view plane
Line in world space: L(t) = P + t*D = (Py + t*Dy, Pz + t*Dz)
44
Projecting a Line Segment
[Figure: the line P + t*D in the (y, z) plane, projected through the
origin onto the view plane at distance d along the z-axis]
45
Special Case: Parallel to View
[Figure: the line's direction D = (Dy, 0) is parallel to the view
plane, so depth z is constant along the line]
Affine when projected - that's why this case worked
46
General Case
[Figure: the line's direction D has a nonzero Dz component, so depth
varies along the line]
Not affine when projected - that's why this case was wrong
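A short check of both claims, writing d for the view-plane distance,
P for the line's start point, and D for its direction:

    y_s(t) = \frac{d\,(P_y + t D_y)}{P_z + t D_z}

If D_z = 0, this reduces to y_s(t) = (d / P_z)(P_y + t D_y), which is
affine in t. If D_z is nonzero, y_s(t) is a ratio of two affine
functions of t (a projective map), so it is not affine.
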
47
Correct Interpolation
  • The perspective projection breaks our nice,
    simple affine mapping
  • Affine in world space becomes projective in
    screen space
  • Per-vertex depth matters
  • The correct mapping from screen space to color or
    UVs is projective

48
Perspective-correct Stepping
  • The per-pixel method for stepping our projective
    mapping in screen space is:
    • Affine forward diff to step the numerator
    • Affine forward diff to step the denominator
    • Division to compute the final value
  • This requires an expensive per-pixel division, or
    even several divisions

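A sketch of that method for texture coordinates (illustrative C++):
u/z, v/z, and 1/z are all affine in screen space, so each steps by a
constant delta, and one division per pixel recovers the true (u, v):

    void writeTexturedPixel(int x, int y, float u, float v);  // hypothetical

    // Perspective-correct UV stepping across one span. uOverZ, vOverZ,
    // and invZ step by affine forward differences; the per-pixel divide
    // undoes the projection to recover the true texture coordinates.
    void textureSpan(float uOverZ, float vOverZ, float invZ,
                     float duOverZ, float dvOverZ, float dInvZ,
                     int xLeft, int xRight, int y) {
        for (int x = xLeft; x <= xRight; ++x) {
            float z = 1.0f / invZ;   // the expensive per-pixel division
            writeTexturedPixel(x, y, uOverZ * z, vOverZ * z);
            uOverZ += duOverZ;
            vOverZ += dvOverZ;
            invZ   += dInvZ;
        }
    }
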
49
Polygon Tilted in Depth: Correct Perspective
Textured view: CORRECT!
Wire-frame view
50
Colors versus Texture UVs
  • Textures need to be perspective-correct because
    the detailed features in most textures make
    errors obvious
  • This is why the example images that we have shown
    involve texturing
  • Gouraud-shaded colors are more gradual and can
    often get by without perspective correction

51
Cheating Faster Perspective
  • SW and HW systems have optimized perspective
    correction by approximation:
    • Subdivide tris or scanlines into smaller bits and
      use affine stepping (very popular)
    • Use a quadratic function to approximate the
      projective interpolation
    • Reorder pixel rendering to render pixels of
      constant depth together (very rare)

52
Cheating - Artifacts
  • Some tricks are better than others, and some
    break down in extreme cases
  • Look at some early PS1 games, and you'll see
    plenty of interesting and different
    perspective-correction artifacts
  • As with most approximations, you tend to trade
    speed for accuracy

53
References
  • Eberly, David H., 3D Game Engine Design, Morgan
    Kaufmann Publishers, San Francisco, 2001.
  • Hecker, Chris, "Behind the Screen: Perspective
    Texture Mapping" (series), Game Developer
    Magazine, Miller Freeman, 1995-1996.

54
References
  • Van Verth, James, and Bishop, Lars, Essential
    Mathematics for Games and Interactive
    Applications: A Programmer's Guide, Morgan
    Kaufmann Publishers, San Francisco, 2004.
  • Woo, Mason, et al., OpenGL Programming Guide: The
    Official Guide to Learning OpenGL,
    Addison-Wesley, 1999.