Transcript and Presenter's Notes

Title: WEEK 4 : 3D Computer Graphics part 3


1
WEEK 4 3D Computer Graphics (part 3)
2
Texture mapping
  • A technique (Catmull, 1974) used to achieve
    realism by applying texture maps to surfaces
  • Texture maps can be taken from photographs
    scanned into a computer, or simply created by
    paint/picture editing programs
  • For example, to make a virtual bookcase look more
    realistic, a photograph of wood grain can be
    scanned in and mapped onto the 3D object/polygons

3
  • Matching the scale of the texture to the size of
    the bookcase is crucial; for example, if the
    texture map is insufficient to cover the entire
    polygon then it can be repeated to ensure
    coverage
  • In this example, the texture map is projected
    onto each flat polygon
  • However, different projection techniques are
    required for objects that have curved surfaces:
    cylindrical projection, spherical projection, etc.
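
A minimal sketch (not from the slides) of how such projections
can be computed: a point on a cylinder or a unit sphere is turned
into (u, v) texture coordinates, which then index a texel. The
coordinate conventions and names below are illustrative
assumptions.

    import math

    def cylindrical_uv(x, y, z):
        """Map a point on a cylinder (axis = z) to (u, v) in [0, 1)."""
        u = (math.atan2(y, x) + math.pi) / (2.0 * math.pi)  # angle around the axis
        v = z                                               # height along the axis, assumed 0..1
        return u, v

    def spherical_uv(x, y, z):
        """Map a point on a unit sphere to (u, v) in [0, 1]."""
        u = (math.atan2(y, x) + math.pi) / (2.0 * math.pi)  # longitude
        v = math.acos(max(-1.0, min(1.0, z))) / math.pi     # latitude, measured from the +z pole
        return u, v

    # The (u, v) pair then selects a texel from the texture image:
    # texel = texture[int(v * (height - 1))][int(u * (width - 1))]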

4
  • It is considered an effective way of introducing
    extra detail, BUT problems may appear if the
    object moves farther away
  • For example, if the object's size reduces by a
    factor of 5, the original texture is far too
    detailed, and if its size reduces by a factor of
    10, then another level of texture detail is needed
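
Keeping the texture at several pre-filtered levels of detail and
picking one from the on-screen scale factor is commonly done with
mip-mapping; the slides do not name a specific method, so the
sketch below is only one plausible way to choose a level.

    import math

    def choose_texture_level(scale_factor, num_levels):
        """Pick a pre-filtered texture level: level 0 is the full-detail
        map and each following level halves the resolution.
        scale_factor is roughly how many texels fall on one screen pixel."""
        if scale_factor <= 1.0:
            return 0                              # object is close: use full detail
        level = int(math.log2(scale_factor))      # shrink by 2 -> level 1, by 4 -> level 2, ...
        return min(level, num_levels - 1)

    # An object reduced by a factor of 5 would use level 2 of the
    # pre-filtered set, and one reduced by a factor of 10 would use level 3.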

5
Dynamic textures
  • Are a sequence of texture maps which are applied
    to a surface in quick succession
  • Are used to simulate special effects: flames,
    explosions, smoke trails, etc.
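
A minimal sketch of the idea: the renderer simply applies a
different map from the sequence on each rendered frame. The frame
rate and the names used here are assumptions.

    def dynamic_texture_frame(frames, time, frames_per_second=24):
        """Pick which texture map of the sequence to apply at a given time."""
        index = int(time * frames_per_second) % len(frames)
        return frames[index]

    # Calling dynamic_texture_frame(explosion_frames, t) each frame plays
    # the pre-recorded explosion back across the surface.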

6
Bump mapping
  • Developed by Jim Blinn, 1978
  • Another important technique for increasing image
    realism
  • It uses a texture map to modulate the way light
    is reflected, pixel by pixel
  • For example: orange peel, leather, stones, etc.
  • This technique does not alter the geometry of the
    surface, but only its appearance, by manipulating
    the way light is reflected from the surface
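
A minimal sketch of the idea in tangent space, assuming a
grey-scale height map: the surface normal is tilted by the local
slope of the map before lighting, while the geometry is left
untouched. This is a simplification for illustration, not Blinn's
exact formulation.

    def perturb_normal(height_map, u, v, strength=1.0):
        """Tilt the (tangent-space) normal using finite differences of a
        grey-scale height map; the surface geometry itself is unchanged."""
        h, w = len(height_map), len(height_map[0])
        x, y = int(u * (w - 1)), int(v * (h - 1))
        # local slope of the height map in the u and v directions
        du = height_map[y][min(x + 1, w - 1)] - height_map[y][x]
        dv = height_map[min(y + 1, h - 1)][x] - height_map[y][x]
        n = (-strength * du, -strength * dv, 1.0)          # perturbed normal
        length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        return (n[0] / length, n[1] / length, n[2] / length)

    # The returned normal replaces the true surface normal in the lighting
    # calculation, so light appears to catch tiny bumps such as orange peel.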

7
Environment mapping
  • Was introduced by Blinn and Newell in 1976
  • Used to simulate the effect of polished surfaces
    that reflect their surroundings
  • It is similar to texture mapping, BUT in this
    technique the image is not fixed to the object's
    surface: the image moves whenever the observer
    moves, to create the impression of a reflection

8
  • It is very useful in the display of car bodies,
    where it shows how reflective highlights travel
    over the surface of a moving car
  • It is a very useful technique for obtaining a
    realistic image, and the effect becomes more
    obvious when the particular object is animated
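
A minimal sketch, assuming a latitude/longitude environment image
and unit-length vectors: the view direction is reflected about the
surface normal, and the reflected direction indexes the environment
map, so the lookup changes as the observer moves.

    import math

    def reflect(incident, normal):
        """Reflect a direction about a unit-length surface normal:
        R = I - 2(N.I)N."""
        d = sum(i * n for i, n in zip(incident, normal))
        return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

    def environment_lookup(env_image, view_dir, normal):
        """Turn the reflected direction into (u, v) coordinates of a
        latitude/longitude environment image and return that texel."""
        rx, ry, rz = reflect(view_dir, normal)
        u = (math.atan2(ry, rx) + math.pi) / (2.0 * math.pi)
        v = math.acos(max(-1.0, min(1.0, rz))) / math.pi
        h, w = len(env_image), len(env_image[0])
        return env_image[int(v * (h - 1))][int(u * (w - 1))]

    # Because the lookup depends on the view direction, the reflected image
    # slides over the surface as the observer (or the car body) moves.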

9
Displacement mapping
  • Introduced by Cook in 1984
  • This technique uses a reference image to
    physically displace the surface features of an
    object
  • It is a very effective way of introducing extra
    surface geometry, without explicit modeling
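
A minimal sketch, assuming per-vertex normals and texture
coordinates and a grey-scale reference image: each vertex is
pushed along its normal by the sampled height, so the geometry
really changes.

    def displace_vertices(vertices, normals, height_map, uvs, scale=0.1):
        """Move each vertex along its normal by the value sampled from a
        reference height image, creating real surface relief."""
        h, w = len(height_map), len(height_map[0])
        displaced = []
        for (px, py, pz), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
            d = scale * height_map[int(v * (h - 1))][int(u * (w - 1))]
            displaced.append((px + d * nx, py + d * ny, pz + d * nz))
        return displaced

    # Unlike bump mapping, the silhouette and the shadows of the object
    # change, because the surface has genuinely been pushed outwards.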

10
Shadows
  • In CG, virtual shadows also require certain
    levels of computation
  • Shadows are very important in order for us to
    interpret the position and the orientation of
    objects
  • Various techniques have been developed to compute
    shadows and each technique requires a geometric
    analysis of the spatial relationship between
    light sources and the objects in a scene

11
  • Ray tracing is a good way of creating shadows
  • The shadow (from Fig 3.44) is too good, and rarely
    exists like this in the real world, as it
    requires a point light source
  • Light sources such as the Sun, light bulbs, and
    fluorescent tubes have an area and bathe an object
    in light energy from different directions, which
    gives rise to an umbra and a penumbra

12
  • Umbra: the central part of the shadow that
    receives no illumination
  • Penumbra: the part of the shadow that is partially
    illuminated
  • The overall effect gives rise to a soft shadow, as
    sketched below
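
A minimal sketch of why an area light produces an umbra and a
penumbra: several shadow rays are sent to random points on the
light, and the fraction that get through sets the shadow softness.
The rectangular light description and the is_blocked() visibility
test are assumptions used only for illustration.

    import random

    def soft_shadow(point, light_corner, light_u, light_v, is_blocked, samples=16):
        """Estimate how much of a rectangular area light a surface point can
        see.  is_blocked(point, light_point) stands in for the scene's shadow
        ray test.  Returns 0.0 in the umbra, 1.0 when fully lit, and values
        in between across the penumbra."""
        visible = 0
        for _ in range(samples):
            # pick a random point on the rectangular light source
            s, t = random.random(), random.random()
            light_point = tuple(c + s * u + t * v
                                for c, u, v in zip(light_corner, light_u, light_v))
            if not is_blocked(point, light_point):
                visible += 1
        return visible / samples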

13
Radiosity
  • Is a global illumination model that produces
    photo-realistic images
  • It is achieved by simulating the internal
    reflections that arise when an interior is
    illuminated by light sources; these reflections
    are represented as a series of equations that are
    solved to find a common solution

14
  • Progressive refinement is another way of solving
    the radiosity model
  • It begins by looking at the brightest source of
    light and distributes its energy throughout the
    model; it then selects the next brightest source
    and repeats the algorithm until changes in the
    image are too small to notice
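
A minimal sketch of that loop, under the assumption that each
patch records its radiosity, unshot energy, reflectance and area,
and that form_factor() supplies the geometric coupling between two
patches (none of these names come from the slides).

    def progressive_radiosity(patches, form_factor, threshold=0.001):
        """Repeatedly pick the patch with the most unshot energy and
        distribute ('shoot') it to every other patch, stopping once the
        remaining energy would change the image too little to notice."""
        while True:
            shooter = max(patches, key=lambda p: p.unshot * p.area)
            if shooter.unshot * shooter.area < threshold:
                break
            for p in patches:
                if p is shooter:
                    continue
                received = p.reflectance * shooter.unshot * form_factor(shooter, p)
                p.radiosity += received
                p.unshot += received
            shooter.unshot = 0.0       # all of this patch's energy has now been shot
        return patches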

15
Ray tracing
  • Is used to simulate the behavior of light energy
    in the real world
  • For example, the Sun emits energy in the form
    of photons that are absorbed by surfaces and
    reflected back into space and eventually into our
    eyes
  • The photons carry information about intensity and
    color that can be used to create an image
  • Various laws have been discovered that describe
    the actions of light emission, reflection and
    refraction

16
  • Fig 4.1 shows how the paths of photons can be
    traced to form an image
  • The scene shows the side view of a virtual camera
    recording the image projected onto a screen
    formed from pixels; the pixels are considered to
    be so small that only one ray can pass through
    each
  • This virtual world consists of a Sun, a blue box,
    a red sphere and a background color

17
  • A ray tracing program only considers a single
    pixel at a time, and traces the spatial origins
    of the photons that influence that pixel
  • In this example three things can happen: the ray
    could hit the cube or sphere, it could hit the
    Sun, or it could miss everything

18
Fig 4.1 A side view of rays A, B and C being traced
through 3 pixels (the scene contains the Sun, the
background color, a blue box, a red sphere and the
virtual camera)
19
  • For example, ray A misses everything and will be
    assigned some background color
  • Ray B intersects the blue box, and by using the
    geometric laws of reflection we can discover where
    the reflected photons came from: the ray consists
    of photons coming from the Sun, striking the blue
    box and reflecting through the middle pixel, so
    the pixel's color will be the color of the Sun

20
  • Ray C consists of a ray coming from the
    direction of the camera, through a pixel and
    striking the red sphere, BUT what it picks up is a
    ray reflected from the blue box, so the pixel's
    color is set to blue
  • The resulting image will contain reflections of
    the Sun in the sphere and box, reflections of the
    box in the sphere (and vice versa), and the
    background color
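
A minimal sketch of the per-pixel tracing just described, covering
the three cases (miss, hit the Sun, hit an object and follow its
reflection). The scene.intersect() routine, the sun object and the
50/50 color mix are assumptions made only for illustration.

    def trace(ray_origin, ray_dir, scene, depth=0, max_depth=3):
        """Return the color seen along one ray.  scene.intersect() is
        assumed to return (hit_object, hit_point, normal) or None."""
        hit = scene.intersect(ray_origin, ray_dir)
        if hit is None:
            return scene.background_color            # ray A: misses everything
        obj, point, normal = hit
        if obj is scene.sun:
            return scene.sun.color                   # ray B: photons come from the Sun
        # ray C: follow the reflected ray to see what this surface mirrors
        if depth < max_depth:
            d = sum(r * n for r, n in zip(ray_dir, normal))
            reflected = tuple(r - 2.0 * d * n for r, n in zip(ray_dir, normal))
            return mix(obj.color, trace(point, reflected, scene, depth + 1), 0.5)
        return obj.color

    def mix(a, b, t):
        """Blend two RGB colors."""
        return tuple((1 - t) * x + t * y for x, y in zip(a, b))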

21
  • Ray tracing programs can be made very small, as
    their only task is to determine the color of one
    pixel at a time, BUT this can be time consuming
  • However, modern implementations of this algorithm
    are very efficient and can be used to create
    wonderful photo-realistic images

22
  • As well as reflections, ray tracing also reveals
    the casting of shadows, transparency and
    refraction through water and glass
  • Ray tracing can also be used to simulate the
    optical characteristics of a lens; normally,
    renderers just assume that light rays pass through
    a mathematical pinhole before being used to
    render a perspective view

23
Shaders
  • Are the plug-ins for a renderer
  • Basically, they are programs that are called
    during the rendering process to perform
    specific tasks
  • For example, a shader can be used to create a
    brick pattern over a polygon; the pattern is
    regular, so it is relatively easy to describe a
    logical procedure that spaces rectangles in a
    brick fashion, leaving room for the mortar, as
    shown in Fig 4.2 (a sketch of such a procedure
    follows the figure)

24
Fig 4.2 A brick pattern created by a procedure
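
A minimal sketch of such a brick procedure, written here in Python
rather than a shading language; the brick size, mortar width and
colors are illustrative parameters like those the slides mention.

    def brick_color(u, v, brick_w=0.25, brick_h=0.1, mortar=0.02,
                    brick_rgb=(0.6, 0.2, 0.15), mortar_rgb=(0.8, 0.8, 0.8)):
        """Return the color at texture coordinate (u, v) of a procedural
        brick pattern: rows of bricks with every other row offset by half
        a brick, separated by mortar gaps."""
        row = int(v / brick_h)
        if row % 2 == 1:
            u += brick_w / 2.0        # offset alternate rows of bricks
        # position inside the current brick-plus-mortar cell
        cell_u = u % brick_w
        cell_v = v % brick_h
        in_mortar = cell_u < mortar or cell_v < mortar
        return mortar_rgb if in_mortar else brick_rgb

    # Because it is a procedure, it can color a surface of any size
    # without ever running out of texture, unlike a scanned photograph.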
25
  • The shader will be given the horizontal and
    vertical dimensions of a brick, the spacing for
    the mortar, and the texture to be used
  • Because it is a procedure, it can be used to
    decorate any surface with this pattern, no matter
    how large the surface is
  • Shaders present a powerful way to decorate
    surfaces, and played a very important role in
    films such as Antz and A Bug's Life

26
  • Shaders are used to create all sorts of effects
    such as marble, sea states, clouds, mist, smoke,
    fire, waves, bumps, cracks, dirt, stones, etc
  • Darwyn Peachey (Ebert, 1998) lists the advantages
    and disadvantages of shaders (procedural textures)

27
Advantages
  • A procedural representation is extremely compact:
    the size of a procedural texture is usually
    measured in kilobytes, while the size of a texture
    image is usually measured in megabytes. This is
    especially true for solid textures, since 3D
    texture maps are extremely large

28
  • A procedural representation has no fixed
    resolution. It can provide a fully detailed
    texture no matter how closely you look at it
  • A procedural representation covers no fixed area:
    it is unlimited in extent and can cover an
    arbitrarily large area without seams and unwanted
    repetition of the texture pattern

29
  • A procedural texture can be parameterized, so it
    can generate a class of related textures rather
    than being limited to one fixed texture image

30
Disadvantages
  • A procedural texture can be difficult to build
    and debug: programming is often hard, and
    programming an implicit pattern description is
    especially hard in non-trivial cases
  • A procedural texture can be a surprise: it is
    often easier to predict the outcome when you
    scan/paint a texture image, and sometimes
    procedural textures are hard to control

31
  • Evaluating a procedural texture can be slower
    than accessing a stored texture image. This is
    the tradeoff between time and space
  • Aliasing can be a problem in procedural textures.
    Anti-aliasing can be tricky and is less likely to
    be taken care of automatically than it is in
    image-based texturing

32
The RenderMan Shading Language
  • Shaders can be defined using the RenderMan
    Shading Language, which was developed to simplify
    their design. Another alternative is to use BMRT
    (Blue Moon Rendering Tools)
  • It provides an interface where a shader is
    specified in a C-like computer language; it
    supports six types of shader: light source,
    volume, transformation, displacement, surface and
    image

33
  • A light source shader computes the color of the
    light originating in the light source and
    striking a surface point
  • A volume shader computes the effects of light
    passing through a volume of space from an origin
    to a destination
  • A transformation shader is used to modify
    geometry rather than affect surface color. It
    uses a point in space to determine a new point

34
  • A displacement shader is used to perturb the
    surface of an object point by point, creating
    surface detail without any new geometry
  • A surface shader computes how light interacts
    with a surface, and how it is finally reflected
  • An image shader is used to convert the numbers
    describing a pixel-based image into another
    description

35
    /* glow() - a shader for providing a centered glow in a sphere */
    surface glow(float attenuation = 2)
    {
        float falloff = I.N;  /* Direct incidence has cosine closer to 1. */
        if (falloff < 0) {
            /* Normalize falloff by lengths of I and N */
            falloff = falloff * falloff / (I.I * N.N);
            falloff = pow(falloff, attenuation);
            Ci = Cs * falloff;
            Oi = falloff;
        }
        else
            Oi = 0;
    }
  • Reproduced from The RenderMan Companion by Steve
    Upstill (Upstill, 1989)

36
Aliasing
  • Video images often contain artifacts that betray
    the raster nature of video technology; likewise,
    computer generated images often contain artifacts
    that betray their pixel structure
  • Such artifacts appear in the form of jagged edges
    or irregular patterns on moving textures and are
    called aliasing

37
  • Fig 4.3 shows how the jagged edges arise when a
    polygon is rendered using a simple renderer
  • Because pixels are sampled at their center to see
    if they are covered by the polygon, partially
    covered pixels may or may not be rendered
  • For example, if the polygon just covers the
    center, the pixel will be rendered, BUT if the
    polygon misses the center by the smallest
    distance, the pixel will not be rendered

38
  • In this case, it is impossible to partially fill
    a pixel; it must be assigned a single color, so
    edges and small polygons will always give rise to
    such aliasing artifacts
  • Techniques called anti-aliasing are used to
    reduce the visual impact of aliasing artifacts by
    using more sophisticated sampling methods

39
Anti-aliasing
  • Is used to compute the area of the pixel covered
    by a polygon
  • For example, Fig 4.4 shows a pixel covered
    completely by a bright red pentagon; if a bright
    green triangle overlaps the same pixel by 50%, an
    anti-aliasing algorithm would set the pixel to
    yellow (50% red and 50% green)
  • The resulting effect is to replace sudden changes
    from red pixels to green pixels with pixels
    containing different mixtures of red and green
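
A minimal sketch of that coverage-weighted blend: each polygon's
color is weighted by the fraction of the pixel it covers, and
whatever remains uncovered shows the background. The (color,
fraction) input format is an assumption for illustration.

    def pixel_color(coverages, background=(0.0, 0.0, 0.0)):
        """Blend polygon colors by the fraction of the pixel each covers.
        coverages is a list of (rgb, fraction) pairs whose fractions sum
        to at most 1; any uncovered remainder shows the background."""
        remaining = 1.0
        r = g = b = 0.0
        for (cr, cg, cb), frac in coverages:
            r += cr * frac
            g += cg * frac
            b += cb * frac
            remaining -= frac
        br, bg, bb = background
        return (r + br * remaining, g + bg * remaining, b + bb * remaining)

    # e.g. pixel_color([((1, 0, 0), 0.5), ((0, 1, 0), 0.5)]) gives
    # (0.5, 0.5, 0.0): the yellow mixture described for the red pentagon
    # and the green triangle in Fig 4.4.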

40
Fig 4.4 The color of a pixel is determined by the
percentage overlap of polygons