1
Computer Graphics
  • Lecture 10
  • 3D Computer Animation (2)
  • Lecturer: Heather Sayers
  • E-mail: hm.sayers@ulster.ac.uk
  • URL: http://www.infm.ulst.ac.uk/heather

2
Contents
  • Dynamics
  • Collision Detection
  • Broad Phase
  • Narrow Phase
  • Behavioural animation
  • Virtual Reality Modelling Language (VRML)

3
Dynamics
  • Dynamics (motion dynamics), is the detailed study
    and modelling of the motion of objects
  • This involves modelling the laws of physics,
    representing motion of objects in the real world
  • Consider a ball falling from a height and
    bouncing on a surface
  • Physicists have developed accurate mathematical
    formulae to describe and predict what happens in
    this situation

4
Dynamics
  • For example, knowing the effect of gravity and
    various properties of the ball and the surface,
    it is possible to predict how quickly the ball
    will fall as well as how high and how often it
    will bounce (and what direction the motion will
    take, and when the ball will come to rest etc)
  • A ball bouncing on a flat surface is not very
    difficult to predict, but a box bouncing down
    stairs is much more difficult (how far the box
    will bounce, what rotations are involved, which
    edge or corner of the box hits what stair etc)
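The simple falling-ball case can be sketched in a few lines of code. This is a minimal illustration (not from the slides): symplectic Euler integration under gravity, with a coefficient of restitution standing in for elasticity; the parameter values and names are assumptions chosen for the example.

```python
# Minimal sketch: a ball dropped from a height, bouncing with a
# coefficient of restitution E. Parameter values are illustrative.

G = 9.81       # gravitational acceleration (m/s^2)
E = 0.7        # coefficient of restitution (how "elastic" the bounce is)
DT = 0.001     # integration time step (s)

def simulate_bounces(height, duration):
    """Return the peak height reached before each floor contact."""
    y, vy = height, 0.0
    peaks, peak, t = [], 0.0, 0.0
    while t < duration:
        vy -= G * DT              # gravity accelerates the ball downward
        y += vy * DT
        peak = max(peak, y)
        if y <= 0.0 and vy < 0.0: # hit the floor moving down
            y = 0.0
            vy = -vy * E          # bounce: reverse and damp the velocity
            peaks.append(peak)
            peak = 0.0
        t += DT
    return peaks

peaks = simulate_bounces(height=10.0, duration=10.0)
# Each successive peak is roughly E**2 times the previous one.
```

This is exactly the kind of formula-driven prediction the slide describes: given gravity and the ball's elasticity, the heights and timing of every bounce follow.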

5
Dynamics
  • In setting up the animation, it is necessary to
    define the physical properties of the objects to
    be animated
  • One of the most basic properties is mass
  • Mass is related to the weight of the object: the
    weight of an object is the result of gravity
    pulling down on the mass of the object
  • On Earth, there is a one-to-one relationship
    between weight and mass; in outer space, the
    weight of an object changes (though the mass
    remains the same)

6
Dynamics
  • A massive object is hard to start moving, and
    difficult to stop once it starts moving
  • Two objects of the same size can have different
    masses because of the density of the materials
    from which they are made
  • A balsa wood ball is less massive than an iron
    ball, since balsa wood is less dense than iron
  • Size does play a role in determining mass, which
    is a function of density and volume (consider a
    small iron ball and a large iron ball)

7
Dynamics
  • Another important physical property of objects is
    elasticity
  • In motion dynamics, this term does not mean how
    easily the object can be stretched or deformed
    (like an elastic band)
  • Elasticity refers to how an object bounces: for
    example, a glass marble is very elastic, since it
    bounces very high when dropped on, say, a
    concrete floor

8
Dynamics
  • Elasticity refers to the amount of energy lost
    when an object makes contact with something else
  • Very elastic objects lose very little energy and
    the object bounces a lot; inelastic objects lose
    a lot of energy and the object bounces only a
    little
  • Most motion dynamic animation systems also
    require the definition of friction created by an
    object
  • Friction is a function of how smooth or rough an
    object is (sometimes friction is called
    roughness)
  • Static and kinetic friction are defined

9
Dynamics
  • Static friction prevents stationary objects from
    sliding down an inclined plane
  • If the static friction is very great, the object
    will not slide at all
  • Kinetic friction causes moving objects to come to
    a stop
  • These two types of friction can be defined
    independently of each other
  • For example...

10
Dynamics
  • If the static friction of a ball rolling on a
    surface is set to zero, the ball will slide,
    rather than roll along the surface
  • At the same time, if the kinetic friction is
    high, the ball will stop quickly; if lowered, the
    ball will come to a stop slowly
  • Other major participants in a motion dynamic
    system are gravity and wind
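The effect of kinetic friction above can be made concrete with a small worked example. This is an illustrative sketch (function and parameter names are assumptions): a sliding object decelerates at mu_k * g, so the time to stop is v / (mu_k * g).

```python
# Sketch: kinetic friction brings a sliding object to rest.
# A higher friction coefficient stops it sooner.

G = 9.81  # gravitational acceleration (m/s^2)

def time_to_stop(speed, mu_kinetic):
    """Time for a sliding object to stop: deceleration is mu_k * g."""
    return speed / (mu_kinetic * G)

fast_stop = time_to_stop(5.0, mu_kinetic=0.8)  # rough surface
slow_stop = time_to_stop(5.0, mu_kinetic=0.1)  # smooth surface
```

As the slide says: with high kinetic friction the ball stops quickly, with low friction it comes to rest slowly.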

11
Collision Detection
  • In a realistic animation system collision
    detection is an important feature
  • This can be a very expensive feature to
    implement, since the mathematical calculations
    required are lengthy
  • Many systems allow collision detection to be
    turned on or off (sometimes on a per object
    basis)
  • In a boundary-representation (b-rep) system
    objects are defined in terms of the geometry
    representing the boundary of the shape

12
Collision Detection
  • In such a b-rep system, implementing collision
    detection is very costly: consider two objects
  • For accurate collision detection, every surface
    of the animated object must be intersected with
    every surface of the other object (in every
    frame)
  • Many systems exploit bounding boxes or bounding
    spheres to simplify the geometry of objects and
    thereby accelerate the collision detection
  • The bounding spheres are intersected first to
    determine whether or not the objects are close to
    each other; then the true geometry can be used if
    necessary
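The bounding-sphere pre-test is cheap to state in code. A minimal sketch: two spheres overlap iff the distance between their centres is at most the sum of the radii; comparing squared distances avoids the square root.

```python
# Bounding-sphere pre-test: spheres overlap iff the centre distance
# is at most the sum of the radii (squared form avoids sqrt).

def spheres_overlap(c1, r1, c2, r2):
    dx, dy, dz = c1[0] - c2[0], c1[1] - c2[1], c1[2] - c2[2]
    dist_sq = dx * dx + dy * dy + dz * dz
    return dist_sq <= (r1 + r2) ** 2

# Only if this cheap test passes is the expensive surface-surface
# intersection against the true geometry attempted.
```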

13
Collision Detection
  • The standard approach is a broad phase followed
    by a narrow phase
  • Broad phase finds those pairs of objects that may
    collide; it tries to cull away pairs of moving
    objects that cannot possibly collide
  • Narrow phase determines the exact collision
    point, if any, among the remaining objects
  • Many broad phase strategies exist (temporal
    coherence, spatial coherence, bounding volumes);
    the most common is bounding volumes

14
Collision Detection
  • Time coherence: four-dimensional space-time
    bounding volumes are associated with an object
  • This is built by sweeping out the 3D, geometric
    bounding volume of the object over the lifetime
    of the moving object
  • This can be bounded by a 4D trapezoid for
    simplicity
  • The algorithm is based on calculating the
    earliest time that a collision can occur for any
    pair of objects

15
Time Coherence
  • No further collision tests are done until this
    time has been reached
  • This time is computed for all pairs of objects
  • Prior knowledge of the paths of objects is needed
    only suitable for off-line animation (not
    interactive)

16
Spatial Coherence
  • Spatial coherence: scene space is divided into
    unit cells; collisions are checked by examining
    each cell to see if it contains more than one
    object
  • Problem: the best choice of cell size; only
    suitable where all objects are of similar size
  • Alternative approaches are to use a non-uniform
    subdivision structure such as an octree
  • To check for collisions, the tree is descended,
    and only those regions containing more than one
    object are examined
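The uniform-cell scheme above can be sketched briefly. This is an illustrative implementation (names and the cell size are assumptions): objects are hashed into grid cells by position, and only cells holding more than one object yield candidate pairs for further checking.

```python
# Spatial-coherence broad phase: hash object centres into unit cells,
# then report pairs sharing a cell as collision candidates.
from collections import defaultdict

def candidate_pairs(objects, cell_size):
    """objects: dict name -> (x, y, z) centre position."""
    grid = defaultdict(list)
    for name, (x, y, z) in objects.items():
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[cell].append(name)
    pairs = set()
    for members in grid.values():          # only shared cells matter
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pairs.add(tuple(sorted((members[i], members[j]))))
    return pairs

objects = {"a": (0.5, 0.5, 0.5), "b": (0.6, 0.4, 0.5), "c": (5.0, 5.0, 5.0)}
pairs = candidate_pairs(objects, cell_size=1.0)
```

Note the cell-size problem from the slide: objects larger than a cell would need to be entered into every cell they touch, which this simple centre-hashing sketch does not do.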

17
Spatial Coherence
  • The octree eliminates pairs of objects which are
    distant from each other
  • Drawback: the octree must be rebuilt at every
    step, a heavy computational load

18
Bounding Volumes
  • Bounding volumes are most commonly used in broad
    phase collision detection
  • 3 types: spheres, AABBs (axis-aligned bounding
    boxes) and OBBs (oriented bounding boxes, whose
    orientation best suits the specific object)
  • Spheres are easy but may not be a close fit to
    the object, resulting in too many narrow phase
    tests being performed

19
Bounding Volumes
  • AABBs also result in low complexity and are
    usually more efficient than spheres, but they
    need updating as the object moves
  • OBBs can be precomputed and are defined with
    respect to the object
  • Bounding volumes can be further developed into
    bounding volume hierarchies; choices here depend
    on the type of tree used (binary, octree) and how
    to deal with updating after object movement
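The AABB overlap test mentioned above is one of the cheapest in graphics. A minimal sketch: two axis-aligned boxes overlap iff their intervals overlap on all three axes.

```python
# AABB overlap: the boxes intersect iff their extents overlap on
# every axis (separating-axis argument restricted to x, y, z).

def aabb_overlap(min1, max1, min2, max2):
    return all(min1[i] <= max2[i] and min2[i] <= max1[i] for i in range(3))
```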

20
Narrow Phase
  • This is applied where broad phase collision
    detection has not ruled out a collision
  • The technique involves checking the geometries of
    the two objects to determine whether or not a
    collision has taken place
  • Algorithm involves 3 tests

21
Narrow Phase
  • 1. All the vertices of object A are checked to
    see if they are contained by object B and
    vice-versa
  • 2. The edges of A are tested for penetration
    against the faces of B and vice versa
  • 3. The case of two identical polyhedra moving
    through each other with faces perfectly aligned
    is tested for; this is done by considering the
    centroid of each face of A and using the same
    test as for vertex inclusion
  • See diagrams in Watt, page 522
  • For interactive systems, collision detection is
    difficult due to the computational demands
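Test 1 above (vertex containment) can be sketched for the convex case. This is an assumed representation, not from Watt: each face of B is given as an (outward normal, point on face) pair, and a vertex of A is inside B if it lies on the inner side of every face plane.

```python
# Narrow-phase test 1 (convex case): a point is inside a convex
# polyhedron iff it is behind every outward-facing face plane.

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def point_inside(p, faces):
    """faces: list of (outward_normal, point_on_face) pairs."""
    for normal, on_face in faces:
        offset = (p[0] - on_face[0], p[1] - on_face[1], p[2] - on_face[2])
        if dot(normal, offset) > 0.0:  # in front of this plane: outside
            return False
    return True

# Unit cube centred at the origin, described by its six face planes.
cube = [((1, 0, 0), (0.5, 0, 0)), ((-1, 0, 0), (-0.5, 0, 0)),
        ((0, 1, 0), (0, 0.5, 0)), ((0, -1, 0), (0, -0.5, 0)),
        ((0, 0, 1), (0, 0, 0.5)), ((0, 0, -1), (0, 0, -0.5))]
```

Running this for every vertex of A against B (and vice versa), every frame, is what makes narrow-phase detection so computationally demanding.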

22
Behavioural Animation
  • Behavioural animation refers to the ability to
    model a system involving living creatures
    (humans, animals, fish, insects etc)
  • Such movement goes beyond the laws of physics,
    and therefore requires an additional model to be
    incorporated into an animation system
  • Typical elements include the notion of intentions
    and responses: the attribution of needs, desires
    and the means to satisfy them to an animated
    creature

23
Behavioural Animation
  • Such systems can include the modelling of
    creatures by artificial intelligence or expert
    system techniques
  • In some cases, distributed AI techniques are
    employed to model groups of creatures such as
    flocks of birds or shoals of fish
  • Issues here include how problems are formulated,
    described, allocated and synthesised among the
    group; how group members communicate and act
    coherently in making decisions; and how
    individual members can reason about their plans
    individually in a group context

24
Behavioural Animation
  • Group modelling involves the ability to avoid
    collision with nearby flockmates, match velocity,
    and the modelling of flock centring (attempting
    to stay close to flockmates)
  • Other issues include the ability of creatures to
    achieve a high-level goal ("Move from A to B",
    "Pick up the pen") and then decompose this goal
    into efficient movement while avoiding objects en
    route
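One of the group-modelling rules above (flock centring) can be sketched in a few lines. This is a hedged illustration in the style of Reynolds-style boids; the function name and the 0.1 steering gain are assumptions for the example.

```python
# Flock centring: each creature steers a fraction of the way toward
# the centroid of its flockmates (one of the three classic rules;
# collision avoidance and velocity matching are the others).

def centring_velocity(position, flockmates, gain=0.1):
    """2D velocity component steering toward the flock centroid."""
    n = len(flockmates)
    cx = sum(p[0] for p in flockmates) / n
    cy = sum(p[1] for p in flockmates) / n
    return ((cx - position[0]) * gain, (cy - position[1]) * gain)
```

Summing this with avoidance and velocity-matching terms per creature, per frame, yields the emergent flocking behaviour the slide describes.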

25
Behavioural Animation
  • Stimulus response animation (or event animation)
    can be important in this context, where creatures
    respond to some external stimulus in a natural
    way
  • For example, the survival instincts of fish when
    a shark comes into view

26
VRML
  • Virtual Reality Modelling Language
  • Language for describing 3D graphics scenes
  • geometry, materials, surface properties
  • lights
  • cameras
  • animation paths
  • No specification about rendering
  • Internet browser plug-ins are available; VRML97
    is an international standard (see the
    specification document on www.vrml.org)

27
VRML
  • All VRML97 files are ASCII text files; the first
    line must contain the header #VRML V2.0 utf8
  • A number of nodes then describe the scene in
    terms of objects (Shape nodes), cameras
    (Viewpoint nodes), lights (Light nodes) etc
  • Each node has a number of fields associated with
    it which typically contain a descriptor and some
    parameters
  • A number of example code snippets follow to
    illustrate this
  • VRML files have a .wrl extension (world)

28
VRML
  DirectionalLight {
    color 1 1 1
    direction 1 0 0
  }
  • This defines a directional light, which is
    assumed to be positioned at infinity, pointing in
    the direction specified above (the x-direction)
    and with a white colour (R=1, G=1, B=1)

29
VRML
  DEF CameraOne Viewpoint {
    position 7 7 -90
    orientation 0 1 0 3.14
  }
  • This defines a camera position (a viewpoint)
    which the user has called CameraOne (this could
    be any text string)
  • The position is in (x, y, z) Cartesian space
  • Orientation is defined by four parameters: the
    first three specify the axis of rotation, the
    fourth the angle (in radians) of rotation
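The effect of the four orientation parameters can be checked numerically. This sketch (plain Python, not VRML) applies Rodrigues' rotation formula to show that rotating the default view direction (0, 0, -1) by pi about the y-axis turns the camera to look down +z, as the CameraOne example intends.

```python
# Rodrigues' rotation formula: rotate vector v by `angle` radians
# about the unit `axis`:  v' = v*cos + (k x v)*sin + k*(k.v)*(1 - cos)
import math

def rotate(v, axis, angle):
    ax, ay, az = axis
    c, s = math.cos(angle), math.sin(angle)
    d = ax * v[0] + ay * v[1] + az * v[2]          # k . v
    cross = (ay * v[2] - az * v[1],                # k x v
             az * v[0] - ax * v[2],
             ax * v[1] - ay * v[0])
    return tuple(v[i] * c + cross[i] * s + axis[i] * d * (1 - c)
                 for i in range(3))

# Default view direction rotated 180 degrees about the y-axis.
turned = rotate((0.0, 0.0, -1.0), (0.0, 1.0, 0.0), math.pi)
```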

30
VRML
  • All nodes have default values; for example, in
    the Viewpoint node...
  • in the default position and orientation, the
    viewer is on the Z-axis looking down the -Z-axis
    toward the origin with X to the right and Y
    straight up
  • The code above specifies a rotation (relative to
    the default position and orientation) of 180
    degrees about the y-axis
  • This points the camera in the desired direction

31
VRML
  Shape {
    appearance Appearance {
      material Material {
        diffuseColor 0 0 1
      }
    }
    geometry Box { size 2 2 2 }
  }
  • A Shape node defines geometry; there are two
    sub-nodes specified above, an appearance and a
    geometry; each of these nodes then has a number
    of fields associated with it

32
VRML
  • Here we have a box (which is a built-in geometry
    in VRML97) which is scaled by a factor of 2 in x,
    y and z (the default size is one unit in x, y,
    and z)
  • Associated with the box is a blue material; the
    appearance node would also be used to specify
    shininess, transparency and a texture if required
  • Note that indentation is used to make the code
    more readable
  • Some nodes, such as Shape nodes, take other nodes
    as fields; others, such as Material, take a
    number of fields which have specific values

33
Geometry
  Transform {
    translation -55 75 -50
    children [
      Shape {
        appearance Appearance {
          material Material {
            diffuseColor 0 0 1
          }
        }
        geometry Box { size 2 2 2 }
      }
    ]
  }
  • Here we see that a Transform node is used to
    position the box in space

34
Texture
  Shape {
    appearance Appearance {
      texture ImageTexture {
        url "wood.jpg"
      }
    }
    geometry IndexedFaceSet {
      coord Coordinate {
        point [ -50 0 0, -50 0 -100, 50 0 -100, 50 0 0 ]
      }
      coordIndex [ 0, 1, 2, 3, -1 ]  # builds the face from the points
      texCoord TextureCoordinate {
        point [ 0 0, 0 1, 1 1, 1 0 ]
      }
    }
  }
  • Here we see a texture associated with a
    boundary-representation of an object...

35
VRML
  • the geometry is specified by an IndexedFaceSet
    node which contains four vertices in Cartesian
    space
  • This defines a plane with four vertices
  • The texture is an image contained in the file
    wood.jpg (note that this is a URL, and so could
    be stored on any accessible website, not just
    locally)
  • The TextureCoordinate node defines how the 2D
    texture map is positioned on the plane, vertex by
    vertex: a texture space (u,v) to Cartesian space
    (x,y,z) mapping

36
VRML
  • We shall now look at how VRML supports animation
    and interaction
  • Perhaps the most interesting aspect is the use of
    Sensor nodes, which provide support for
    event-driven animation (stimulus response
    animation)
  • Support can be classified into environmental
    sensors and pointing-device sensors
  • Each type of sensor defines when an event is
    generated

37
VRML
  • Environmental sensors include Collision,
    Proximity, Time and Visibility
  • The ProximitySensor detects when the user
    navigates into a specified region in the world
  • This can then cause an event to be generated
    which could, for example, cause a door to be
    automatically opened
  • The TimeSensor is a notional clock that has no
    geometry or location associated with it; it is
    used to start and stop time-based nodes such as
    interpolators (see later)

38
VRML
  • The VisibilitySensor detects when a specific part
    of the world becomes visible to the user
  • Again, an event can then be triggered, which
    could, for example, cause an audible message to
    be heard ("Warning: you are entering a restricted
    area")
  • The Collision grouping node detects when the user
    collides with objects in the virtual world, and
    can cause an event (Ouch!)

39
VRML
  • Pointing Device Sensors include Anchor, Cylinder,
    Plane, Sphere and Touch sensors
  • These detect user pointing events such as the
    user clicking on an object (TouchSensor) or the
    mouse moving over an object (see VRML spec for
    more details)
  • These sensors allow a high degree of interaction
    with the virtual environment; for example, the
    notion of teleporting is supported: when the user
    clicks on an object the user can be transported
    to another location in the environment (which
    could be in another VRML file on a different
    server, for example)

40
VRML
  • Sensor events can be propagated through the scene
    using ROUTE semantics
  • Another node which is used is the Interpolator
    node which is used to perform keyframe animation
    for smooth motion
  • A number of examples will illustrate these points

41
  DEF animatedObject Transform {
    translation 0 0 -20
    children [
      Shape {
        appearance Appearance {
          material Material {
            diffuseColor 1 0 0
          }
        }
        geometry Box { }
      }
      DEF ForwardClick TouchSensor { }
    ]
  }
  DEF ForwardTimeSource TimeSensor { cycleInterval 10.0 }
  DEF ForwardAnimate PositionInterpolator {
    key [ 0, 1 ]
    keyValue [ 0 0 -20, 0 0 -120 ]
  }

42
VRML
  • This example defines a red box positioned at
    (0,0,-20)
  • A TouchSensor is defined; this means that when
    the user clicks on the object an event is
    triggered; we will see later what happens
  • The TimeSensor specifies a cycleInterval of 10
    seconds; when triggered, this will enable some
    animation to occur for a period of 10 seconds
  • Finally, the PositionInterpolator is used to
    smoothly interpolate the position of the object
    over the lifetime of the animation (10 seconds)

43
VRML
  • The key field of the PositionInterpolator shows
    that there are two key values expected, one at
    the start of the animation (value 0) and one at
    the end (value 1); the values themselves are
    defined in the keyValue field
  • The keyValue field then specifies the position of
    the object at the start of the animation
    (0,0,-20), and at the end (0,0,-120)
  • When clicked, the object will move (smoothly)
    from (0,0,-20) to (0,0,-120) over 10 seconds
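What the PositionInterpolator computes each tick can be sketched outside VRML. This illustrative Python (the function name is an assumption) performs the same piecewise-linear interpolation: the TimeSensor supplies a fraction in [0, 1], and the interpolator maps it through the key/keyValue arrays of the example above.

```python
# Piecewise-linear keyframe interpolation, as a PositionInterpolator
# performs: map a fraction in [0, 1] through key/keyValue pairs.

def interpolate(fraction, keys, key_values):
    if fraction <= keys[0]:
        return key_values[0]
    for i in range(1, len(keys)):
        if fraction <= keys[i]:
            t = (fraction - keys[i - 1]) / (keys[i] - keys[i - 1])
            a, b = key_values[i - 1], key_values[i]
            return tuple(a[j] + (b[j] - a[j]) * t for j in range(len(a)))
    return key_values[-1]

# Data from the example: two keys, start and end positions.
keys = [0.0, 1.0]
key_values = [(0.0, 0.0, -20.0), (0.0, 0.0, -120.0)]
halfway = interpolate(0.5, keys, key_values)  # 5 s into the 10 s cycle
```

Five seconds in, the box sits at (0, 0, -70), exactly midway along its journey.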

44
  ROUTE ForwardClick.touchTime TO ForwardTimeSource.startTime
  ROUTE ForwardTimeSource.fraction_changed TO ForwardAnimate.set_fraction
  ROUTE ForwardAnimate.value_changed TO animatedObject.translation
  • ROUTEs are used to propagate the animation from
    node to node
  • In this way, when the object is clicked, the
    clock is started, this causes the position of the
    object to be interpolated on each clock tick,
    which in turn causes the object itself to move
  • Another example illustrates rotation.

45
  DEF Ball Transform {
    rotation 0 0 1 3.14
    children [
      Shape { geometry Sphere { } }
      DEF RotateClick TouchSensor { }
    ]
  }
  DEF RotateTimeSource TimeSensor { cycleInterval 100.0 }
  DEF RotateAnimate OrientationInterpolator {
    key [ 0, 0.25, 0.5, 0.75, 1.0 ]
    keyValue [ 0 0 1 0, 0 0 1 0.78, 0 0 1 1.57,
               0 0 1 2.355, 0 0 1 3.14 ]
  }
  ROUTE RotateClick.touchTime TO RotateTimeSource.startTime
  ROUTE RotateTimeSource.fraction_changed TO RotateAnimate.set_fraction
  ROUTE RotateAnimate.value_changed TO Ball.rotation

46
VRML
  • In this example we have a sphere with a
    TouchSensor, TimeSensor and
    OrientationInterpolator node
  • The OrientationInterpolator node specifies a
    rotation in radians about the axis specified; in
    this case the z-axis
  • Five key values are defined and the animation
    takes place over 100 seconds
  • So the ROUTEs mean that when clicked, the ball
    will spin 180 degrees about the z-axis over 100
    seconds

47
Summary
  • Motion Dynamics
  • mass, elasticity, friction
  • gravity, wind
  • Collision Detection
  • Broad phase
  • Narrow phase
  • Behavioural Animation
  • living creatures
  • intentions, responses
  • AI techniques
  • groups of creatures
  • event animation

48
VRML - Summary
  • Language for defining highly interactive 3D
    environments, suitable for Web use
  • All the usual graphics primitives are supported:
    geometry, lights, cameras, textures, materials
  • Also support for event-driven animation:
    environmental sensors and pointing device sensors
  • More details found by looking at the
    specification for VRML97 on www.vrml.org