Title: Implementing Avatars within Virtual Environments
1. Implementing Avatars within Virtual Environments
- Krist Norlander, LT, USNR
- MV4473 Virtual Worlds Simulation Systems
2. Making a Believable Scenario
- "...they need to sweat; without that, we are wasting our time."
- George Solhan on the VIRTE Program
3. Outline
- Definition: intent of an avatar
- Advantages: why we want avatars
- Limitations: why we have problems
- Implementation: how we make it work
- Applications: what is available
- Problems yet to solve!
4. Points of Reference
- Look back at these class topics
- HMD
- Gloves
- Motion tracking
- Haptic feedback
- Gesture recognition
- Locomotion
5. Avatar Definition
- Merriam-Webster Online Dictionary
- An incarnation in human form
- An embodiment (as of a concept or philosophy)
often in a person
- Virtual Reality Domain
- Symbolic representation of a real person within
a virtual world.
6. Avatar Intent
- Looks
- Moves
- Interacts...
- ...similar to a human
7. False Implementation
- Looks
- Moves
- Interacts
- ...not quite like a human
8. Surrealistic Avatars
- Chat rooms
- Multi-User Domains (MUD)
- Games
9. Advantages
- Humans naturally interact with humans
- Abnormal reaction to machines
- Provide familiarity to VE user
- Potential for improved proprioception
- User can project self into VE
10. Limitations
- Limited articulation of body segments
- No dynamic sizing to meet user dimensions
- Limited tracking of user articulation
- Difficult translation of articulation to avatar
- No dynamic representation of user appearance
- Limited synchronization of user expressions
- Limited tactile feedback from avatar to user
- Computationally expensive to render avatars
11. Major Players
- Humanoid Animation Working Group
- Part of Web3D Consortium
- H-Anim Standard
- Center for Human Modeling and Simulation
- University of Pennsylvania
- Dr. Norman I. Badler, Director
- Jack
- Boston Dynamics, Inc.
- DI-Guy
12. Tackling Body Tracking
- Divide the human body into movable segments
- Define rotation limits for each segment
- Track the user's segments
- Translate user movements into data
- Inject movement data into the avatar
- Standardization through the H-Anim 1.1 specification
h-anim.org/Specifications/H-Anim1.1
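The steps above can be sketched in code. This is a minimal, hypothetical example: the joint names echo H-Anim conventions, but the limit values and the pipeline shape are illustrative, not taken from the H-Anim 1.1 specification.

```python
# Hypothetical per-joint rotation limits (degrees). The names follow
# H-Anim-style segment naming; the numbers are illustrative only.
JOINT_LIMITS_DEG = {
    "l_elbow": (0.0, 150.0),
    "l_shoulder": (-60.0, 180.0),
    "l_knee": (0.0, 140.0),
}

def clamp_joint(joint: str, angle_deg: float) -> float:
    """Clamp a tracked joint angle to that segment's defined limits."""
    lo, hi = JOINT_LIMITS_DEG[joint]
    return max(lo, min(hi, angle_deg))

def inject(tracked: dict) -> dict:
    """Translate raw tracker angles into avatar joint angles."""
    return {j: clamp_joint(j, a) for j, a in tracked.items()}

# Raw tracker data can exceed human range; clamping keeps the avatar valid.
avatar_pose = inject({"l_elbow": 162.0, "l_knee": -5.0, "l_shoulder": 45.0})
```

Clamping at injection time is what keeps noisy or out-of-range tracker readings from bending the avatar past its defined segment limits.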
13. Body Segment Diagram
H-Anim 1.1
DI-Guy
14. Tracking the User
- Ascertain posture through inertial/magnetic sensors attached to the limbs
- Passive measurement of physical quantities directly related to motion and attitude (i.e., sourceless)
- Segments oriented independently
- Segments positioned relative to each other by adding rotated vectors
- Position data needed for a reference point to place the avatar in the VE
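The "adding rotated vectors" step amounts to simple forward kinematics: each segment contributes its length rotated by its sensed orientation, summed outward from the tracked reference point. A minimal 2-D sketch, with made-up segment lengths and angles:

```python
import math

def chain_positions(reference, segments):
    """Position a chain of segments by summing rotated vectors.

    reference: (x, y) from an optical/ultrasonic tracker.
    segments: list of (length_m, absolute_angle_rad) per body segment,
              each orientation sensed independently.
    """
    x, y = reference
    positions = [(x, y)]
    for length, angle in segments:
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

# Upper arm then forearm, both hanging straight down from a shoulder
# tracked at height 1.5 m (illustrative values):
pts = chain_positions((0.0, 1.5), [(0.30, -math.pi / 2), (0.25, -math.pi / 2)])
```

Note that only the reference point needs absolute position data; every other joint position falls out of segment lengths and orientations.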
15. NPS Prototype
- Wireless full-body tracking system based on inertial/magnetic orientation sensing
- MARG sensors are used to determine posture
- Reference point obtained through an optical or ultrasonic tracking system
16. User Representation
- Creating an avatar that resembles user
- Obtain user dimensions
- Limb segment lengths, widths, density, hardness,
etc.
- Obtain user attributes
- Colors, joint limits, behaviors, etc.
- Create avatar using available data
- Significant technology limitations exist!
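The measured data above could be collected into a single structure that drives avatar creation. This is a hypothetical container, not an existing API; field names and defaults are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AvatarSpec:
    """Illustrative bundle of user dimensions and attributes."""
    segment_lengths_m: dict                 # e.g. {"forearm": 0.25}
    joint_limits_deg: dict                  # e.g. {"elbow": (0, 150)}
    colors: dict = field(default_factory=dict)
    behaviors: list = field(default_factory=list)

# Partial data is the common case: many attributes are hard to capture,
# which is the "significant technology limitations" point above.
user = AvatarSpec(
    segment_lengths_m={"forearm": 0.25, "upper_arm": 0.30},
    joint_limits_deg={"elbow": (0, 150)},
)
```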
17. The Laser Scan
- Laser-triangulation method
- Capable of capturing textures
- Full scan in
- Hair/masking issues
18. Graphical Scan Output
19. Putting It Together
- Creation of avatars that:
- represent people in a believable way
- allow for user-controlled actions
- can be used for CGF
- provide feedback to the user
20. Issues/Difficulties
- Linking bipedal motion between user and avatar
- Linking avatar interaction with VE to user
- Solutions often use scripted gestures/actions
- Believability
- Animation LOD
- Not just polygon LOD
21. Examples
- FPS Gaming
- Unreal, Half-life, Quake, Rainbow Six
- Multi-User Domains
- Commercial Simulations
- Vega, BDI
- NPS Research
- Bachmann, Dutton, Storms, Norlander, VIRTE
22. Military Application
Team coordination
Building clearing
Initiative based tactics
Images taken from Rogue Spear from Red Storm,
Inc.
23. Highest Potential Use
- Jack
- Generic human model
- Articulation supports ergonomic data analysis
- DI-Guy
- Human simulation
- Scripted behaviors
- Support for
- DIS/HLA
- Input control
24. BDI DI-Guy
- What is DI-Guy?
- DI-Guy is software for adding life-like human
characters to simulated environments. Each
character moves realistically, responds to simple
high-level commands, and travels about the
environment as directed. DI-Guy characters make
seamless transitions from one activity to the
next, moving naturally like a real person. DI-Guy
is fully interactive, with all behavior occurring
in real time.
Quoted from BDI DI-Guy documentation
25. BDI DI-Guy (continued)
- What is the Design Goal of DI-Guy?
- DI-Guy is designed to simplify the task of adding life-like human characters to real-time interactive simulations. The goal is to allow users to concentrate on telling DI-Guy where to go and what to do, while freeing them from low-level details such as joint angle control, motion generation, graphics hierarchy management, model and texture creation, and animation.
- DI-Guy is designed to provide a set of integrated and encapsulated human figures that provide versatile and visually engaging behavior.
Quoted from BDI DI-Guy documentation
26. DI-Guy Characters
- What Characters Does DI-Guy Include?
- DI-Guy includes a whole family of characters and behavior. There are soldiers, flight deck crew, chem/bio characters, and ordinary men and women, and we are adding new characters all the time. Starting with the 4.0 release of DI-Guy, you can modify the appearance of characters to suit your needs. If we don't have the characters you need and you do not want to make them yourself, give Boston Dynamics a call! We can create new characters to your specifications.
Quoted from BDI DI-Guy documentation
27. DI-Guy Graphic LOD
- Adjust polygon rendering based on viewed object
distance
- Avatars inherently have high polygon count
Quoted from BDI DI-Guy documentation
28. Motion Level of Detail
- Reduce computational needs by reducing avatar motions
- DI-Guy soldier characters support motion LODs
- Characters viewed from a long distance do not display as much detail in their motion as characters viewed up close
- Fewer joint positions are calculated and updated for display
- Motion LODs are not switched automatically by the graphics library
Quoted from BDI DI-Guy documentation
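Since the library does not switch motion LODs automatically, the application has to pick one from viewer distance. A hedged sketch of such a switch; the thresholds, level names, and joint counts are invented and not DI-Guy's actual values:

```python
# (max_distance_m, level_name, joints_updated) - illustrative numbers only.
MOTION_LODS = [
    (10.0, "full", 57),            # up close: every joint animated
    (50.0, "reduced", 18),         # mid range: major joints only
    (float("inf"), "coarse", 6),   # far away: root and limb roots
]

def select_motion_lod(distance_m: float):
    """Return the (name, joint count) for the first matching distance band."""
    for max_dist, name, joints in MOTION_LODS:
        if distance_m <= max_dist:
            return name, joints

near = select_motion_lod(5.0)
far = select_motion_lod(120.0)
```

The payoff is the bullet above: far-away characters update far fewer joints per frame, cutting per-avatar cost where the detail cannot be seen anyway.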
29. DI-Guy Motion LOD
Quoted from BDI DI-Guy documentation
30. Realistic Motion
- Used to best advantage, a DI-Guy character will move with realism, make smooth transitions from one action to the next, and maintain accurate position and orientation in the synthetic environment, all at the same time.
- There must be consistency among the speed, heading, position, and desired action. Such simulations should
- Allow sufficient time for each transition from one activity to the next,
- Limit accelerations to the human range, and
- Specify travel rates that are consistent with each gait.
- Humans (and animals) normally travel at a narrow range of speeds for each gait.
- This is different from cars, airplanes, and tanks.
Quoted from BDI DI-Guy documentation
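The consistency rules above can be expressed as two clamps: keep the commanded speed inside the current gait's band, and keep speed changes inside a human acceleration range. The speed bands and acceleration limit here are rough illustrative figures, not values from the DI-Guy documentation:

```python
# Illustrative gait speed bands (m/s) and human acceleration limit (m/s^2).
GAIT_SPEEDS_MPS = {"walk": (0.8, 2.0), "run": (2.0, 6.0)}
MAX_ACCEL_MPS2 = 3.0

def step_speed(current: float, target: float, gait: str, dt: float) -> float:
    """Advance speed toward target while respecting gait and acceleration."""
    lo, hi = GAIT_SPEEDS_MPS[gait]
    target = max(lo, min(hi, target))         # gait-consistent travel rate
    max_delta = MAX_ACCEL_MPS2 * dt           # human-range acceleration
    delta = max(-max_delta, min(max_delta, target - current))
    return current + delta

# An unrealistic command (10 m/s) gets clamped to the run gait's ceiling,
# then approached gradually rather than instantly.
speed = step_speed(current=1.0, target=10.0, gait="run", dt=0.1)
```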
31. DI-Guy LOD Demo
- Indirect control of
- Graphic LOD
- Motion LOD
- Gaze
- Aim
- State
- ...or direct limb control
32. User Controlled Motion
- When DI-Guy is driven by data from a live human (e.g., Treadport, I-Port, joysticks, Omni-Directional Treadmill, etc.), the interface device should have output filters that provide realistic transition rates, accelerations, and maximum travel speeds.
- The same kinds of output filters should be used for CGF/SAF.
- Variation will provide an appropriate trade-off between smoothness of motion and travel precision, depending on the requirements of your application.
Quoted from BDI DI-Guy documentation
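One simple form such an output filter could take is exponential smoothing followed by a hard cap at a plausible maximum human travel speed. This is a sketch of the idea, not the filter any particular device ships with; the coefficient and speed cap are assumptions:

```python
MAX_SPEED_MPS = 6.0  # illustrative cap on human travel speed

def filter_speed(prev_filtered: float, raw: float, alpha: float = 0.2) -> float:
    """Exponentially smooth a raw device speed, then clamp to human range."""
    smoothed = prev_filtered + alpha * (raw - prev_filtered)
    return min(MAX_SPEED_MPS, max(0.0, smoothed))

# A jumpy device suddenly reports 9 m/s; the filter ramps up instead of
# teleporting the avatar into an impossible sprint.
s = 0.0
for raw in [9.0, 9.0, 9.0]:
    s = filter_speed(s, raw)
```

The `alpha` knob is the smoothness-versus-precision trade-off the quoted passage mentions: smaller values give smoother motion, larger values track the device more faithfully.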
33. Networking Avatars
- Interactive avatars imply distributed networks
- Every human is unique; shouldn't avatars be?
- Avatars have a graphical description
- Graphical model
- Graphical LOD
- Avatars have a state description
- Motion model
- Motion LOD
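A networked avatar's state description boils down to a small record broadcast each update. The wire format below is hypothetical, in the spirit of the DIS lifeform records discussed on the next slides, but it is not the actual DIS PDU layout:

```python
import struct

# Invented layout: entity id (uint16), lifeform state code (uint8),
# position x/y/z (float32), heading (float32), network byte order.
FMT = "!HB3ff"

def pack_state(entity_id, state, pos, heading):
    return struct.pack(FMT, entity_id, state, *pos, heading)

def unpack_state(data):
    eid, state, x, y, z, heading = struct.unpack(FMT, data)
    return eid, state, (x, y, z), heading

pkt = pack_state(42, 3, (1.0, 2.0, 0.0), 1.57)
eid, state, pos, heading = unpack_state(pkt)
```

Keeping the state record this small is the point: the receiving host owns the graphical model and LODs locally, so only the compact motion state crosses the network.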
34. DIS Lifeform States
- DIS provides only a limited description for avatar control
35. DI-Guy DIS States
- DI-Guy enhances the DIS protocol
36. Other Networked Apps
- H-Anim and VRML provide standards
- Users can define models
- Animation is model-independent
- No automated LOD
- No motion LOD standard
37. Goals
- Virtual human avatars with an articulated joint structure allowing for both scripted movement and real-time networked control
- Produce an avatar that is as realistic as possible but can still be rendered efficiently on today's computers
- Platform-independent, open-source code that is deployable worldwide
- Visually compelling for acceptance
38. Related Work
- NPS Thesis
- Miller, Bachmann, Dutton, Norlander
- Links
- MV4473 Avatar Web (AvatarsOverview.htm)
- Other Web Sites (resourceLinks.htm)
39. Questions