Visual Servoing Example - PowerPoint PPT Presentation

1
DARPA ITO/MARS Project Update
Vanderbilt University
A Software Architecture and Tools for Autonomous Robots that Learn on Mission
K. Kawamura, M. Wilkes, R. A. Peters II, D. Gaines
Vanderbilt University, Center for Intelligent Systems
http://shogun.vuse.vanderbilt.edu/CIS/IRL/
12 January 2000
2
Vanderbilt MARS Team
  • Kaz Kawamura, Professor of Electrical & Computer Engineering. MARS responsibility - PI, Integration
  • Dan Gaines, Asst. Professor of Computer Science. MARS responsibility - Reinforcement Learning
  • Alan Peters, Assoc. Professor of Electrical Engineering. MARS responsibility - DataBase Associative Memory, Sensory EgoSphere
  • Mitch Wilkes, Assoc. Professor of Electrical Engineering. MARS responsibility - System Status Evaluation
  • Jim Baumann, Nichols Research. MARS responsibility - Technical Consultant
  • Sponsoring Agency: Army Strategic Defense Command

3
A Software Architecture and Tools for Autonomous Mobile Robots That Learn on Mission
NEW IDEAS
  • Learning with a DataBase Associative Memory
  • Sensory EgoSphere
  • Attentional Network
  • Robust System Status Evaluation
  • Demo III
(graphic and schedule panels not transcribed)
IMPACT
Mission-level interaction between the robot and a human commander. Enables automatic acquisition of skills and strategies. Simplifies robot training via intuitive interfaces - programming by example.
4
Project Goal
  • Develop a software control system for autonomous
    mobile robots that can
  • accept mission-level plans from a human
    commander,
  • learn from experience to modify existing
    behaviors or to add new behaviors, and
  • share that knowledge with other robots.

5
Project Approach
  • Use IMA (Intelligent Machine Architecture) to map the problem to a set of agents.
  • Develop System Status Evaluation (SSE) for self
    diagnosis and to assess task outcomes for
    learning.
  • Develop learning algorithms that use and adapt
    prior knowledge and behaviors and acquire new
    ones.
  • Develop Sensory EgoSphere, behavior and task
    descriptions, and memory association algorithms
    that enable learning on mission.

6
MARS Project: The Robots
ATRV-Jr.
ISAC
HelpMate
7
The IMA Software Agent Structure of a Single Robot
8
Robust System Status Analysis
  • Timing information from communication between
    components and agents will be used.
  • Timing patterns will be modeled.
  • Deviations from normal indicate discomfort.
  • Discomfort measures will be combined to provide
    system status information.
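The timing-deviation idea above can be sketched in a few lines. Everything here is an illustrative assumption, not the project's implementation: a Gaussian model of inter-message intervals, a z-score "discomfort" measure, and the hypothetical names `TimingMonitor`, `system_status`, and the 3-sigma threshold.

```python
from statistics import mean, stdev

class TimingMonitor:
    """Models normal inter-message timing and reports 'discomfort'
    as deviation from the learned pattern (hypothetical sketch)."""

    def __init__(self, baseline_intervals):
        # Baseline: timing samples gathered while the system is healthy.
        self.mu = mean(baseline_intervals)
        self.sigma = stdev(baseline_intervals)

    def discomfort(self, interval):
        # Z-score magnitude: near 0 when timing is normal, grows with deviation.
        return abs(interval - self.mu) / self.sigma

def system_status(discomforts, threshold=3.0):
    # Combine per-channel discomfort measures into one status flag.
    return "anomalous" if max(discomforts) > threshold else "nominal"

# Healthy baseline of ~0.1 s message intervals (made-up numbers).
monitor = TimingMonitor([0.10, 0.11, 0.09, 0.10, 0.12, 0.10])
print(system_status([monitor.discomfort(0.10)]))  # nominal
print(system_status([monitor.discomfort(0.50)]))  # anomalous
```

Any combination rule (max, weighted sum, voting) could replace the `max` here; the slide leaves the combination method open.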

9
What Do We Measure?
  • Visual Servoing Component: error vs. time
  • Arm Agent: error vs. time, proximity to unstable points
  • Camera Head Agent: 3D gaze point vs. time
  • Tracking Agent: target location vs. time
  • Vector Signals/Motion Links: log when data is updated
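A minimal way to capture these per-component time series is a timestamped log keyed by component name. This sketch is illustrative only; the class name `MeasurementLog` and the channel names are assumptions, not part of the MARS codebase.

```python
import time

class MeasurementLog:
    """Hypothetical sketch: per-component time series of the measured
    quantities (servo error, gaze point, target location, ...)."""

    def __init__(self):
        self.series = {}  # component name -> list of (timestamp, value)

    def record(self, component, value, t=None):
        # Use a monotonic clock unless an explicit timestamp is given.
        t = time.monotonic() if t is None else t
        self.series.setdefault(component, []).append((t, value))

    def latest(self, component):
        return self.series[component][-1]

log = MeasurementLog()
log.record("visual_servo_error", 0.42, t=0.0)
log.record("visual_servo_error", 0.17, t=0.1)
print(log.latest("visual_servo_error"))  # (0.1, 0.17)
```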

10
(No Transcript)
11
Commander Interface
12
Commander Interface
13
Commander Interface
14
Obstacle Avoidance
15
Planning/Learning Objectives
  • Integrated Learning and Planning
    - learn skills, strategies, and world dynamics
    - handle large state spaces
    - transfer learned knowledge to new tasks
    - exploit a priori knowledge
  • Combine Deliberative and Reactive Planning
    - exploit predictive models and a priori knowledge
    - adapt given actual experiences
    - make cost-utility trade-offs

16
Overview of Approach
17
Example: Different Terrains
18
Generate Abstract Map
  • Nodes selected based on learned action models
  • Each node represents a navigation skill

19
Generate Plan in Abstract Network
  • Plan makes cost-utility trade-offs
  • Plans updated during execution
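Planning over the abstract map can be sketched as least-expected-cost search over a graph whose nodes are navigation skills and whose edge weights come from the learned action models. The graph, skill names, and costs below are invented for illustration; the slides do not specify the search algorithm (Dijkstra is used here as one reasonable choice).

```python
import heapq

# Hypothetical abstract map: nodes are navigation skills, edge weights
# are expected traversal costs taken from learned action models.
abstract_map = {
    "start":       {"cross_field": 4.0, "follow_road": 2.0},
    "cross_field": {"goal": 1.0},
    "follow_road": {"goal": 5.0},
    "goal":        {},
}

def plan(graph, source, target):
    """Least-expected-cost plan (Dijkstra). Costs can be re-estimated
    and the plan regenerated during execution, per the slide."""
    frontier = [(0.0, source, [source])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(plan(abstract_map, "start", "goal"))  # (5.0, ['start', 'cross_field', 'goal'])
```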

20
Planning/Learning Status
  • Action Model Learning
    - adapted MissionLab to allow experimentation with terrain conditions
    - using regression trees to build action models
  • Plan Generation
    - developed a prototype Spreading Activation Network (SAN)
    - using it to evaluate the potential of SANs for plan generation
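A regression-tree action model can be illustrated with a minimal single-split tree (a "stump") that predicts an action outcome from a terrain feature. The feature, the data, and the stump itself are stand-in assumptions; the actual MARS work would use full regression trees over richer features.

```python
def fit_stump(xs, ys):
    """Minimal regression-tree sketch: one split minimizing squared
    error, standing in for a full regression tree that maps terrain
    features to expected action outcomes."""
    best = None
    xs_sorted = sorted(xs)
    for i in range(1, len(xs)):
        thresh = (xs_sorted[i - 1] + xs_sorted[i]) / 2
        left = [y for x, y in zip(xs, ys) if x <= thresh]
        right = [y for x, y in zip(xs, ys) if x > thresh]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, thresh, ml, mr)
    _, thresh, ml, mr = best
    return lambda x: ml if x <= thresh else mr

# Terrain roughness -> observed traversal time (made-up data).
roughness = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
duration  = [1.0, 1.1, 0.9, 3.0, 3.2, 2.8]
model = fit_stump(roughness, duration)
print(model(0.15), model(0.85))  # smooth terrain is fast, rough is slow
```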

21
Role of ISAC in MARS
ISAC is a testbed for learning complex,
autonomous behaviors by a robot under human
tutelage.
  • Inspired by the structure of vertebrate brains
  • a fundamental human-robot interaction model
  • sensory attention and memory association
  • learning sensory-motor coordination (SMC)
    patterns
  • learning the attributes of objects through SMC

22
System Architecture
23
Next Up: Peer Agent
  • We are currently developing the peer agent.
  • The peer agent encapsulates the robot's understanding of and interaction with other (peer) robots.

24
System Architecture: High-Level Agents
Due to the flat connectivity of IMA primitives, all high-level agents can communicate directly if desired.
25
Robot Learning Procedure
  • The human programs a task by sequencing component
    behaviors via speech and gesture commands.
  • The robot records the behavior sequence as a
    finite state machine (FSM) and all sensory-motor
    time-series (SMTS).
  • Repeated trials are run. The human provides
    reinforcement feedback.
  • The robot uses Hebbian learning to find
    correlations in the SMTS and to delete spurious
    info.
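The Hebbian thinning step above can be sketched with the classic update rule Δw = η·x·y plus a decay term: sensory channels that co-fire with motor events grow in weight, while uncorrelated ("spurious") channels fade and can be pruned. The rates, data, and names here are illustrative, not taken from the project.

```python
# Hebbian weight update with decay (illustrative rates).
eta, decay = 0.1, 0.05

def hebbian_weights(sensor_series, motor_series):
    """One weight per sensory channel, updated over the time series."""
    weights = [0.0] * len(sensor_series)
    for t in range(len(motor_series)):
        for i, channel in enumerate(sensor_series):
            # Grow with co-activation, shrink by a small decay each step.
            weights[i] += eta * channel[t] * motor_series[t] - decay * weights[i]
    return weights

motor      = [1, 1, 0, 1, 0, 1, 1, 0]   # motor event indicator
correlated = [1, 1, 0, 1, 0, 1, 1, 0]   # fires with the motor event
spurious   = [0, 1, 0, 0, 1, 0, 0, 1]   # unrelated noise
w = hebbian_weights([correlated, spurious], motor)
print(w)  # the correlated channel ends with the larger weight
```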

26
Robot Learning (cont'd)
  • The robot extracts task dependent SMC info from
    the behavior sequence and the Hebbian-thinned
    data.
  • SMC occurs by associating sensory-motor events with behavior nodes in the FSMs.
  • The FSM is transformed into a spreading
    activation network (SAN).
  • The SAN becomes a task record in the database associative memory (DBAM) and is subject to further refinement.
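The FSM-to-SAN transformation can be illustrated as follows: each behavior state becomes a SAN node, each transition a weighted link, and activation injected at one node spreads along the links. The behavior names, gain, and round count are invented for the sketch; the slides do not give the actual spreading rule.

```python
# Illustrative FSM: a simple behavior sequence recorded during training.
fsm_transitions = [("locate", "reach"), ("reach", "grasp"), ("grasp", "lift")]

def spread(transitions, source, rounds=3, gain=0.5):
    """Spread activation from `source` along FSM-derived links;
    each round passes a fraction `gain` of a node's activation onward."""
    activation = {n: 0.0 for edge in transitions for n in edge}
    activation[source] = 1.0
    for _ in range(rounds):
        nxt = dict(activation)
        for a, b in transitions:
            nxt[b] += gain * activation[a]   # activation flows along links
        activation = nxt
    return activation

act = spread(fsm_transitions, "locate")
print(act)  # activation decays with distance from the source behavior
```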

27
Human Agent: Human Detection
28
Human Agent: Recognition
29
Human Agent: Face Tracking
30
Schedule