1
Modelling Synergies in Large Human-Machine
Networked Systems
  • Research Team
  • CMU: K. Sycara, C. Lebiere, P. Scerri
  • Cornell: M. Campbell
  • George Mason: R. Parasuraman
  • MIT: J. How, M. Cummings
  • U of Pittsburgh: M. Lewis

2
Research Goals
  • Develop validated theories and techniques to
    predict behavior of large-scale, networked
    human-machine systems involving unmanned vehicles
  • Model human decision making efficiency in such
    networked systems
  • Investigate the efficacy of adaptive automation
    to enhance human-system performance

3
State of the Art Limitations
  • No effective models exist of large-scale interactions
    of human-agent systems, especially between intelligent
    information-processing agents and decision-making
    humans.
  • Human-in-the-loop experiments are needed to gather data.
  • Abstract models are needed to capture the important
    effects while remaining manageable.
  • High-fidelity cognitive models exist for a single human,
    but not for humans interacting with other humans.
  • These models must be extended to account for the
    impacts of interaction.
  • No established methods exist for effective model
    abstraction.
  • A process of abstraction and validation needs to
    be designed to promote confidence in the
    abstracted models.

4
Overall Approach
  • To cut the Gordian knot of tractability vs.
    fidelity, our modeling methodology is multi-level
    (in order of increasing abstraction):
  • human-in-the-loop
  • high-fidelity cognitive models
  • large-scale simulation
  • high-level abstractions

5
Approach
  • Human-in-the-loop: perform controlled experiments
    with human decision-makers in small-group
    settings.
  • The resulting data will be used to develop,
    constrain and validate high-fidelity cognitive
    models of the various operational and
    decision-making roles.
  • High-fidelity models will be validated in the
    same small-group settings as the human
    experiments, then used to generate predictions in
    larger but still tractable simulations (on the
    order of dozens of entities).
  • Results from those simulation runs will be used
    to develop intelligent agents with characteristics
    similar to the high-fidelity cognitive models but
    greater computational tractability.
  • Large-scale simulations will be validated
    against both human experimental data and
    cognitive-model simulations, and will be used in
    larger settings (on the order of hundreds of
    entities) to generate new predictions.
  • The dynamics of the level 3 simulations will be
    abstracted into highly abstract models (e.g.
    statistical mechanics models) that can describe
    the dynamics of systems involving many thousands
    of entities.
  • We will insert one or two humans or a small
    number of high fidelity models in larger
    simulations to provide additional
    cross-validation.
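
To make this cross-validation step concrete, the following is a minimal, illustrative sketch (not project code): a couple of stand-in "high-fidelity" agents are embedded among many cheap abstract agents, and the aggregate behavior of the mixed population is compared with an abstract-only run. The agent classes, parameters, and the load/latency relationship are all hypothetical.

import random
import statistics

class AbstractAgent:
    """Cheap abstract agent: noisy linear response time vs. task load (hypothetical)."""
    def respond(self, load):
        return 1.0 + 0.5 * load + random.gauss(0, 0.1)

class HighFidelityAgent:
    """Stand-in for a detailed cognitive model: response time grows sharply
    once load exceeds an attentional capacity (hypothetical parameters)."""
    def __init__(self, capacity=4.0):
        self.capacity = capacity
    def respond(self, load):
        overload = max(0.0, load - self.capacity)
        return 1.0 + 0.5 * load + 0.8 * overload ** 2 + random.gauss(0, 0.1)

def mean_response(agents, loads):
    return [statistics.mean(a.respond(load) for a in agents) for load in loads]

random.seed(0)
loads = [1, 2, 4, 6, 8]
mixed = [AbstractAgent() for _ in range(20)] + [HighFidelityAgent(), HighFidelityAgent()]
abstract_only = [AbstractAgent() for _ in range(22)]
# Divergence between the two runs at high load flags where the abstraction
# loses fidelity and needs refinement.
for load, m, a in zip(loads, mean_response(mixed, loads), mean_response(abstract_only, loads)):
    print(f"load={load}: mixed={m:.2f}  abstract-only={a:.2f}")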

6
Self-correcting Process
7
Research Strategy
  • Human interaction with unmanned vehicles (UVs):
    this domain incorporates the aspects of
    information fusion, interaction with automation,
    coordination, planning, and command
  • In our proposed studies we will systematically
    examine the effects of complexity on system
    performance, by varying such factors as the
    number and types of UVs, the amount of sensor
    information, and the number of nodes in the
    network.

8
Tasks
  • Information processing and fusion
  • Change in model performance as a function of change
    in network topology
  • Tradeoffs between delivering information and
    overwhelming attentional capacity (shallow
    hierarchies)
  • Planning and scheduling tasks
  • Robust scaling in human performance but
    deviations from optimum
  • Robustness to environmental change
  • Management and control of human assets
  • Integration of general and specialized skills

9
Issues
  • How to use human data to develop, parameterize
    and validate high-fidelity cognitive models
    without the level of experimental control
    typically achieved in the laboratory?
  • How to account for large expected variability in
    the data resulting from the combination of
    complex dynamic tasks and individual differences
    in ability, knowledge and strategy?
  • What aspects of the high-fidelity cognitive
    models and machine models are abstractable in
    computational agents and/or mathematical
    formulas?
  • What characteristics of the cognitive model must
    be retained?
  • Adaptivity and learning are essential
    characteristics of human performance that should
    be preserved in abstract agents or formulations.
  • What machine (e.g. UAV) characteristics must be
    retained?

10
Issues
  • Understand scaling properties of cognitive
    performance
  • Most experiments look at a single performance
    point rather than studying performance as a function
    of problem complexity, time pressure, etc.
  • Key component in abstracting performance at
    higher levels
  • Understand interaction between humans and
    machines
  • Most experiments study and model human
    performance under a fixed scenario that misses
    key dynamics of interaction
  • Key aspect of both system robustness and
    vulnerabilities
  • Understand generality and composability of
    behavior
  • Almost all models are developed for specific
    tasks rather than by assembling larger pieces of
    functionality from basic pieces
  • Key enabler of scaling models and abstracting
    their properties

11
Issue 1 Scaling Properties
  • Study human performance at multiple complexity
    points
  • Vary problem complexity (e.g. number of targets)
  • Vary information complexity (e.g. target
    characteristics)
  • Vary network topology (e.g. number of related
    partners)
  • Vary rate of change of environment (e.g.
    appearance or disappearance of targets, weather,
    network topology)
  • Quantify impact on all measures of performance
  • Direct performance (number of targets handled,
    etc.)
  • Situation awareness (various levels, memory-based
    measures)
  • Workload (both self-reporting and physiological
    measures)
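
As a concrete illustration of such a sweep, the sketch below (illustrative only, not project code) enumerates a factorial design over the complexity factors listed above and records placeholder values for the three classes of measures; the factor names, levels, and the run_trial stub are all hypothetical.

import csv
import itertools
import random
import sys

FACTORS = {
    "num_targets":  [5, 10, 20, 40],          # problem complexity
    "target_info":  ["sparse", "rich"],       # information complexity
    "num_partners": [1, 2, 4],                # network topology
    "change_rate":  [0.0, 0.1, 0.3],          # environmental rate of change
}

def run_trial(cell):
    """Placeholder for one human-in-the-loop or model run; returns fake measures."""
    random.seed(hash(tuple(sorted(cell.items()))) & 0xFFFF)
    return {
        "targets_handled": max(0, cell["num_targets"] - random.randint(0, 5)),
        "sa_score": round(random.uniform(0.4, 1.0), 2),   # situation awareness
        "workload": round(random.uniform(1.0, 10.0), 1),  # self-report scale
    }

writer = csv.writer(sys.stdout)
writer.writerow(list(FACTORS) + ["targets_handled", "sa_score", "workload"])
for values in itertools.product(*FACTORS.values()):
    cell = dict(zip(FACTORS, values))
    measures = run_trial(cell)
    writer.writerow(list(values) + list(measures.values()))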

12
Issue 2 Dynamic Interaction
  • Main problem is the degrees of freedom among
    multiple decision makers
  • A methodology has been developed to model
    multi-agent interactions in games and logistics
    (supply chain) problems
  • Develop a baseline model to capture first-order
    behavior
  • Replace most HITL trials with the baseline model(s)
    (see the sketch after this list)
  • Refine the model based on greater data accuracy
  • The methodology can be extended to multiple levels of
    our approach, each time abstracting to the next level
    in the hierarchy
  • Impact of fixed, flexible or emergent network
    organization
  • Vary degree of task integration (e.g. planning
    across boundaries)
  • Vary level of information sharing (e.g. through
    interface screens)
  • Accurate cognitive models for human-machine
    interaction
  • Adaptive interfaces (e.g. to predicted model
    workload)
  • Model-based autonomy (e.g. handle monitoring,
    routine decision-making)
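
The iterative replacement loop referenced above can be sketched as follows (illustrative only, not project code): a one-parameter baseline model stands in for most HITL trials and is refined against the fraction of trials on which human data is still collected. The response functions, learning rate, and sampling fraction are hypothetical.

import random

def human_decision(stimulus):
    """Stand-in for a HITL measurement (hypothetical linear response plus noise)."""
    return 2.0 * stimulus + random.gauss(0, 0.2)

class BaselineModel:
    """First-order model with a single gain parameter, refined incrementally."""
    def __init__(self, gain=1.0):
        self.gain = gain
    def decide(self, stimulus):
        return self.gain * stimulus
    def refine(self, stimulus, observed, lr=0.5):
        # Move the gain toward the observed human response (simple SGD step).
        self.gain += lr * (observed - self.decide(stimulus)) * stimulus

random.seed(1)
model = BaselineModel()
for step in range(200):
    stimulus = random.uniform(0, 1)
    if step % 5 == 0:                 # keep a small fraction of trials as HITL
        model.refine(stimulus, human_decision(stimulus))
    else:                             # the rest are served by the baseline model
        _ = model.decide(stimulus)
print(f"refined gain: {model.gain:.2f} (human stand-in gain is 2.0)")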

13
Issue 3 Behavior Abstraction
  • First two issues build solutions toward this one
  • Study of scaling properties helps capture
    response function for all aspects of target
    behavior
  • Abstraction methodology helps iterate and test
    models at various levels of abstraction to
    maximize retention
  • Issues
  • Grain scale of components (generic tasks, unit
    tasks?)
  • Attainable degree of fidelity at each level?
  • Capture individual differences or average,
    normative behavior?
  • The latter may miss key interaction aspects
    (outliers)
  • Individual differences as architectural
    parameters (working memory, speed)
  • Use cognitive model to generate data to train
    machine learning agent
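
The last bullet can be illustrated with a small sketch (not project code): a cognitive-model stub generates engage/ignore decisions from target value and workload, and a lightweight two-feature logistic model is fitted to the traces to act as the abstracted agent. The decision rule, features, and training settings are hypothetical.

import math
import random

def cognitive_model(target_value, workload):
    """Stand-in for a high-fidelity model: engage valuable targets unless overloaded."""
    p = 1.0 / (1.0 + math.exp(-(3.0 * target_value - 1.5 * workload)))
    return 1 if random.random() < p else 0

random.seed(2)
traces = []
for _ in range(2000):
    v, w = random.uniform(0, 1), random.uniform(0, 1)
    traces.append((v, w, cognitive_model(v, w)))

# Fit the abstracted agent (logistic regression) by plain gradient descent.
wv = wl = b = 0.0
for _ in range(100):
    for v, w, y in traces:
        p = 1.0 / (1.0 + math.exp(-(wv * v + wl * w + b)))
        grad = y - p
        wv += 0.05 * grad * v
        wl += 0.05 * grad * w
        b += 0.05 * grad
print(f"abstract agent: value weight={wv:.2f}, workload weight={wl:.2f}, bias={b:.2f}")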

14
Research Issues for Humans-UV Teams
  • Humans coupled to UVs
  • There are too many problem parameters; a more natural
    language is needed for specifying the desired goals,
    which can then be re-interpreted as an optimization
    problem to be solved
  • Humans coupled to information consensus
  • Used to ensure consistency in the situational
    awareness of the team
  • Computationally intensive filtering operation
  • Human operator has important information to
    provide in the form of target classifications or
    data from other sources
  • How best to convey to the operator the uncertainty
    in the current SA?
  • Combined with potential target value, it gives
    some sense of what should be addressed
  • How can the operator codify the uncertainty in new
    information inserted into the algorithm?
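
One simple way to fold an operator report into the consensus estimate, weighting it by a self-reported confidence mapped to a variance, is scalar Gaussian fusion. The sketch below is illustrative only; the confidence-to-variance mapping and the numbers are hypothetical.

def fuse(estimate, est_var, report, report_var):
    """Precision-weighted (Kalman-style) fusion of two scalar Gaussian estimates."""
    gain = est_var / (est_var + report_var)
    fused = estimate + gain * (report - estimate)
    fused_var = (1.0 - gain) * est_var
    return fused, fused_var

# The filter places the target at x = 10.0 with variance 4.0; the operator
# reports x = 14.0 and rates confidence "medium", mapped here to variance 2.0.
print(fuse(10.0, 4.0, 14.0, 2.0))   # fused estimate moves toward the report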

15
Controlling Robot Teams
  • How can teams of people control teams of UVs of
    increasing size?
  • What density of UVs can a human (or humans)
    control? (A first-order estimate is sketched after
    this list.)
  • What kinds of command are possible for a
    particular density?
  • Issue: As the complexity of the networked system
    increases, e.g. scaling up in numbers, system
    performance may degrade because the cognitive
    capacities of individual human supervisors will
    be exceeded.
  • Simply increasing the number of operators may not
    solve the problem due to increased coordination
    demands between hybrid team members
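
For the span-of-control question flagged above, a widely used first-order estimate from the human-robot interaction literature is the fan-out equation, which divides the time a vehicle can operate unattended by the operator attention it needs per servicing. The sketch below applies it with hypothetical timings.

def fan_out(neglect_time_s, interaction_time_s):
    """Fan-out estimate: FO = (neglect time + interaction time) / interaction time."""
    return (neglect_time_s + interaction_time_s) / interaction_time_s

# Hypothetical numbers: a UV runs unattended for 120 s and needs 20 s of operator
# attention per servicing, suggesting one operator can keep about 7 UVs busy.
print(fan_out(neglect_time_s=120, interaction_time_s=20))   # -> 7.0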

16
Real and Simulated Testbeds
17
Evaluation Measures
  • Develop predictive measures for human-UV teams,
    which include:
  • Efficiency with which operators and teams of
    operators allocate their attention to the UVs
    (two such measures are sketched after this list)
  • Efficiency with which an operator (and the system
    in general) allocates tasks
  • Costs associated with one or more operators
    switching between UV tasks
  • Impact of constraints such as information flow
    rates and decision-making rates
  • Models of how well operators collaborate with the
    UVs and other operators, planners, humans within
    the overall command structure and battlefield
  • Characteristics and limitations of system
    robustness in terms of unexpected failures and
    unanticipated inputs
  • Mechanisms for providing conformance monitoring
    feedback (e.g., indicating the potential for
    decision-making errors by a human operator to a
    higher-level commander).
  • These data will be used by cognitive modelers in
    the project to develop models capturing the
    coupling between the human operators and
    automated planners and its effect on performance.
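
Two of the measures listed above (attention allocation across UVs and task-switch frequency as a proxy for switching cost) can be computed from an operator event log. The sketch below is illustrative only; the log format of (timestamp, uv_id) attention samples is hypothetical.

from collections import Counter

# Hypothetical operator log: (timestamp in seconds, UV currently attended).
log = [(0, "uv1"), (5, "uv1"), (10, "uv2"), (15, "uv1"), (20, "uv3"), (25, "uv3")]

dwell = Counter(uv for _, uv in log)                                 # samples per UV
switches = sum(1 for (_, a), (_, b) in zip(log, log[1:]) if a != b)  # attention switches

print("attention share:", {uv: count / len(log) for uv, count in dwell.items()})
print("task switches:", switches)   # proxy for cumulative switching cost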

18
Research Evaluation
  • Evaluation will be done at each level
  • Varying the task
  • Varying the number of actors in the system
  • Varying the proportion of humans and robots
  • Varying the level of cognitive and machine agent
    fidelity
  • Evaluation will be done across levels
  • Devise and test abstractions that robustly map
    across models by retaining key characteristics
  • Take a small number of entities from the lower
    (higher-fidelity) level and test the higher level
    for compliance of input/output behavior

19
Example Evaluation Across Levels 2 and 3
  • It is infeasible to use the cognitive models in any
    large-scale tests.
  • A small subset of the nodes will be extracted from
    the large-scale simulation, with the input and
    output for those nodes generated by the
    simulation.
  • The behavior of this subset can be compared
    against the results of experiments with cognitive
    models for verification.
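
A minimal sketch of this cross-level check (illustrative only, not project code): the inputs logged for a few nodes of the large-scale (level 3) simulation are replayed through a higher-fidelity stand-in for the level 2 cognitive model, and the two output streams are compared. Both node functions and the tolerance are hypothetical.

import random
import statistics

def level3_node(x):
    """Abstract node as used in the large-scale (level 3) simulation (hypothetical)."""
    return 0.9 * x + 0.05

def level2_node(x):
    """Higher-fidelity stand-in for the level 2 cognitive model (hypothetical)."""
    return 0.9 * x + random.gauss(0.05, 0.02)

random.seed(3)
logged_inputs = [random.uniform(0, 1) for _ in range(100)]   # extracted from the big run
diffs = [abs(level3_node(x) - level2_node(x)) for x in logged_inputs]
print(f"mean |level 3 - level 2| output difference: {statistics.mean(diffs):.3f}")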

20
Project Plan (1)
  • Develop different scenarios (e.g. information
    dissemination, information fusion, plan
    generation) that allow
  • hybrid interactions of interest at each level and
  • testing of aggregate system properties of
    interest (e.g. vulnerability, stability of
    overall system behavior) at each level
  • Develop algorithms and techniques (e.g.
    appropriate simulation methods) that characterize
    aggregate system behavior at each level
  • Develop methods for mapping fundamental
    characteristics of system components and
    interactions across levels
  • Perform tests under different assumptions of
    system functionality (e.g. X% of sensor nodes
    fail, network capacity has decreased by Y%)
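
As an illustration of such degraded-functionality tests (not project code), the sketch below sweeps a fraction of failed sensor nodes and a reduced network capacity and reports a toy aggregate score; the score function and the levels swept are hypothetical.

import random

def system_score(sensor_fail_frac, capacity_frac, n_sensors=100):
    """Toy aggregate measure: surviving-sensor coverage throttled by network capacity."""
    alive = sum(1 for _ in range(n_sensors) if random.random() > sensor_fail_frac)
    return (alive / n_sensors) * min(1.0, capacity_frac)

random.seed(4)
for fail in (0.0, 0.1, 0.3, 0.5):
    for cap in (1.0, 0.75, 0.5):
        print(f"sensors failed={fail:.0%}  capacity={cap:.0%}  score={system_score(fail, cap):.2f}")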

21
Project Plan (2)
  • Design experiments where humans will interact
    with automation in various scenarios, e.g.
    supervisory control of assets, and with one
    another in various decision and information
    sharing tasks. These experiments will attempt to
    quantify types of human capabilities (e.g.
    learning, adaptation, planning) and their
    information processing and perceptual
    limitations.
  • Develop high-fidelity cognitive models to capture
    limits of information processing, constraints on
    attention, and decision-making bottlenecks, but also
    to incorporate pattern recognition and other
    capabilities such as learning and adaptivity that
    might remediate those shortcomings.
  • Investigate the interaction properties of
    cognitive models as well as their computational
    tractability as the number of models that
    interact with each other increases.
  • Discover appropriate abstractions of cognitive
    models that efficiently capture the basic
    information processing capability of humans. The
    models will capture the input and output
    characteristics of the node, without modeling the
    details of the cognitive processes by which the
    information is processed.

22
Project Plan (3)
  • Test and validate the machine models, cognitive
    models and hybrid models at each level
  • Develop techniques for validating the models
    across levels.
  • Use validated models for producing predictions of
    overall human-machine systems and recommendations
    for the design of future systems and
    organizations.
  • Develop strategies for mitigating performance
    degradation that focus on the use of different
    forms of adaptive or flexible automation
  • Contrast the different automation approaches so
    as to achieve an optimal balance between too much
    flexibility, which can increase cognitive
    workload, and too little, which can increase
    system unpredictability and decrease situation
    awareness
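
A minimal sketch of one such adaptive-automation strategy (illustrative only, not project code): the level of automation is raised when an estimated workload stays high and lowered when the operator is underloaded, trading workload relief against predictability and situation awareness. The level names and thresholds are hypothetical.

LEVELS = ["manual", "management_by_consent", "management_by_exception", "full_autonomy"]

def adapt(level_idx, workload, high=0.8, low=0.3):
    """Raise automation when workload is high, lower it when the operator is idle."""
    if workload > high and level_idx < len(LEVELS) - 1:
        return level_idx + 1   # offload routine decisions to automation
    if workload < low and level_idx > 0:
        return level_idx - 1   # hand control back to preserve SA and predictability
    return level_idx

level = 1
for workload in [0.4, 0.85, 0.9, 0.6, 0.2, 0.1]:
    level = adapt(level, workload)
    print(f"workload={workload:.2f} -> {LEVELS[level]}")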

23
Deliverables
  • Validated methodology, based primarily on
    feedback between experiments and simulations at
    different levels of granularity, for the study of
    complex human-machine systems
  • Analytical results for smaller-scale problems
    that can serve as islands of validation for the
    larger-scale simulations
  • Results on system performance for various
    phenomena of interest, e.g. unpredictable system
    behavior and vulnerability
  • Techniques for principled abstraction of human
    models and their behavior properties and effect
    on system performance.

24
Scientific Significance
  • A key research challenge in this work is to
    develop a multi-level process that grounds and
    validates the models while allowing broad
    questions to be answered with abstract models.
    The process by which this model is developed will
    be a key research contribution that will improve
    a range of other research programs that are of
    importance to the Air Force.
  • The overall four-level approach will be a
    publicly available, empirically grounded model of
    large-scale human-agent interaction. Thus, the
    overall model will serve as an invaluable resource
    for many other researchers working in this area.
  • The work will make specific contributions in
    areas ranging from human-agent interaction to
    statistical mechanics. These contributions will
    be of benefit to a wide audience.

25
Potential Applications to DoD
  • Network-centric warfare (NCW) offers both new
    opportunities and the possibility of unintended
    consequences or unanticipated changes to human
    roles. The proposed research explores these
    issues through:
  • mathematical models
  • simulation
  • cognitive models
  • The research will provide
  • Identification of potential bottlenecks,
    challenges to stability, and obstacles to human
    control
  • Solutions to these problems before they are
    encountered in the field

26
Outreach and Research Training of Students
  • 10 graduate students
  • 4 post doctoral fellows
  • Joint seminars at CMU and U. of Pittsburgh will
    be an ongoing activity throughout the duration of
    the project.
  • Results of this research will be incorporated in
    courses taught by the participating PIs.
  • Broader contributions to education will occur
    through scientific meetings, publications, and by
    maintaining a public project web site containing
    products of the research (e.g. code, data).

27
Roles of Team Members
  • Sycara: PI, overall technical and management
    responsibility; main focus: large-scale models,
    levels 3 and 4; secondary focus and
    collaborations at level 2
  • Campbell: decision models in levels 2 and 3
  • Cummings: human-UV modeling in level 1
  • How: human-UV automation in level 1; analysis and
    synthesis of hybrid controllers, level 4
  • Lebiere: high-fidelity cognitive models, levels 1
    and 2
  • Lewis: scaling span of control, levels 1 and 2
  • Parasuraman: adaptive human-machine automation,
    levels 1 and 2
  • Scerri: large-scale coordination, levels 3 and 4;
    collaborations at level 2