NSS NSB Brief

Transcript and Presenter's Notes

1
http://www.metsci.com
Simulation Sciences Division (SSD)
Progress in Using Entity-Based Monte Carlo
Simulation With Explicit Treatment of C4ISR to
Measure IS Metrics
Prepared by Dr. Bill Stevens, Metron, for the IS
Metrics Workshop, 28-29 March 2000
Corporate Headquarters: 11911 Freedom Drive, Suite 800,
Reston, VA 20190-5602, (703) 787-8700 (Voice),
(703) 787-3518 (FAX)
Simulation Sciences Division: 512 Via de la Valle,
Suite 301, Solana Beach, CA 92075-2715,
(858) 792-8904 (Voice), (858) 792-2719 (FAX)
2
OUTLINE
  • Approach
  • Key Metrics Related Details
    • Basic Monte Carlo Metrics and Statistics
    • Cause-and-Effect Analysis
    • Sensitivity Analysis
    • Hypothesis Testing
  • Examples
    • CINCPACFLT IT-21 Assessment
    • FBE-D
  • Lessons-Learned and Challenges

3
Entity-Based Monte Carlo Simulation with Explicit
C4ISR
  • Provides one means to directly measure relevant
    IS metrics in mission-to-campaign-level
    scenarios and to assess the impact of IT and WPR
    improvements on warfighting outcomes.
  • Explicit C4ISR includes representation of (see
    the sketch after this list):
    • Platforms, systems, and commanders,
    • Command organization (group, mission, platform),
    • Commanders' plans and doctrine,
    • Information collection,
    • Information dissemination,
    • Tactical picture processing, and
    • Warfighting interactions.
  • Provides a means to capture, simulate/view, and
    quantify the performance of alternative C4ISR
    architectures and warfighting plans.
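As an illustration only (this brief does not show the actual NSS/SSD data model), the represented elements listed above could be organized along the following lines; every class and field name here is hypothetical:

    # Hypothetical sketch of the represented elements; class and field names
    # are illustrative only, not the actual NSS/SSD data model.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Platform:
        name: str
        sensors: List[str] = field(default_factory=list)    # information collection
        comms: List[str] = field(default_factory=list)       # information dissemination

    @dataclass
    class Commander:
        name: str
        level: str                                            # "group", "mission", or "platform"
        plan: Dict[str, str] = field(default_factory=dict)    # commander's plan and doctrine
        subordinates: List["Commander"] = field(default_factory=list)
        picture: Dict[str, dict] = field(default_factory=dict)  # tactical picture held by this commander

    # Each Monte Carlo replication steps platforms forward, exchanges reports up
    # and down the command organization, updates each commander's tactical
    # picture, and resolves warfighting interactions.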

4
Key Metrics Related Details:
Basic Metrics and Statistics
  • Typical Monte Carlo metrics are random variables
    X computed for each replication (Xn is the value
    of X in replication n). Examples:
    • Percent of threat subs tracked/trailed/killed on
      D+10,
    • Average threat sub AOU on D+0, etc.
  • Three key quantities should be computed for each
    X (see the sketch below).
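The three key quantities are not enumerated in this transcript; a common choice for Monte Carlo output analysis is the sample mean, the sample standard deviation, and a confidence interval for the mean, as in this minimal sketch (the data values are invented for illustration):

    # Minimal sketch: summarize a per-replication metric X (one value Xn per
    # replication). The three quantities computed here (mean, standard
    # deviation, 95% confidence interval for the mean) are a common choice,
    # not necessarily the three the briefing had in mind.
    import math

    def summarize(x):
        """x: list of per-replication values Xn."""
        n = len(x)
        mean = sum(x) / n
        var = sum((xi - mean) ** 2 for xi in x) / (n - 1)   # sample variance
        std = math.sqrt(var)
        half = 1.96 * std / math.sqrt(n)                    # normal-approximation 95% CI half-width
        return mean, std, (mean - half, mean + half)

    # Example: percent of threat subs tracked on D+10, one value per replication
    print(summarize([72.0, 68.5, 75.0, 70.2, 69.8]))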

5
Key Metrics Related Details:
Cause-and-Effect Analysis
Relate Data Recorded Above to Force Effectiveness
Metrics - Force Attrition and Damage, Resources
Expended, and Commanders' Objectives Attained.
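As one illustrative way to relate recorded per-replication data to a force effectiveness metric (the brief does not specify an analysis method, and the record field names below are hypothetical), a simple cross-replication correlation can be computed:

    # Illustrative only: correlate a recorded C4ISR quantity (e.g., time to
    # first detection) with a force effectiveness outcome (BLUE attrition)
    # across replications. Record field names are hypothetical.
    def pearson(records, x_key="time_to_first_detection", y_key="blue_platforms_lost"):
        xs = [r[x_key] for r in records]
        ys = [r[y_key] for r in records]
        n = len(records)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
        sx = (sum((x - mx) ** 2 for x in xs) / (n - 1)) ** 0.5
        sy = (sum((y - my) ** 2 for y in ys) / (n - 1)) ** 0.5
        return cov / (sx * sy)   # Pearson correlation across replications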
6
Key Metrics Related Details:
Excursion Analysis
  • Monte Carlo runs can be organized in the form of
    a scenario baseline, scenario excursion sets, and
    selected metrics and metric breakdowns. Example
    excursion sets:
    • SA-10 Pks: 0.0, 0.2, 0.4, 0.6
    • CV-68 VA Squadron: squadron-x, squadron-y,
      squadron-z
  • The resulting excursion-set sensitivity graphs
    can then be generated, as sketched below.

[Graph: Number of BLUE Fighters Killed vs. Pk, with
one curve each for Squadron X, Squadron Y, and
Squadron Z]
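A minimal sketch of how such an excursion run matrix and sensitivity summary might be organized; run_replications() below is a synthetic stand-in for the actual simulation driver, and the numbers it returns are invented:

    # Sketch: organize runs as excursion sets (SA-10 Pk x VA squadron) around
    # a scenario baseline and tabulate the mean of a selected metric per
    # excursion.
    import random
    from itertools import product

    SA10_PKS = [0.0, 0.2, 0.4, 0.6]
    SQUADRONS = ["squadron-x", "squadron-y", "squadron-z"]

    def run_replications(pk, squadron, n_reps=30):
        # Stand-in for the entity-based Monte Carlo simulation: returns
        # synthetic per-replication counts of BLUE fighters killed.
        base = {"squadron-x": 2.0, "squadron-y": 3.0, "squadron-z": 4.0}[squadron]
        return [max(0.0, random.gauss(base + 10.0 * pk, 1.0)) for _ in range(n_reps)]

    results = {}
    for pk, squadron in product(SA10_PKS, SQUADRONS):
        xs = run_replications(pk, squadron)
        results[(pk, squadron)] = sum(xs) / len(xs)   # mean BLUE fighters killed

    # Plotting results as one curve per squadron against Pk reproduces the
    # sensitivity graph described on this slide.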
7
Key Metrics Related Details:
Hypothesis Testing
  • Many typical study objectives can be addressed
    through the use of statistical hypothesis
    testing.
  • As an example, one could employ hypothesis
    testing to test H0 vs. H1:
    • H0: μX > μY
    • H1: μX < μY
  • and thus determine whether squadron X is
    statistically more or less survivable than
    squadron Y for a given SAM configuration.
  • Standard tests can be applied as a function of
    (α, β), where α (β) is the probability of falsely
    rejecting (accepting) H0 (see the sketch after
    this list).
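The brief does not say which standard test was applied; as one sketch, a one-sided two-sample (Welch) t-test on per-replication losses can decide between H0 and H1 at significance level α, with β then controlled by the number of replications:

    # Sketch only: one-sided Welch t-test of H0: muX > muY vs. H1: muX < muY
    # on per-replication "BLUE fighters killed" for squadrons X and Y.
    from scipy import stats

    def x_more_survivable(x_losses, y_losses, alpha=0.05):
        t_stat, p_value = stats.ttest_ind(x_losses, y_losses,
                                          equal_var=False, alternative="less")
        return p_value < alpha   # True: reject H0, i.e., X loses fewer fighters than Y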

8
Examples: CINCPACFLT IT-21 Assessment
Simulation revealed that the IT-21 ground picture
would have a much improved ID rate.

9
Examples: CINCPACFLT IT-21 Assessment
An on-the-fly ATO concept was proposed to leverage
the improved ID rates.

10
Examples: CINCPACFLT IT-21 Assessment
Combined IT and process improvements yield
speed-of-command and commander's attrition-goal
timeline improvements.

11
Examples: Fleet Battle Experiment Delta (FBE-D)
The MBC/C7F hypothesized that distributed surface
picture management and distributed
localization/prosecution asset allocation,
leveraging planned IT-21 improvements, would
result in significant improvements in CSOF
mission effectiveness.
12
Examples: Fleet Battle Experiment Delta (FBE-D)
M&S was employed to model the CSOF threat and
US/ROK surveillance, localization, and prosecution
assets. Live operators interacted with the
simulation by making surveillance, localization,
and prosecution asset allocations. These asset
allocations were fed into the simulation to
provide operator feedback and to assess the
effectiveness of the experimental distributed C2
architecture.
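A very rough sketch of the operator-in-the-loop cycle described above; the simulation and console interfaces (current_tactical_picture, get_allocations, apply_allocations, step) are hypothetical names, not the actual FBE-D software:

    # Hypothetical sketch of a human-in-the-loop step: each cycle publishes the
    # current picture to live operators, collects their surveillance,
    # localization, and prosecution asset allocations, and feeds those
    # allocations back into the entity-based simulation.
    def run_operator_in_the_loop(sim, operator_console, n_steps):
        for _ in range(n_steps):
            picture = sim.current_tactical_picture()                 # shown to operators
            allocations = operator_console.get_allocations(picture)  # operator decisions
            sim.apply_allocations(allocations)                       # fed into the simulation
            sim.step()                                               # advance the simulation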
13
Examples: Fleet Battle Experiment Delta (FBE-D)
A novel live operator-to-simulation voice- and
GUI-based approach was employed to effect the
desired virtual experimentation environment.
Pictured here is the air asset interface ...
14
Examples: Fleet Battle Experiment Delta (FBE-D)
The FBE-D distributed C2 architecture plus new
in-theater attack asset capabilities yielded the
surprising result that the assessed CSOF threat
could be countered in Day 01 of the Korean War
Plan. Post-analysis, pictured below, was employed
to assess the sensitivity of this result to
different force laydowns.
15
Lessons-Learned and Challenges
  • Lessons-Learned
    • C4ISR architectures and C2 decision processes
      can be explicitly represented at the commander,
      platform, and system levels. Detailed
      alternatives can be explicitly represented and
      assessed.
    • Simulation supports detailed observation of
      C4ISR architecture in n-sided campaign- and
      mission-level scenarios.
    • Facilitates/forces the community to think
      through proposed C4ISR architectures.
    • ID of key performance drivers and assessment of
      the warfighting impact of technology initiatives
      using Monte Carlo simulation is feasible.
  • Challenges
    • Detailed C4ISR assessments require consideration
      of nearly all details associated with planning
      and executing a C4ISR exercise or experiment.
    • Collection of valid platform, system, and (in
      particular) C2 data and assumptions for friendly
      and threat forces is an issue.
    • Campaign-level decisions (e.g., determining the
      commander's objectives) are not easily handled.
    • Scenarios in which major re-planning (e.g.,
      modifying the commander's objectives) is
      warranted are not easily handled.
    • Execution times limit the analyses which can
      reasonably be performed.