1
RTI Evaluation Tool
Julian Chernick
Gina Oesterreich
2
RTI Evaluation Tool Overview
  • Ascertain performance information on RTIs to
    select the RTI that best meets the performance
    needs of the simulation.
  • Development of a Test Bed to evaluate multiple
    implementations of the High Level Architecture
    (HLA) Interface Specification, version 1.3.
  • Focus on RTTC's Infrared Tactical Missile
    Simulation by utilizing its FOM and
    representative parameter levels.
  • RTI 1.3NG was evaluated.

3
RTI Evaluation Tool Overview (cont.)
  • Specific Tasks
    • Metrics definition
    • Test Methodology definition
    • Results Format definition
    • Test Plan
    • RTI Evaluation Tool Development
    • Evaluation of RTI 1.3NG and Test Report
    • RTI Evaluation Tool Final Report
  • Test Bed
    • Three Windows NT 4.0 PCs
    • Locally isolated 10 Mbits/sec Ethernet LAN
    • Federation Execution Planner's Workbook

4
Metrics Definition
  • Performance Metrics
    • Attribute Update Latency
    • Attribute Update Throughput
    • Network Overhead
    • Object Registration Throughput
    • Attribute Ownership Transfer Throughput
    • Time Advance Grant Throughput
  • DMSO Benchmark Programs

5
RTI Evaluation Tool Development
  • Maximum reuse of Benchmark programs
  • Incorporation of RTTC FOM into the RTI Evaluation
    Tool
  • RTTC FOM was modified to add one object and
    several interactions required by the Benchmark
    test federates
  • This new FOM was used to generate the FED files
    that were used by the RTI during the evaluation
    of RTI 1.3NG

6
RTI Evaluation Tool Development (cont.)
  • Attribute Update Latency
    • Elapsed time starting immediately before the
      Update Attribute Values (UAV) service call and
      ending when the corresponding Reflect Attribute
      Values (RAV) callback is executed on a receiving
      federate (see the sketch below).
  • Parametric variations
    • Attribute set size
    • Transportation type
    • Number of objects
    • Time Management
  • Modifications to Latency Benchmark
    • Transportation Type
    • Time Management
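
  Not part of the original slides: a minimal Python
  sketch of the latency measurement defined above,
  using hypothetical sender/receiver wrapper objects
  whose method names stand in for the RTI services
  named on this slide (the actual tool builds on the
  DMSO Latency Benchmark).

    import time

    def measure_update_latency(sender, receiver, attributes, timeout_s=5.0):
        """Time one Update Attribute Values (UAV) call until the matching
        Reflect Attribute Values (RAV) callback runs on the receiver.
        sender/receiver and their members are hypothetical wrappers."""
        receiver.reflected = False                  # cleared before the update
        start = time.perf_counter()                 # immediately before the UAV call
        sender.update_attribute_values(attributes)  # stands in for the UAV service
        # HLA 1.3 delivers callbacks from tick(), so poll until the RAV arrives.
        while not receiver.reflected:
            receiver.tick()
            if time.perf_counter() - start > timeout_s:
                raise TimeoutError("no RAV callback received")
        return time.perf_counter() - start          # elapsed UAV-to-RAV time, seconds

  In the tool this measurement is repeated across the
  parametric variations listed above and reported per
  combination.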

7
RTI Evaluation Tool Development (cont.)
  • Attribute Update Throughput
    • The maximum rate at which federates can
      exchange attribute updates per second with zero
      percent message loss (see the sketch below).
  • Parametric variations
    • Attribute set size
    • Transportation type
    • Attribute Update Methodology
    • Time Management
  • Modifications to Throughput Benchmark
    • Transportation Type
    • Time Management
    • Attribute Update Methodology
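
  Not from the original slides: a sketch of the
  throughput measurement under the definition above,
  again with hypothetical sender/receiver wrappers
  (the real test is a modified DMSO Throughput
  Benchmark).

    import time

    def measure_update_throughput(sender, receiver, attributes, n_updates=10_000):
        """Send a fixed batch of updates and report the sustained rate,
        verifying that every update was reflected (zero percent loss).
        All names on sender/receiver are hypothetical wrappers."""
        receiver.received = 0
        start = time.perf_counter()
        for _ in range(n_updates):
            sender.update_attribute_values(attributes)  # one UAV per update
            sender.tick()                               # let the RTI make progress
        while receiver.received < n_updates:   # drain the remaining RAV callbacks
            receiver.tick()                    # (would stall on a lossy transport)
        elapsed = time.perf_counter() - start
        return n_updates / elapsed             # attribute updates per second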

8
RTI Evaluation Tool Development (cont.)
  • Object Registration Throughput
    • The maximum rate at which a federate can
      register objects per second (see the sketch
      below).
  • Parametric variations
    • Number of federates
    • Number of objects
  • Used the modified Throughput Benchmark
    • No additional modifications required for this
      test
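
  A sketch of the registration-rate measurement (not
  in the original); register_object is a hypothetical
  wrapper for the RTI's Register Object Instance
  service.

    import time

    def measure_registration_throughput(federate, object_class, n_objects=1_000):
        """Register a batch of object instances and report the rate."""
        start = time.perf_counter()
        handles = [federate.register_object(object_class)
                   for _ in range(n_objects)]
        elapsed = time.perf_counter() - start
        return len(handles) / elapsed   # object registrations per second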

9
RTI Evaluation Tool Development (cont.)
  • Attribute Ownership Transfer Throughput
    • The maximum rate at which attribute ownership
      transfers can occur between two federates per
      second (see the sketch below).
  • Parametric variations
    • Number of ownership transfers
  • Modifications to Ownership Benchmark
    • DMSO Ownership Benchmark limitation
    • Not modified; RTTC has indicated that this is
      not a high-priority performance measure for
      their current application.
    • The unmodified Ownership Benchmark was used for
      the RTI 1.3NG evaluation.
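
  A sketch of the ownership-transfer measurement (not
  in the original); the acquirer object and its
  members are hypothetical, and only the acquiring
  federate's side of the transfer is shown.

    import time

    def measure_ownership_throughput(acquirer, attributes,
                                     n_transfers=100, timeout_s=30.0):
        """Request a series of attribute ownership transfers and count
        how many complete per second."""
        start = time.perf_counter()
        for _ in range(n_transfers):
            acquirer.acquire(attributes)        # request one ownership transfer
        while acquirer.transfers_completed < n_transfers:
            acquirer.tick()                     # deliver ownership callbacks
            if time.perf_counter() - start > timeout_s:
                break                           # give up on a stalled transfer
        elapsed = time.perf_counter() - start
        return acquirer.transfers_completed / elapsed  # transfers per second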

10
RTI Evaluation Tool Development (cont.)
  • Time Advance Grant Throughput
    • The maximum rate, in grants per second, at
      which a federation can grant time advances (see
      the sketch below).
  • Parametric variations
    • Number of federates
  • Modifications to Time Advance Benchmark
    • None required
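
  A sketch of the grant-rate measurement (not in the
  original), with hypothetical wrappers standing in
  for the Time Advance Request service and the Time
  Advance Grant callback.

    import time

    def measure_time_advance_throughput(federate, step=1.0, n_grants=1_000):
        """Issue Time Advance Requests in a loop and count grants per second."""
        logical_time = 0.0
        start = time.perf_counter()
        for _ in range(n_grants):
            logical_time += step
            federate.request_time_advance(logical_time)  # Time Advance Request
            while federate.granted_to < logical_time:    # wait for the grant
                federate.tick()
        elapsed = time.perf_counter() - start
        return n_grants / elapsed                        # grants per second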

11
RTI Evaluation Tool Development (cont.)
  • Network Overhead
    • A measure of the traffic on the network other
      than that requested by the user through the RTI
      Update Attribute Values (UAV) service call (see
      the sketch below).
  • Parametric variations
    • Transportation type
    • Attribute Update Methodology
    • Time Management
  • Additional modification to the Throughput
    Benchmark
    • Isolate the UAV() service call to capture the
      relevant network packets
    • EtherPeek used to capture network traffic
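
  A sketch of the overhead bookkeeping implied above
  (not in the original): subtract the user-requested
  payload from the bytes captured on the wire during
  an isolated batch of UAV calls. All names and the
  example figures are illustrative.

    def network_overhead(captured_bytes, n_updates, payload_bytes_per_update):
        """Everything captured beyond the requested payload is RTI and
        transport overhead."""
        requested = n_updates * payload_bytes_per_update  # user-requested data
        overhead = captured_bytes - requested
        return overhead, overhead / captured_bytes        # bytes and fraction

    # e.g. 10,000 updates of a 64-byte attribute set captured as 1.2 MB
    print(network_overhead(1_200_000, 10_000, 64))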

12
RTI 1.3NG Test Results
  • Total of 65 tests run against RTI 1.3NG
    • 49 tests provided good results
    • 13 tests failed
      • Update Latency failed for a number of objects
        of 1000 (11 tests)
      • Update Latency failed for a number of objects
        of 100 and an attribute set size of 2048 when
        using Best Effort transportation (1 test)
      • Object Registration failed for 10 objects
        with 2 federates (1 test)
    • 3 tests provided unexpected results
  • Network overhead results are inconclusive

13
RTI 1.3NG Test Results (continued)
  • (Results charts shown on slides 13-18.)
19
Lessons Learned
  • Benchmarks not well documented
  • Benchmarks not designed with substituting
    simulation data in mind
  • Update Latency bug
  • Object Registration bug
  • Sporadic errors during some tests
  • Use of tick()
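
  Not from the original slides: the tick() lesson
  stems from the HLA 1.3 callback model, in which the
  RTI invokes federate-ambassador callbacks only
  while the federate is inside tick(), so every wait
  in the benchmarks becomes a polling loop. A minimal
  sketch with a hypothetical federate wrapper:

    import time

    def wait_for(federate, predicate, timeout_s=5.0):
        """Keep ticking until the expected callback has fired or we time out."""
        deadline = time.perf_counter() + timeout_s
        while not predicate():
            federate.tick()        # gives the RTI a chance to deliver callbacks
            if time.perf_counter() > deadline:
                return False       # timed out without the expected callback
        return True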

20
Possible RTI Evaluation Tool Extensions
  • Evaluation of the Mäk RTI
  • Number of federates (greater than 2)
  • Number of federations (greater than 1)
  • Effect of late arriving federate on update rate
  • Object interaction latency and throughput
  • Effect of federation complexity on latency
  • Modifications to Throughput Benchmark for
    attribute update methodology.

21
Possible RTI Evaluation Tool Extensions (cont.)
  • Modifications to Ownership Benchmark to obtain a
    more meaningful measurement
  • Measure throughput on a WAN, across a router, or
    with multicast
  • Follow-on analysis to explain complex trends
    • Sporadic errors
    • Test failures
    • Local minima in the test results
    • Network overhead results