Title: IP Performance Metrics: Metrics, Tools, and Infrastructure
- Guy Almes
- January 30, 1997
Outline
- Background
- IETF IPPM effort
- IPPM Activities outside IETF Scope
- Technical Approaches
- One Example of Measurement Infrastructure
Background
- Internet topology is increasingly complex
- Load grows (even) faster than capacity
- The relationships among networks are increasingly competitive
- Result: users don't understand the Internet's performance and reliability
IP Performance Metrics: Objective
- enable users and service providers (at all
levels) to have an accurate common understanding
of the performance and reliability of paths
through the Internet.
Example Internet Topology
IETF IPPM Effort
- BOF Apr-95 at Danvers
- Within the Operational Requirements / Benchmarking Methodology WG
- Initial meeting Jul-95 at Stockholm
- Framework document Jun-96
- Definitions of specific metrics Dec-96
Jun-96 Framework Document
- Importance of careful definition
- Good properties related to measurement methodology
- Avoidance of bias and of artificial performance goals
- Relationship to dynamics between users and providers
Terminology for Paths and Clouds
- host, link, and router
- path: < h0, l1, h1, ..., ln, hn >
- subpath
- cloud: graph of routers connected by links
- exchange: links that connect clouds
- cloud subpath: < hi, li+1, ..., lj, hj >
- path digest: < h0, e1, C1, ..., Cn-1, en, hn >
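The alternating host/link structure above can be sketched as a small data structure. This is an illustrative model only; the class and function names are not from any IPPM document.

```python
# Hypothetical sketch of the path and subpath notions above.
from dataclasses import dataclass

@dataclass
class Host:
    name: str

@dataclass
class Link:
    name: str

# A path alternates hosts and links: <h0, l1, h1, ..., ln, hn>.
# Hosts sit at even indices, links at odd indices.
path = [Host("h0"), Link("l1"), Host("h1"), Link("l2"), Host("h2")]

def subpath(path, i, j):
    """Return the subpath <hi, li+1, ..., lj, hj> from host i to host j."""
    return path[2 * i : 2 * j + 1]

print([e.name for e in subpath(path, 0, 1)])  # → ['h0', 'l1', 'h1']
```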
Three Fundamental Concepts
- Metric: a property of an Internet component, carefully defined and quantified using standard units.
- Measurement methodology: a systematized way to estimate a metric. There can be multiple methodologies for a metric.
- Measurements: the results of applying a methodology. In general, a measurement has uncertainties or errors.
Metrics and the Analytical Framework
- The Internet has a rich analytical framework (A-frame)
- There are advantages to any notions described using the framework:
  - can leverage A-frame results
  - have some hope of generality and scaling
- We'll specify metrics in A-frame terms when possible
Such metrics are called analytically-specified metrics
- Examples:
  - Propagation time of a link
  - Bandwidth of a link
  - Minimum bandwidth along a path
- The introduction of an analytical metric will often require the refinement of A-frame concepts
- These refinements form a hierarchy
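The third example above follows directly from the A-frame: a path can carry no more than its slowest link. A minimal sketch, with made-up link figures:

```python
# Illustrative computation of the analytically-specified metric
# "minimum bandwidth along a path". Bandwidth figures are invented
# for the example, not measured values.
link_bandwidth_bps = {
    "l1": 45_000_000,   # T3
    "l2": 1_544_000,    # T1 -- the bottleneck link
    "l3": 10_000_000,   # Ethernet
}

def path_min_bandwidth(links):
    """A path's bandwidth is bounded by its slowest link."""
    return min(link_bandwidth_bps[l] for l in links)

print(path_min_bandwidth(["l1", "l2", "l3"]))  # → 1544000
```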
Empirically-specified Metrics
- Some key notions do not fit into the A-frame
  - Example: flow capacity along a path while observing RFC-1122 congestion control
- The only realistic way to specify such a metric is by specifying a measurement methodology (cf. treno)
Measurement Strategies
- Active vs Passive measurements
- Hard vs Soft degree of Cooperation
- Single Metric with multiple methodologies
Two Forms of Composition
- Spatial composition
  - e.g., a delay metric applied to a router vs a path vs a subpath
- Temporal composition
  - e.g., a delay metric at time T compared to delay at times near T
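Spatial composition can be made concrete under an idealized additivity assumption: a path's delay is the sum of its subpaths' delays. A sketch with illustrative numbers:

```python
# Sketch of spatial composition of a delay metric. Assumes (idealized)
# that subpath delays add; the per-subpath figures are invented.
def compose_path_delay(subpath_delays_s):
    """Spatially compose per-subpath delays into a path-level delay."""
    return sum(subpath_delays_s)

# e.g., campus -> backbone -> campus legs of one path
print(round(compose_path_delay([0.012, 0.030, 0.005]), 3))  # → 0.047
```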
Progress at the San Jose IETF (Dec-96)
- One-way Delay
- Flow capacity
- Availability
- Revisions to the Framework Document
Framework Revisions
- Clock Issues
- Synchronization, Accuracy, and Resolution
- Singletons, Samples, and Statistics
- Generic Type P Packets
Motivation of One-way Delay
- Minimum of delay: transmission/propagation delay
- Variation of delay: queueing delay
- Large delay makes sustaining high-bandwidth flows harder
- Erratic variation in delay makes real-time apps harder
Singleton: Type-P-One-way-Delay
- (src, dst, T, path)
- either undefined or a duration
- undefined taken as infinite
- duration in seconds
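The singleton's value rule can be sketched directly: the delay is a duration in seconds, or undefined (treated as infinite) when the packet never arrives. This assumes synchronized clocks at src and dst; the function name is illustrative.

```python
# Sketch of a Type-P-One-way-Delay singleton value.
import math

def one_way_delay(send_time_s, recv_time_s):
    """recv_time_s is None when the packet is never observed at dst."""
    if recv_time_s is None:
        return math.inf              # undefined, taken as infinite
    return recv_time_s - send_time_s # duration in seconds

print(round(one_way_delay(100.000, 100.037), 3))  # → 0.037
print(one_way_delay(100.000, None))               # → inf
```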
Sample: Type-P-One-way-Delay-Stream
- (src, dst, first-hop, T0, Tf, lambda)
- sequence of <T, delay> pairs
- Poisson process (rate lambda) used to generate the T values
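The Poisson sampling above amounts to drawing exponentially distributed gaps between send times over [T0, Tf). A hedged sketch; the function name and parameters are illustrative:

```python
# Sketch of Poisson sampling of test-packet send times: a Poisson
# process of rate lam has exponentially distributed inter-arrival gaps.
import random

def poisson_send_times(t0, tf, lam, seed=1):
    """Generate send times T in [t0, tf) via a Poisson process."""
    rng = random.Random(seed)  # seeded for reproducibility
    times, t = [], t0
    while True:
        t += rng.expovariate(lam)  # exponential gap, mean 1/lam
        if t >= tf:
            return times
        times.append(t)

times = poisson_send_times(0.0, 60.0, lam=0.5)  # ~0.5 packets/second
print(len(times))
```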
Statistics
- Minimum of a Sample
- Percentile
- Median
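These sample statistics are straightforward to compute; the sketch below uses the nearest-rank percentile convention, which is one common choice (the delay values are invented).

```python
# Illustrative min / percentile / median over a sample of one-way delays.
import math

delays_s = [0.041, 0.035, 0.052, 0.038, 0.120, 0.036, 0.044]

def percentile(sample, p):
    """Nearest-rank p-th percentile, 0 < p <= 100."""
    s = sorted(sample)
    rank = math.ceil(p / 100 * len(s))
    return s[rank - 1]

print(min(delays_s))             # → 0.035  (minimum of the sample)
print(percentile(delays_s, 50))  # → 0.041  (median)
print(percentile(delays_s, 90))  # → 0.12
```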
Measurement Technologies
- Active Tests
- Passive Tests
- Advantages/disadvantages of each
- Policy implications of each
Active Tests
- Both at edge and at/near exchange points
- Extra traffic
- No eavesdropping
- Delay, packet loss, throughput across specific
clouds
Example of Active Tests
Active Test Results
Passive Tests
- Only at the edge -- privacy caution
- No extra traffic
- Throughput
- Also, non-IPPM measurements on the nature of use
26Example of Passive Tests
27Passive Test Results
The Surveyor Infrastructure
- Collaborating organizations
- Advanced Network Services
- (23) Common Solutions Group universities
- Active tests
- ongoing tests of delay, packet loss
- occasional tests of flow capacity
- Passive tests
- some tests to characterize Internet use
The Surveyor Infrastructure
- Key ideas
- database / web server to receive results
- use of GPS to synchronize clocks
- need for measurement machines both at campuses
and at/near exchange points
Database/Web Server
- Measurement machines upload their results to the database server
- These results are stored so that queries can be made at a later time
- Users interrogate the server using ordinary web browsers
- These interrogations are analyzed using the database, and the results are returned to the user via the Web
Surveyor Infrastructure
Ongoing Tests
Uploading Results
Reporting Analysis
Policy Implications for Asia / Pacific
- Better understanding of cost vs performance tradeoffs
- Cooperation among users / providers
- Sharing of test results
- Value of cooperating in sharing results even of imperfect tests
- Value of supporting very accurate NTP at exchanges and campuses