Title: Network Performance Analysis Strategies
1 Network Performance Analysis Strategies
- Dr. Shamala Subramaniam
- Dept. of Communication Technology and Networks
- Faculty of Computer Science and IT, UPM
- e-mail: shamala_at_fsktm.upm.edu.my
2 Overview of Performance Evaluation
- Intro and Objective
- The Art of Performance Evaluation
- Professional Organizations, Journals, and Conferences
- Performance Projects
- Common Mistakes and How to Avoid Them
- Selection of Techniques and Metrics
3 Why?
4 (No Transcript)
5 Intro and Objective
- Performance is a key criterion in the design, procurement, and use of computer systems.
- Performance vs. Cost
- Thus, computer systems professionals need a basic knowledge of performance evaluation techniques.
6 Intro and Objective
- Objective:
- Select appropriate evaluation techniques, performance metrics, and workloads for a system.
- Conduct performance measurements correctly.
- Use proper statistical techniques to compare several alternatives.
- Design measurement and simulation experiments to provide the most information with the least effort.
- Perform simulations correctly.
7 Modeling
- "Model" is used to describe almost any attempt to specify a system under study.
- Everyday connotation: a physical replica of a system.
- Scientific: a model is a name given to a portrayal of the interrelationships of the parts of a system in precise terms. The portrayal can be interpreted in terms of some system attributes, is sufficiently detailed to permit study under a variety of circumstances, and enables the system's future behavior to be predicted.
8 Usage of Models
- Performance evaluation of a transaction processing system (Salsburg, 1988)
- A study of the generation and control of forest fires in California (Parks, 1964)
- The determination of the optimum labor along a continuous assembly line in a factory (Killbridge and Webster, 1966)
- An analysis of ship boilers (Tysso, 1979)
9 A Taxonomy of Models
- Predictability
- Deterministic: all data and relationships are given with certainty. Example: the efficiency of an engine based on temperature, load, and fuel consumption.
- Stochastic: at least some of the variables involved have values that vary in an unpredictable or random fashion. Example: financial planning.
- Solvability
- Analytical: for simple systems.
- Simulation: when the system is complicated or an appropriate equation cannot be found.
10 A Taxonomy of Models
- Variability
- Whether time is incorporated into the model.
- Static: a specific point in time (e.g., a financial model).
- Dynamic: any time value (e.g., a food cycle).
- Granularity
- The granularity of the model's treatment of time.
- Discrete: some events (e.g., packet arrivals) are clearly distinguishable.
- Continuous: it is impossible to distinguish between specific events taking place (e.g., the trajectory of a missile).
11 The Art of Performance Modeling
- There are three ways to compare the performance of two systems.
- Table 1.1
  System    Workload 1    Workload 2    Average
  A         20            10            15
  B         10            20            15
12 The Art of Performance Modeling (cont.)
- Table 1.2: System B as the Base
  System    Workload 1    Workload 2    Average
  A         2             0.5           1.25
  B         1             1             1
13 The Art of Performance Modeling (cont.)
- Table 1.3: System A as the Base
  System    Workload 1    Workload 2    Average
  A         1             1             1
  B         2             0.5           1.25
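The three tables above illustrate the classic "ratio game": the raw averages of the two systems are equal, yet normalizing the results to either system as the base makes the other system appear 25 percent better. A minimal Python sketch of this effect, using the hypothetical workload numbers from Table 1.1:

```python
# Raw measurements from Table 1.1 (arbitrary units; higher is better here).
results = {"A": [20, 10], "B": [10, 20]}

def average(values):
    return sum(values) / len(values)

def normalized_averages(results, base):
    """Average of per-workload ratios, taking `base` as the reference system."""
    return {
        system: average([v / b for v, b in zip(values, results[base])])
        for system, values in results.items()
    }

print("Raw averages:", {s: average(v) for s, v in results.items()})  # A: 15, B: 15
print("Base = B:    ", normalized_averages(results, "B"))            # A: 1.25, B: 1.0
print("Base = A:    ", normalized_averages(results, "A"))            # A: 1.0,  B: 1.25
```

The lesson is that the choice of base system, not the systems themselves, determines which one "wins" once raw averages are replaced by averages of ratios.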
14 The Art of Performance Modeling (cont.)
15 Performance Projects
"I hear and forget. I see and I remember. I do and I understand." (Chinese Proverb)
16 Performance Projects
- The best way to learn a subject is to apply the concepts to a real system.
- The project should encompass:
- Select a computer sub-system: e.g., network congestion control, security, a database, or an operating system.
- Perform some measurements.
- Analyze the collected data.
- Simulate AND analytically model the subsystem.
- Predict its performance.
- Validate the model.
17 Professional Organizations, Journals, and Conferences
- ACM SIGMETRICS: Association for Computing Machinery.
- IEEE Computer Society: The Institute of Electrical and Electronics Engineers (IEEE) Computer Society.
- IASTED: The International Association of Science and Technology for Development.
18 Common Mistakes and How to Avoid Them
- No Goals
- Biased Goals
- Unsystematic Approach
- Analysis Without Understanding the Problem
- Incorrect Performance Metrics
- Unrepresentative Workloads
- Wrong Evaluation Techniques
- Overlooking Important Parameters
- Ignoring Significant Factors
19 Common Mistakes and How to Avoid Them
- Inappropriate Experimental Design
- Inappropriate Level of Detail
- No Analysis
- Erroneous Analysis
- No Sensitivity Analysis
- Ignoring Errors in Input
- Improper Treatment of Outliers
- Assuming No Change in the Future
- Ignoring Variability
20 Common Mistakes and How to Avoid Them
- Too Complex Analysis
- Improper Presentation of Results
- Ignoring Social Aspects
- Omitting Assumptions and Limitations.
21 A Systematic Approach
- State Goals and Define the System
- List Services and Outcomes
- Select Metrics
- List Parameters
- Select Factors to Study
- Select Evaluation Technique
- Select Workload
- Design Experiments
- Analyze and Interpret Data
- Present Results
22 Selection of Techniques and Metrics
23 Overview
- Key steps in the performance evaluation process:
- Selecting an evaluation technique
- Selecting a metric
- Performance metrics
- The problem of specifying performance requirements
24 Selecting an evaluation technique
- Three techniques:
- Analytical modeling
- Simulation
- Measurement
25 Criteria for selection: Life-cycle stage
- Measurements are possible only if something similar to the proposed system already exists.
- For a new concept, analytical modeling and simulation are the only techniques from which to choose.
- It is more convincing if the analytical modeling or simulation is based on previous measurements.
26 Criteria for selection: Time required
- In most situations, results are required yesterday; then analytical modeling is probably the only choice.
- Simulations take a long time.
- Measurements take longer than analytical modeling.
- If anything can go wrong, it will, and with measurements it usually does.
- So the time required for measurement varies the most.
27 Criteria for selection: Availability of tools
- Tools include modeling skills, simulation languages, and measurement instruments.
- Many performance analysts are skilled in modeling; they would not touch a real system at any cost.
- Others are not as proficient in queuing theory and prefer to measure or simulate.
- Lack of knowledge of simulation languages and techniques keeps many analysts away from simulation.
28 Criteria for selection: Level of accuracy
- Analytical modeling requires many simplifications and assumptions.
- Simulations can incorporate more detail and require fewer assumptions than analytical modeling, and are often closer to reality.
29 Criteria for selection: Level of accuracy (cont.)
- Measurements may not give accurate results simply because many of the environmental parameters, such as system configuration, type of workload, and time of measurement, may be unique to the experiment.
- So the accuracy of results can vary from very high to none with measurement techniques.
- Note that level of accuracy and correctness of conclusions are not identical.
30 Criteria for selection: Trade-off evaluation
- The goal of a performance study is to compare different alternatives or to find the optimal parameter value.
- Analytical models generally provide the best insight into the effects of various parameters and their interactions.
31 Criteria for selection: Trade-off evaluation
- With simulations, it is possible to search the space of parameter values for the optimal combination.
- Measurement is the least desirable technique in this respect.
32 Criteria for selection: Cost
- Measurement requires real equipment, instruments, and time. It is the most costly of the three techniques.
- Cost is often the reason for simulating complex systems.
- Analytical modeling requires only paper and pencil, and is the cheapest technique.
- The technique can be decided based on the cost allocated to the project.
33 Criteria for selection: Saleability
- Convincing others is important.
- It is easy to convince others with real measurements.
- Most people are skeptical of analytical results because they do not understand the techniques.
34 (No Transcript)
35 Criteria for selection: Saleability (cont.)
- So validation with another technique is important.
- Do not trust the results of a simulation model until they have been validated by analytical modeling or measurements.
- Do not trust the results of an analytical model until they have been validated by a simulation model or measurements.
- Do not trust the results of a measurement until they have been validated by simulation or analytical modeling.
36 Selecting an evaluation technique: Summary
37 Selecting Performance Metrics
38 Selecting performance metrics
- For each performance study, a set of performance criteria or metrics must be chosen.
- We can prepare this set by listing the services offered by the system.
- The outcomes can be classified into three categories:
- the system may perform the service correctly,
- perform it incorrectly,
- or refuse to perform the service.
39 Selecting performance metrics (cont.)
- Example: a gateway in a computer network offers the service of forwarding packets to specified destinations on heterogeneous networks. When presented with a packet:
- it may forward the packet correctly,
- it may forward it to the wrong destination,
- or it may be down.
- Similarly, a database may answer a query correctly, answer it incorrectly, or be down.
40 Selecting metrics: correct response
- If the system performs the service correctly, its performance is measured by:
- the time taken to perform the service,
- the rate at which the service is performed,
- and the resources consumed while performing the service.
- These three metrics relate to time, rate, and resource for successful performance and are also called responsiveness, productivity, and utilization metrics.
41 Selecting metrics: correct response
- For example, the responsiveness of a network gateway is measured by its response time: the interval between the arrival of a packet and its successful delivery.
- The gateway's productivity is measured by its throughput: the number of packets forwarded per unit time.
- The utilization gives an indication of the percentage of time the resources of the gateway are busy at a given load level.
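As a minimal sketch of how these three metrics could be computed, assume a hypothetical trace of (arrival, delivery) timestamps collected at the gateway over a one-second observation window; the variable names and figures below are illustrative only:

```python
# Hypothetical packet trace: (arrival_time, delivery_time) pairs in seconds.
trace = [(0.00, 0.02), (0.10, 0.13), (0.20, 0.21), (0.35, 0.40)]
observation_period = 1.0  # seconds the gateway was observed

# Responsiveness: mean response time = mean(delivery - arrival).
response_times = [done - arrived for arrived, done in trace]
mean_response = sum(response_times) / len(response_times)

# Productivity: throughput = packets forwarded per unit time.
throughput = len(trace) / observation_period

# Utilization: fraction of time the gateway was busy (crudely assuming it is
# busy for the whole interval between a packet's arrival and its delivery).
utilization = sum(response_times) / observation_period

print(f"mean response time = {mean_response:.3f} s")
print(f"throughput         = {throughput:.1f} packets/s")
print(f"utilization        = {utilization:.1%}")
```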
42 Selecting metrics: incorrect response
- If the system performs the service incorrectly, its performance is measured by:
- classifying the errors or packet losses,
- and determining the probability of each class of errors.
- For example, in the case of a gateway:
- we may want to find the probability of single-bit errors, two-bit errors, and so on;
- we may also want to determine the probability of a packet being partially delivered.
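A short sketch of turning such a classification into probabilities; the outcome categories and counts here are made up purely for illustration:

```python
# Hypothetical counts of observed outcomes at the gateway.
outcomes = {
    "delivered_ok": 9500,
    "single_bit_error": 30,
    "two_bit_error": 5,
    "partially_delivered": 15,
    "wrong_destination": 2,
}

total = sum(outcomes.values())
for outcome, count in outcomes.items():
    # Probability of each class = its count / total packets observed.
    print(f"P({outcome}) = {count / total:.4f}")
```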
43 The possible outcomes of a service request
44 Metrics
45 Metrics
- Most systems offer more than one service, and the number of metrics grows proportionately.
- For many metrics, the mean value is important.
- Variability is also important.
- For computer systems shared by many users, two types of metrics need to be considered: individual and global.
- Individual metrics reflect the utility of each user.
- Global metrics reflect the system-wide utility.
- Resource utilization, reliability, and availability are global metrics.
46 Metrics
- Normally, the decision that optimizes an individual metric is different from the one that optimizes the system metric.
- For example, in computer networks performance is measured by throughput (packets per second). If the total number of packets allowed in the system is constant, increasing the number of packets from one source may increase its throughput, but it may also decrease someone else's throughput.
- So both the system-wide throughput and its distribution among individual users must be studied.
47 Selection of Metrics
- Completeness: the set of metrics included in the study should be complete.
- All possible outcomes should be reflected in the set of performance metrics.
- For example, in a study comparing different protocols on a computer network, one protocol was chosen as the best until it was found that the "best" protocol led to the highest number of disconnections.
- The probability of disconnection was then added to the set of performance metrics.
48 Commonly used performance metrics: response time
- Response time is defined as the interval between a user's request and the system's response.
- This definition is simplistic, since the requests as well as the responses are not instantaneous.
49 Throughput
- Throughput is defined as the rate (requests per unit of time) at which requests can be serviced by the system.
- For networks, throughput is measured in packets per second or bits per second.
50 Throughput
- Throughput of the system initially increases as the load on the system increases.
- After a certain load, the throughput stops increasing;
- in most cases it then starts decreasing.
51 Efficiency
- The ratio of the maximum achievable throughput (usable capacity) to the nominal capacity is called the efficiency.
- For example, if the maximum throughput of a 100-Mbps LAN is only 85 Mbps, then its efficiency is 85 percent.
- The ratio of the performance of an n-processor system to that of a one-processor system is its efficiency.
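A one-line check of the LAN example above, expressed as code:

```python
nominal_capacity = 100.0  # Mbps: rated capacity of the LAN
usable_capacity = 85.0    # Mbps: maximum throughput actually achieved

efficiency = usable_capacity / nominal_capacity
print(f"efficiency = {efficiency:.0%}")  # 85%
```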
52 Utilization
- The utilization of a resource is measured as the fraction of time the resource is busy servicing requests.
- It is the ratio of busy time to total elapsed time over a given period.
- The period during which a resource is not being used is called the idle time.
- System managers are often interested in balancing the load.
Reliability
- The reliability of a system is measured by the probability of errors or by the mean time between errors.
53 Availability
- The availability of a system is defined as the fraction of time the system is available to service users' requests.
- The time during which the system is not available is called down time.
- The time during which the system is available is called up time.
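A small sketch of the reliability and availability calculations described above; the uptime, downtime, and error figures are hypothetical:

```python
# Hypothetical operating log for one month, in hours.
up_time = 715.0
down_time = 5.0
error_count = 4

# Availability: fraction of total time the system could service requests.
availability = up_time / (up_time + down_time)

# Reliability (one common estimate): mean time between errors.
mean_time_between_errors = up_time / error_count

print(f"availability             = {availability:.2%}")
print(f"mean time between errors = {mean_time_between_errors:.1f} hours")
```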
54 Cost/Performance ratio
- The cost/performance ratio is commonly used as a metric for comparing two or more systems.
- Cost includes hardware/software licensing, installation, and maintenance over a given number of years.
- Performance is measured in terms of throughput under a given response time constraint.
- For example, two transaction processing systems may be compared in terms of dollars per TPS.
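For instance, the dollars-per-TPS comparison might look like the sketch below; the costs and throughput figures are invented for illustration:

```python
# Hypothetical five-year cost and measured throughput for two systems.
systems = {
    "System 1": {"cost_usd": 500_000, "tps": 100},
    "System 2": {"cost_usd": 350_000, "tps": 60},
}

for name, spec in systems.items():
    # Lower dollars per TPS means better cost/performance.
    print(f"{name}: {spec['cost_usd'] / spec['tps']:.0f} $/TPS")
```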
55 Utility classification of performance metrics
- Higher is Better (HB): system users or system managers prefer higher values of such metrics. System throughput is an example of an HB metric.
- Lower is Better (LB): system users or system managers prefer lower values of such metrics. System response time is an example of an LB metric.
- Nominal is Best (NB): both high and low values are undesirable.
56 Types of metrics
57 Setting of performance requirements
- The main problem faced by a performance analyst is to specify the performance requirements for a system to be acquired or designed.
- General method:
- the performance requirements are specified with the help of requirement statements.
58 Setting of performance requirements (cont.)
- Consider the following requirement statements:
- "The system should be both processing and memory efficient. It should not create excessive overhead."
- "There should be an extremely low probability that the network will duplicate a packet, deliver a packet to the wrong destination, or change the data in a packet."
59 Setting of performance requirements (cont.)
- What all these requirement statements lack can be summarized in one word: SMART.
- That is, requirements must be Specific, Measurable, Acceptable, Realizable, and Thorough.
- One should not use words such as "low probability" and "rare".
- Measurability requires that one can verify that a given system meets the requirements.