Network Performance Analysis Strategies


1
Network Performance Analysis Strategies
  • Dr. Shamala Subramaniam
  • Dept. of Communication Technology and Networks
  • Faculty of Computer Science and IT, UPM
  • e-mail: shamala@fsktm.upm.edu.my

2
Overview of Performance Evaluation
  • Intro & Objective
  • The Art of Performance Evaluation
  • Professional Organizations, Journals, and
    Conferences
  • Performance Projects
  • Common Mistakes and How to Avoid Them
  • Selection of Techniques and Metrics

3
Why?
4
(No Transcript)
5
Intro & Objective
  • Performance is a key criterion in the design,
    procurement, and use of computer systems.
  • Performance vs. cost
  • Thus, computer systems professionals need the
    basic knowledge of performance evaluation
    techniques.

6
Intro & Objective
  • Objective
  • Select appropriate evaluation techniques,
    performance metrics and workloads for a system.
  • Conduct performance measurements correctly.
  • Use proper statistical techniques to compare
    several alternatives.
  • Design measurement and simulation experiments to
    provide the most information with least effort.
  • Perform simulations correctly.

7
Modeling
  • The term "model" is used to describe almost any
    attempt to specify a system under study.
  • Everyday connotation: a physical replica of a
    system.
  • Scientific: a model is a portrayal of the
    interrelationships of the parts of a system in
    precise terms. The portrayal can be interpreted
    in terms of some system attributes and is
    sufficiently detailed to permit study under a
    variety of circumstances and to enable the
    system's future behavior to be predicted.

8
Usage of Models
  • Performance evaluation of a transaction
    processing system (Salsburg, 1988)
  • A study of the generation and control of forest
    fires in California (Parks, 1964)
  • The determination of the optimum labor along a
    continuous assembly line in a factory (Killbridge
    and Webster, 1966)
  • An analysis of ship boilers (Tysso, 1979)

9
A Taxonomy of Models
  • Predictability
  • Deterministic: all data and relationships are
    given with certainty. Example: the efficiency of
    an engine based on temperature, load, and fuel
    consumption.
  • Stochastic: at least some of the variables
    involved have values that vary in an
    unpredictable or random fashion. Example:
    financial planning.
  • Solvability
  • Analytical: the model is simple enough to be
    solved directly.
  • Simulation: the model is complicated or an
    appropriate equation cannot be found.

10
A Taxonomy of Models
  • Variability
  • Whether time is incorporated into the model.
  • Static: a specific point in time (e.g., a
    financial model).
  • Dynamic: any time value (e.g., a food cycle).
  • Granularity
  • The granularity of the model's treatment of time.
  • Discrete: events are clearly distinguishable
    (e.g., packet arrivals).
  • Continuous: it is impossible to distinguish
    between specific events taking place (e.g., the
    trajectory of a missile).

11
The Art of Performance Modeling
  • There are three ways to compare the performance
    of two systems, shown in Tables 1.1 to 1.3.
  • Table 1.1
    System    Workload 1    Workload 2    Average
    A         20            10            15
    B         10            20            15

12
The Art of Performance Modeling (cont.)
  • Table 1.2: System B as the Base
    System    Workload 1    Workload 2    Average
    A         2             0.5           1.25
    B         1             1             1

13
The Art of Performance Modeling (cont.)
  • Table 1.3: System A as the Base
    System    Workload 1    Workload 2    Average
    A         1             1             1
    B         0.5           2             1.25

14
The Art of Performance Modeling (cont.)
  • Ratio Game: depending on which system is chosen
    as the base, either system can be made to appear
    better (see the sketch below).
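
A minimal Python sketch of the ratio game (the helper below is illustrative, not part of the original slides): normalizing the two measurements of Table 1.1 to different base systems makes the other system look better on average.

measurements = {"A": {"W1": 20.0, "W2": 10.0},
                "B": {"W1": 10.0, "W2": 20.0}}

def average_ratio(base):
    """Average of each system's values after normalizing to the chosen base system."""
    return {name: sum(vals[w] / measurements[base][w] for w in vals) / len(vals)
            for name, vals in measurements.items()}

print(average_ratio("B"))  # {'A': 1.25, 'B': 1.0}  -> A appears better (Table 1.2)
print(average_ratio("A"))  # {'A': 1.0, 'B': 1.25}  -> B appears better (Table 1.3)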

15
Performance Projects
I hear and forget. I see and I remember. I do and
I understand Chinese Proverb
16
Performance Projects
  • The best way to learn a subject is to apply the
    concepts to a real system.
  • The project should encompass:
  • Select a computer subsystem: e.g., network
    congestion control, security, a database, or an
    operating system.
  • Perform some measurements.
  • Analyze the collected data.
  • Simulate AND analytically model the subsystem.
  • Predict its performance.
  • Validate the model.

17
Professional Organizations, Journals and
Conferences
  • ACM SIGMETRICS: the Association for Computing
    Machinery's special interest group on performance
    evaluation.
  • IEEE Computer Society: the Institute of
    Electrical and Electronics Engineers (IEEE)
    Computer Society.
  • IASTED: the International Association of Science
    and Technology for Development.

18
Common Mistakes and How to Avoid Them
  • No Goals
  • Biased Goals
  • Unsystematic Approach
  • Analysis Without Understanding the Problem
  • Incorrect Performance Metrics
  • Unrepresentative Workloads
  • Wrong Evaluation Techniques
  • Overlooking Important Parameters
  • Ignoring Significant Factors

19
Common Mistakes and How to Avoid Them
  • Inappropriate Experimental Design
  • Inappropriate Level of Detail
  • No Analysis
  • Erroneous Analysis
  • No Sensitivity Analysis
  • Ignoring Errors in Input
  • Improper Treatment of Outliers
  • Assuming No Change in the Future
  • Ignoring Variability

20
Common Mistakes and How to Avoid Them
  • Too Complex Analysis
  • Improper Presentation of Results
  • Ignoring Social Aspects
  • Omitting Assumptions and Limitations.

21
A Systematic Approach
  • State Goals and Define the System
  • List Services and Outcomes
  • Select Metrics
  • List Parameters
  • Select Factors to Study
  • Select Evaluation Technique
  • Select Workload
  • Design Experiments
  • Analyze and Interpret Data
  • Present Results

22
Selection of Techniques and Metrics
23
Overview
  • Key steps in a performance evaluation
  • Selecting evaluation technique
  • Selecting a metric
  • Performance metrics
  • Problem of specifying performance requirements

24
Selecting an evaluation technique
  • Three techniques
  • Analytical modeling
  • Simulation
  • Measurement

25
Criteria for selection Life-cycle stage
  • Measurements are possible only if something
    similar to the proposed system already exists.
  • For a new concept, analytical modeling and
    simulation are the only techniques from which to
    choose.
  • It is more convincing if analytical modeling and
    simulation are based on previous measurements.

26
Criteria for selection Time required
  • In most situations, results are required
    yesterday; in that case, analytical modeling is
    probably the only choice.
  • Simulations take a long time.
  • Measurements generally take longer than
    analytical modeling.
  • If anything can go wrong, measurement will.
  • So the time required for measurement varies
    widely.

27
Criteria for selection Availability of tools
  • Tools include modeling skills, simulation
    languages, and measurement instruments.
  • Many performance analysts are skilled in
    modeling; they would not touch a real system at
    any cost.
  • Others are not as proficient in queuing theory
    and prefer to measure or simulate.
  • Lack of knowledge of the simulation languages
    and techniques keeps many analysts away from
    simulations.

28
Criteria for selection Level of accuracy
  • Analytical modeling requires many simplifications
    and assumptions, so its accuracy is generally
    low.
  • Simulations can incorporate more details and
    require fewer assumptions than analytical
    modeling, so they are often closer to reality.

29
Criteria for selection Level of accuracy (cont.)
  • Measurements may not give accurate results simply
    because many environmental parameters, such as
    the system configuration, type of workload, and
    time of measurement, may be unique to the
    experiment.
  • So the accuracy of results can vary from very
    high to none with measurement techniques.
  • Note that level of accuracy and correctness of
    conclusions are not identical.

30
Criteria for selection Trade-off evaluation
  • The goal of a performance study is either to
    compare different alternatives or to find the
    optimal parameter value.
  • Analytical models generally provide the best
    insights into the effects of various parameters
    and their interactions.

31
Criteria for selection Trade-off evaluation
  • With simulations it is possible to search the
    space of parameter values for the optimal
    combination.
  • Measurement is the least desirable technique in
    this respect.

32
Criteria for selection Cost
  • Measurement requires real equipment, instruments,
    and time; it is the most costly of the three
    techniques.
  • Cost is often the reason complex systems are
    simulated rather than measured.
  • Analytical modeling requires only paper and
    pencil and is the cheapest technique.
  • The choice of technique can be decided based on
    the cost allocated to the project.

33
Criteria for selection Saleability
  • Convincing others is important.
  • It is easiest to convince others with real
    measurements.
  • Most people are skeptical of analytical results,
    because they do not understand the techniques.

34

35
Criteria for selection Saleability (cont.)
  • So validation with another technique is
    important.
  • Do not trust the results of a simulation model
    until they have been validated by analytical
    modeling or measurements.
  • Do not trust the results of an analytical model
    until they have been validated by a simulation
    model or measurements.
  • Do not trust the results of a measurement until
    they have been validated by simulation or
    analytical modeling.

36
Selecting an evaluation technique: Summary
37
Selecting Performance Metrics
38
Selecting performance metrics
  • For each performance study, a set of performance
    criteria or metrics must be chosen.
  • This set can be prepared by listing the services
    offered by the system.
  • The outcomes can be classified into three
    categories:
  • The system may perform the service correctly,
  • incorrectly, or
  • refuse to perform the service.

39
Selecting performance metrics (cont.)
  • Example: a gateway in a computer network offers
    the service of forwarding packets to specified
    destinations on heterogeneous networks. When
    presented with a packet:
  • It may forward the packet correctly,
  • it may forward it to the wrong destination, or
  • it may be down.
  • Similarly, a database may answer a query
    correctly, incorrectly, or may be down.

40
Selecting metrics correct response
  • If the system performs the service correctly, its
    performance is measured by
  • the time taken to perform the service,
  • the rate at which the service is performed,
  • and the resources consumed while performing the
    service.
  • These three metrics, related to time, rate, and
    resources for successful performance, are also
    called responsiveness, productivity, and
    utilization metrics.

41
Selecting metrics correct response
  • For example, the responsiveness of a network
    gateway is measured by its response time: the
    time interval between the arrival of a packet and
    its successful delivery.
  • The gateway's productivity is measured by its
    throughput: the number of packets forwarded per
    unit time.
  • The utilization indicates the percentage of time
    the resources of the gateway are busy at a given
    load level.
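
As a concrete illustration of these three metrics, here is a small Python sketch with invented packet timestamps (none of these numbers come from the slides):

# Hypothetical per-packet records: (arrival_time, delivery_time, service_time) in seconds.
packets = [(0.00, 0.05, 0.02),
           (0.10, 0.18, 0.03),
           (0.20, 0.24, 0.02),
           (0.35, 0.50, 0.04)]
observation_period = 1.0  # length of the measurement interval, in seconds

# Responsiveness: mean response time (delivery time minus arrival time).
response_times = [delivered - arrived for arrived, delivered, _ in packets]
mean_response_time = sum(response_times) / len(response_times)

# Productivity: throughput, i.e. packets forwarded per unit time.
throughput = len(packets) / observation_period

# Utilization: fraction of the period the gateway's resources were busy.
utilization = sum(service for _, _, service in packets) / observation_period

print(f"mean response time: {mean_response_time:.3f} s")   # 0.080 s
print(f"throughput:         {throughput:.1f} packets/s")   # 4.0 packets/s
print(f"utilization:        {utilization:.0%}")            # 11%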

42
Selecting metrics incorrect response
  • If the system performs the service incorrectly,
    its performance is measured by
  • classifying the errors / packet losses, and
  • determining the probability of each class of
    errors.
  • For example, in the case of the gateway:
  • We may want to find the probability of single-bit
    errors, two-bit errors, and so on.
  • We may also want to determine the probability of
    a packet being partially delivered.
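
A minimal sketch of this idea (the outcome log below is invented for illustration): count how often each error class occurs in a trace and report the estimated probability of each class.

from collections import Counter

# Hypothetical outcome recorded for each packet presented to the gateway.
outcomes = ["ok", "ok", "single-bit error", "ok", "partial delivery",
            "ok", "two-bit error", "ok", "ok", "single-bit error"]

counts = Counter(outcomes)
total = len(outcomes)
for outcome, count in counts.items():
    if outcome != "ok":  # report only the error classes
        print(f"P({outcome}) = {count / total:.2f}")
# P(single-bit error) = 0.20, P(partial delivery) = 0.10, P(two-bit error) = 0.10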

43
The possible outcomes of a service request
44
Metrics
45
Metrics
  • Most systems offer more than one service, and the
    number of metrics grows proportionately.
  • For many metrics the mean value is important.
  • Variability is also important.
  • For computer systems shared by many users, two
    types of metrics need to be considered:
    individual and global.
  • Individual metrics reflect the utility of each
    user.
  • Global metrics reflect the system-wide utility.
  • Resource utilization, reliability, and
    availability are global metrics.

46
Metrics
  • Normally, the decision that optimizes an
    individual metric is different from the one that
    optimizes the system-wide metric.
  • For example, in computer networks the performance
    is measured by throughput (packets per second).
    If the number of packets allowed in the system is
    constant, increasing the number of packets from
    one source may increase its throughput, but it
    may also decrease someone else's throughput.
  • So both the system-wide throughput and its
    distribution among individual users must be
    studied.
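
A toy sketch of this trade-off (the capacity and shares are invented): when the total number of packets the network carries is fixed, one source increasing its share raises its own throughput and lowers the other's, while the system-wide throughput stays the same.

capacity_pps = 100  # total packets/s the network carries in this toy scenario

def per_source_throughput(share_of_source1):
    """Throughput seen by each source when source 1 takes the given share of capacity."""
    return {"source 1": share_of_source1 * capacity_pps,
            "source 2": (1 - share_of_source1) * capacity_pps}

for share in (0.5, 0.7):
    t = per_source_throughput(share)
    print(t, "system-wide:", sum(t.values()))
# The system-wide throughput is 100 packets/s in both cases,
# but its distribution between the two sources changes.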

47
Selection of Metrics
  • Completeness: the set of metrics included in the
    study should be complete.
  • All possible outcomes should be reflected in the
    set of performance metrics.
  • For example, in a study comparing different
    protocols on a computer network, one protocol was
    considered the best until it was found that it
    led to the highest number of disconnections.
  • The probability of disconnection was then added
    to the set of performance metrics.

48
Commonly used performance metrics response time
  • Response time is defined as the interval between
    a user's request and the system's response.
  • This definition is simplistic, since the requests
    as well as the responses are not instantaneous.

49
Throughput
  • Throughput is defined as the rate (requests per
    unit of time) at which the requests can be
    serviced by the system.
  • For networks, throughput is measured in packets
    per second or bits per second.

50
Throughput
  • The throughput of the system initially increases
    as the load on the system increases.
  • After a certain load, the throughput stops
    increasing.
  • In most cases, it then starts decreasing.

51
Efficiency
  • The ratio of maximum achievable throughput
    (usable capacity) to nominal capacity is called
    the efficiency.
  • For example, if the maximum throughput of a
    100-Mbps LAN is only 85 Mbps, then its efficiency
    is 85 percent.
  • The ratio of the performance of an n-processor
    system to that of a one-processor system is its
    efficiency.
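
The LAN example above as a two-line check (the figures are taken from the bullet, with the capacity assumed to be in Mbps):

nominal_capacity_mbps = 100.0  # nominal bandwidth of the LAN
max_throughput_mbps = 85.0     # maximum achievable (usable) throughput
efficiency = max_throughput_mbps / nominal_capacity_mbps
print(f"efficiency = {efficiency:.0%}")  # 85%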

52
Utilization
  • The utilization of a resource is measured as the
    fraction of time the resource is busy servicing
    requests.
  • It is the ratio of busy time to total elapsed
    time over a given period.
  • The period during which a resource is not being
    used is called its idle time.
  • System managers are often interested in balancing
    the load.
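
A minimal numeric sketch (the busy and elapsed times are invented): utilization is busy time over elapsed time, and idle time is the remainder.

busy_time_s = 42.0     # time the resource spent servicing requests
elapsed_time_s = 60.0  # total observation period
utilization = busy_time_s / elapsed_time_s
idle_time_s = elapsed_time_s - busy_time_s
print(f"utilization = {utilization:.0%}, idle time = {idle_time_s:.0f} s")  # 70%, 18 s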

Reliability
  • The reliability of the system is measured by the
    probability of errors or by the mean time between
    errors.

53
Availability
  • The availability of a system is defined as the
    fraction of time the system is available to
    service users' requests.
  • The time during which the system is not available
    is called downtime.
  • The time during which the system is available is
    called uptime.
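
A minimal sketch with assumed monthly figures: availability is uptime divided by total (up plus down) time.

uptime_hours = 715.0   # time the system was available during the month
downtime_hours = 5.0   # time the system was unavailable
availability = uptime_hours / (uptime_hours + downtime_hours)
print(f"availability = {availability:.2%}")  # 99.31%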

54
Cost/Performance ratio
  • The cost/performance ratio is commonly used as a
    metric for comparing two or more systems.
  • Cost: hardware/software licensing, installation,
    and maintenance costs over a given number of
    years.
  • Performance is measured in terms of throughput
    under a given response-time constraint.
  • For example, two transaction processing systems
    may be compared in terms of dollars per TPS
    (transactions per second).
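
A small sketch of this comparison (the prices and throughputs are hypothetical): the system with the lower dollars-per-TPS figure wins under this metric.

systems = {
    "System X": {"cost_usd": 500_000, "tps": 1_000},
    "System Y": {"cost_usd": 350_000, "tps": 500},
}
for name, spec in systems.items():
    print(f"{name}: {spec['cost_usd'] / spec['tps']:.0f} dollars per TPS")
# System X: 500 dollars per TPS; System Y: 700 dollars per TPS,
# so System X is the better buy despite its higher total cost.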

55
Utility classification of performance metrics
  • Higher is better, or HB: system users or system
    managers prefer higher values of such metrics.
  • System throughput is an example of an HB metric.
  • Lower is better, or LB: system users or system
    managers prefer lower values of such metrics.
  • System response time is an example of an LB
    metric.
  • Nominal is best, or NB: both high and low values
    are undesirable.

56
Types of metrics
57
Setting of performance requirements
  • The main problem faced by a performance analyst
    is to specify the performance requirements for a
    system to be acquired or designed.
  • General method:
  • The performance requirements are specified with
    the help of requirement statements.

58
Setting of performance requirements (Cont.)
  • Consider the following requirement statements:
  • "The system should be both processing and memory
    efficient. It should not create excessive
    overhead."
  • "There should be an extremely low probability
    that the network will duplicate a packet, deliver
    a packet to the wrong destination, or change the
    data in a packet."

59
Setting of performance requirements (Cont.)
  • What all these requirement statements lack can be
    summarized in one word: SMART.
  • That is, the requirements must be Specific,
    Measurable, Acceptable, Realizable, and Thorough.
  • One should not use words such as "low
    probability" and "rare".
  • Measurability requires that one be able to verify
    that a given system meets the requirements.