Title: Software Performance Engineering
Prof. Jerry Breecher
What's In This Document?
- An ACM Queue podcast interview with a performance analyst.
- Sample help-wanted ads: what the market looks for today in a Performance Engineer/Analyst.
- One aspect of Performance Engineering: building performance into a product.
Several Views Of What A Performance Analyst Does
See http://queue.acm.org/. Under Browse Topics, click on Performance. There are a number of great articles there focused on practice.
- Five Common Issues When A Performance Problem Exists:
- The product wasn't developed with a performance test harness.
- When a problem develops, no one takes responsibility.
- The developers on-site don't use the tools that are available to solve the problem.
- After developing a list of possible causes, there's no elimination of the unlikely problems; people don't know how to determine what matters.
- Often people don't have the patience just to sift through the data.
- A List of Useful Tools:
- dtrace instruments existing code by placing probes at known points.
- VTune finds out what code is being executed.
- strace on Linux prints out all the system calls executed.
Bob Wescott's Rules Of Performance
Great book, and the price is right. Gleaned from many years of experience. Bob Wescott, The Every Computer Performance Book, ISBN-13: 978-1482657753, http://www.treewhimsy.com/TECPB/Book.html
- The less a company knows about the work their system did in the last five minutes, the more deeply screwed up they are. <If you can't measure it, you can't manage it.>
- What you fail to plan for, you are condemned to endure. <Bad things WILL happen.>
- If they don't trust you, your results are worthless. <Be clear how much you trust your numbers.>
- Always preserve and protect the raw performance data. <You massage it too much, and the data will lose its meaning. Be able to get it back.>
- The meters should make sense to you at all times, not just when it is convenient. <Know your tools: what they can and cannot do, and what they give you.>
- If the response time is improving under increased load, then something is broken. <If results don't fit your model, something is broken.>
- If you have to model, build the least accurate model that will do the job. <And I say: always have a model.>
- You'll do this again. Always take time to make things easier for your future self. <Write down everything you do.>
- Ignore everything to the right of the decimal point. <Significant figures!>
- Never offer more than two possible solutions or discuss more than three. <KISS>
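The "significant figures" rule above is easy to automate when reporting results. A minimal sketch; the helper name and the choice of exactly two figures are my own, not from the book:

```python
from math import floor, log10

def two_sig_figs(x):
    """Round a measurement to two significant figures before reporting it."""
    if x == 0:
        return 0.0
    magnitude = floor(log10(abs(x)))      # position of the leading digit
    return round(x, -(magnitude - 1))     # keep only two digits of precision

# A throughput of 1234.567 requests/sec is honestly "about 1200".
print(two_sig_figs(1234.567))   # 1200.0
print(two_sig_figs(0.04567))    # 0.046
```

Reporting at this precision keeps you from implying accuracy your measurements don't have.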
Sample Want Ads
Performance Testing
- Conduct reviews of application designs, business and functional requirements
- Implement test plans and cases based on technical specifications
- Design and execute automated and manual scripted test cases
- Document, maintain and monitor software problem reports
- Work with team members to resolve product performance issues
- Utilize multiple test tools to drive load and characterize system performance
- Execute tests and report performance/scalability test results
Skills/Requirements:
- 4-6 years post-graduation experience in QA testing client and server applications
- Demonstrated experience with MS SQL Server databases
- Experience with running UI automated test scripts; familiarity with SilkTest preferred
- Exposure to multi-threading and network programming
- Undergraduate degree from a top-tier computer science/engineering university
Sample Want Ads
Performance Debugging
- Use home-grown and commercial tools to measure, analyze, and characterize performance, robustness, and scalability of the EdgeSuite Platform
- Serve as a technical point of escalation to operations and customer care
- Debug complex service issues: service incidents, complex customer setups, field trials, performance issues, and availability issues
- Enable specific capabilities in our operational networks that are outside the capabilities of our Operations group
- Work across all technical areas in the company to enable innovative new solutions that span multiple technologies and services, often to meet specific customer needs
- Skills/Requirements
- Familiarity with data analysis
- Experience in network operation and monitoring
- Depth in networking principles and implementation, including TCP/IP, UDP, DNS, HTTP, and SSL protocols, a plus
- Thorough understanding of distributed systems
- Experience with principles of software development and design
Sample Want Ads
Customer Performance
- Ever stay up all night trying to squeeze 3 more fps out of your overclocked GPU?
- Crash your bike because you were too busy thinking of ways to speed up a nasty triply nested loop?
- Recompile your Linux kernel to extract that last ounce of performance? We have a job waiting for you. Endeca is seeking an energetic and driven engineer to join our new System Analysis team.
- Engineers on this team will be responsible for exploring and understanding the behaviors and characteristics of Endeca's system.
- Members of this team will work with developers to tune system performance, provide technical guidance to architects building customer applications, and help our customers continue to achieve unprecedented levels of performance and scalability.
- Skills/Requirements
- 3 years experience in software engineering
- Undergraduate or graduate degree in computer science, or equivalent depth of study in CS
- Familiarity with the process of software performance investigation and tuning
- Experience with Linux and Windows
- Experience with scripting languages
- Ability to grasp the complexities of large distributed systems
- Strong analytical and troubleshooting skills
- Very highly motivated, quick learner
Sample Want Ads
Performance Architect
The Performance Engineer will provide technical
leadership to the organization in the areas of
software frameworks and architecture,
infrastructure architecture, middleware
architecture and UI architecture. The
Performance Engineer is expected to have
versatile expertise in application performance
(DB, middleware, UI, infrastructure). This
engineer will collaborate with all teams within
IT to implement an application performance
measurement framework using end-to-end
performance measurement and monitoring tools.
Using data collected from these tools, the Performance Engineer will work with the architects to influence application and infrastructure design. This performance
engineer must demonstrate skill versatility in
the areas of application architecture,
infrastructure architecture and application
performance.
- JOB RESPONSIBILITIES
- Implement end-to-end performance measurement tools/frameworks.
- Build processes around tools to conduct application performance benchmarks.
- Design application benchmarks that will simulate application workloads.
- Design and implement capacity measurement tools and performance benchmarks and testing.
- Ability to wear many hats to help expedite multiple projects.
- Skills/Requirements
- Strong performance measurement skills using tools like LoadRunner, SilkRunner.
- Strong performance analysis skills with a thorough understanding of application bottlenecks and infrastructure bottlenecks (OS, storage, etc.).
- Strong skills using performance measurement/monitoring tools like BMC Patrol, BMC Perform/Predict, HP OpenView, MOM.
- Hands-on experience writing LoadRunner scripts and simulating performance benchmarks.
- Experience with J2EE performance measurement tools is a plus.
Sample Want Ads
Performance Architect
This individual will work with the systems architects and key stakeholders to develop a performance strategy for SSPG products and implement a methodology to measure fine-grained resource utilization. This individual will establish a set of benchmarks and a benchmark methodology that include all supported storage protocols, the control path, applications, and solutions. Will also be an evangelist for performance within the group and ensure that performance is a core SSPG competency. The position requires strong hands-on development skills and a desire to work in a fast-paced collaborative environment. Candidate must have a strong knowledge of operating system technology, device drivers, multiprocessor systems, and contemporary software engineering principles.
Skills/Requirements:
- BS in CS/CE plus 7-10 years experience, or equivalent.
- Proven experience with storage performance benchmarking and tuning, including hands-on experience with performance-related applications such as Intel VTune, SpecFS, and IOMeter.
- Strong operating system knowledge base with a focus on Linux, Windows, and embedded operating systems.
- Strong C/C++ programming and Linux scripting experience.
- Knowledge of any of the following protocols and technologies is a plus: iSCSI, TCP/IP, Fibre Channel, SAS, file systems, RAID, and storage systems.
- Design and development experience with embedded systems is desirable.
- Candidate should possess excellent verbal and written communication skills.
Performance Engineering Motivation
Performance Engineering is the practice of
applying Software Engineering principles to the
product life cycle in order to assure the best
performance for a product. The purpose is to
know at each stage of development the performance
attributes of the product being built.
- This section is devoted to motivation, and
talking through a number of the guiding tenets of
Performance Engineering.
Performance Engineering Motivation
Example: Read the unbiased, "true-to-life" example portrayed below and answer the questions posed.
- A project is planned and scheduled under tight constraints: Marketing feels that it is strategic to offer this product, and upper management inquires on a daily basis about the status of the project. Numerous short-cuts are taken in the design and implementation of the project. The product gets to alpha "on schedule", but it's discovered that the product is bug-ridden and performs at 1/10th the speed of the slowest competitor. When the product finally ships, it's 6 months behind schedule, never wins a benchmark, and serves only as a line item in the product catalog. Within a year, a project is launched to build it "right".
- Does this sound familiar? Does this ever happen in your life?
- Is there ever a situation where such an occurrence is acceptable?
- In the above example, what if the quality was OK but the performance remained terrible? Would the scenario then be acceptable?
- Is it ever acceptable not to spec a product because there won't be enough time?
Performance Engineering Motivation
LOTS OF OTHER QUESTIONS RELATE TO THIS TOPIC
- Can you get performance for free? Does it naturally fall out of a "good" design?
- Can you add performance at the end of a project?
- Are performance problems as easy (or as hard) to fix as functional bugs?
- Is it easier to design in quality or performance?
- It's often stated that since performance is decided by algorithms rather than by coding methodology, it's primarily project leaders or high-level designers who need to worry about performance. Do you agree with this?
- What are the politics of performance estimation? What happens if you don't meet your performance goal? What will happen if you up-front make your best guess and then your product comes in below this guess (after all, it was a guess, just like we've been doing in class)? Does putting an uncertainty on the number make it OK?
- Does it help to have management stress the importance of performance? When it comes to the crunch, does management emphasize Performance, Quality, or Schedules?
Performance Engineering Motivation
FOLKLORE: Believe it or not, these are all comments/excuses I've heard!
- It leads to more development time.
- There will be maintenance problems (due to tricky code).
- It's too difficult to build in performance.
- Performance problems are rare.
- Performance doesn't matter on this product, since so few people will be using it.
- Performance can be solved with hardware, and hardware is (relatively) inexpensive.
- We can tune it later.
- Sam and Sally and Sarah didn't have to worry about performance, so it really isn't very important.
- Good performance is a natural byproduct of good design and coding techniques.
- If we move to hardware three times faster, the problem will disappear.
Performance Engineering Motivation
THE REALITY IS:
- MANY systems initially perform TERRIBLY because they weren't well designed.
- Problems are often due to fundamental architectural or design factors rather than inefficient code.
- Performance engineering is no more expensive than software engineering.
- Performance problems are visible and memorable.
- It's possible to avoid being surprised by the performance of the finished product.
Performance Engineering Motivation
THE BENEFITS OF PERFORMANCE ENGINEERING INCLUDE:
- Good-performing systems result in:
- User satisfaction
- User productivity
- Development staff productivity
- Selling more systems and getting a bigger paycheck
- Performance can be "orders of magnitude" better with early, high-level optimization.
- Timely implementation allows for:
- Staff effectiveness
- Fire prevention rather than fire fighting
- No surprises
Performance Engineering Motivation
THE COSTS OF PERFORMANCE ENGINEERING
- The critical-path time to deliver is minimized if modeling, analysis, and reporting are done up-front. This is the Software Engineering Religion.
- Time is required by the design team. Performance experts are part of that design team.
- Time for modifications: pay now or pay later.
- Cost of needed skills.
The way we do performance engineering today is analogous to the marksman: he shoots first, and whatever he hits, he calls the target.
Performance Engineering Introduction
YOU NEED TO BE A BIT MORE EXPLICIT ABOUT SOME OF THE DETAILS!!
- In this section we begin looking at some of the practical ways of doing Performance Engineering. Performance Engineering isn't magic or miraculous, but an organized mechanism for building in performance.
KEY POINTS IN PERFORMANCE ENGINEERING
The trickle-down philosophy
- By setting broad, verifiable performance targets at the beginning, in the Marketing Requirements, we can track those targets through the whole development lifecycle and verify along the way that the goals are being met.
- The goal is to show how to incorporate performance information into the standard development life cycle. Developers already have within them the information needed to make performance predictions; they need only to understand how to express that information.
KEY POINTS IN PERFORMANCE ENGINEERING
- We build on Software Engineering methodology
- This methodology employs a number of documents and review mechanisms to ensure the completeness and quality of our software. These same techniques can be used to improve the performance of systems: neither quality nor performance is an add-on, so the procedures in place to improve quality can also be used to improve performance.
- Performance is an intangible
- It's easy to see and describe a function, but much harder to determine how fast it will go or what resources it will devour. Performance Engineering makes visible the performance expectations of a new product and quantifies whatever can be nailed down at any particular point in the development cycle.
KEY POINTS IN PERFORMANCE ENGINEERING
VALIDATION AND VERIFICATION
- Performance Engineering depends on a combination of verification and validation. For those of you who've forgotten this nuance, here's a brief review:
Validation: showing at project completion that the performance meets the stated goals.
Verification: showing at each stage in the development that the projected performance will meet the previously stated goals.
The costs to VALIDATE performance?
- Establish performance goals.
- Establish performance tests.
- Schedule time for Performance Assurance to do their thing.
- Schedule time to fix the performance.
The costs to VERIFY performance?
- Establish performance goals.
- Establish performance tests.
- Schedule time for developers to conduct analysis and inspections.
- Schedule time for Performance Assurance, since no one will believe you've verified the performance.
KEY POINTS IN PERFORMANCE ENGINEERING
SETTING MEASURABLE PERFORMANCE OBJECTIVES
- Unambiguous
- There should be no doubt as to what the goal means. It is no good saying "A will be the same as B" without saying what will be the same. Specify in terms of CPU time, I/Os, etc. Specify also the environment that will be used.
- Measurable
- Every performance goal must have an associated measurement. The measurement must be defined as carefully as the goal, because it is the measurement that will tell you that you have reached your goal. Avoid vague goals without well-defined measurements; they will lead to unreasonable expectations being set for your design.
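One way to make a goal both unambiguous and measurable is to encode it as an automated check. A minimal sketch; the operation, table size, and half-second budget are all invented for illustration, standing in for a goal like "1000 table lookups complete within 0.5 s on the test machine":

```python
import time

BUDGET_SECONDS = 0.5                     # the stated, measurable goal
table = {i: i * i for i in range(100_000)}

start = time.perf_counter()
for _ in range(1_000):
    table.get(4_242)                     # the operation the goal covers
elapsed = time.perf_counter() - start

# The measurement is defined as carefully as the goal itself:
# wall-clock time for 1000 lookups, in a named environment.
print(f"elapsed = {elapsed:.6f} s, goal = {BUDGET_SECONDS} s")
assert elapsed < BUDGET_SECONDS, "performance goal missed"
```

A goal written this way leaves no doubt about what is being measured, how, or where.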
KEY POINTS IN PERFORMANCE ENGINEERING
SETTING MEASURABLE PERFORMANCE OBJECTIVES
- Metrics
- There are an infinite number of ways to measure performance, many of them invalid, inaccurate, or just plain dumb. The problem lies in trying to state the performance of a complex system in simple terms.
- We will concentrate on:
- Finding the most common paths/functions.
- Determining metrics for those paths.
- Defining tests to evaluate these metrics.
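Finding the most common paths need not wait for special tooling; Python's built-in profiler already ranks functions by time spent. A minimal sketch with two invented functions standing in for "hot" and "cold" code paths:

```python
import cProfile
import io
import pstats

def hot_path(n):
    """Stand-in for a frequently traveled routine."""
    return sum(i * i for i in range(n))

def cold_path():
    """Stand-in for rarely executed code, which can be ignored."""
    return 1

profiler = cProfile.Profile()
profiler.enable()
for _ in range(200):
    hot_path(2_000)
cold_path()
profiler.disable()

# Sort by cumulative time and keep only the top entries:
# the common path dominates the listing.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)
```

The top of the report is exactly the short list of paths worth defining metrics and tests for.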
PERFORMANCE ENGINEERING INSPECTIONS
What Are They?
- Performance Inspections are a technique, very similar to Software Engineering inspections, for analyzing performance issues during the preparation of specifications.
- The goal of inspections is to gather information needed to complete the performance documentation.
- There's a mapping between:
- REQUIREMENTS IN SPECS <-> QUESTIONS ON INSPECTIONS
PERFORMANCE ENGINEERING INSPECTIONS
Practical Aspects of Doing Inspections
- These inspections should be conducted in a formal way within one meeting.
- There may well be questions generated that can only be answered by more thorough research.
- Experience shows an inspection requires several hours, with a few more hours to resolve action items.
- Be careful -- like any inspection, several people should be involved, including a dispassionate outsider.
- Be careful -- it's very possible to get so mired in details that the whole performance business becomes an overwhelming burden.
PERFORMANCE ENGINEERING INSPECTIONS
Practical Aspects of Doing Inspections
- Whenever possible, make a guess. But clearly label your guess and talk about the assumptions going into it.
- Software developers have a way of being overly detail-conscious when it comes to gathering performance numbers.
- The specs themselves should contain answers to the questions posed here. When reviewing the document, those involved in the review should ensure that the questions are indeed answered.
- In each of the following sections are questions that might be asked during inspections. Many others are also possible, especially those which delve into the details of the specific project.
PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS
OVERALL GOALS AT THE REQUIREMENTS LEVEL
- Determine the best and worst expectations for this product.
- State the performance needed to meet marketing needs; this can range from "we must beat the competition" to "get it out no matter how slow it is". (As we've discussed, the second approach will come back to haunt you.)
- What is the "drop dead" point: the performance below which the project shouldn't be done.
- Determine a target at which we can aim later.
PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS
WHAT PERFORMANCE ITEMS SHOULD BE IN THE REQUIREMENTS?
- What metrics matter.
- What are the current competitors' products, and what performance do they achieve (or suffer)?
- The current products you produce and the performance they achieve. NOTE: there is ALWAYS a comparable product against which the performance of a new product should be compared; NO ONE creates totally new product lines, companies merely extend existing ones.
- Overall performance goals. In order to be a viable product, what are the maximum resources that can be used?
PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS
WHAT PERFORMANCE ITEMS SHOULD BE IN THE REQUIREMENTS?
- Placement in the market
- What are the expected/potential performance wins in the new product?
- What are the expected/potential performance pitfalls in the new product? At this point, there is little need for detail on how to combat the problems; identification is enough.
- Stretching the limits: where will the performance of your company and of its competitors be in 1 year / 2 years?
- Into what environment/market will this product be sold? What other applications will be run on the machine? What machine resources are available for this product?
PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS
WHERE DOES THIS INFORMATION COME FROM?
- From inspections (see the next section).
- Input comes from marketing and from looking around.
- Determining expectations. Expectations are set based on:
- Marketing
- Observing the competition
- Baseline of the previous product
- The "field"
- Setting general performance goals. Goals should be determined by, and expressed in terms of:
- Customer satisfaction
- Sales
- Benchmarks
- How to gather statistics. This can also be seen as resolving general goals into metrics. A goal of "customers will be happy" is all fine and good, but it's difficult to measure. We need real, concrete metrics (we'll know we've succeeded when we achieve these metrics).
PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS
QUESTIONS TO USE ON A REQUIREMENTS INSPECTION
- What is the current performance of competitors' products?
- What is the current performance of your existing products? (When none exist, use close cousins.)
- Based on 1 and 2, what's the minimum performance we need in order to achieve parity?
- This can be answered by "as fast as Compaq", "20% better than today", etc.
- If the number is answered qualitatively rather than quantitatively, how can a more solid number be obtained (and who will get it)?
- In order to meet these minimum performance requirements, is it acceptable to use the entire machine's resources?
PERFORMANCE ENGINEERING MARKETING REQUIREMENTS DOCUMENTS
QUESTIONS TO USE ON A REQUIREMENTS INSPECTION
- What performance problems/successes did the competition encounter when introducing the comparison product? What performance problems/successes did you encounter when introducing the comparison product?
- These are "looking ahead" type goals:
- To be a force in the market, what performance do we need?
- What performance increment would be required to open new markets?
- There are other types of questions, asking about environments:
- What fraction of a module can be used to produce this performance? (What other work must the machine carry on?)
- How will customers be using this product? What are typical scenarios?
PERFORMANCE ENGINEERING PROJECT PLAN/SCHEDULE
Detailed schedules should include work items such as:
- Preparation of performance components of specs. Analysis necessary to include performance components in the various documentation.
- Performance walkthroughs.
- Performance checkpoints: ensuring at each stage of the project that performance targets are being met.
- Final performance verification.
- Include time for performance enhancement; we still don't know how to get it right the first time.
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
OVERALL GOALS AT THE FUNCTIONAL SPEC LEVEL INCLUDE
- The goal of a functional spec is to define the interfaces of a product (that is, address environmental issues) and to describe how the user of the product will view that interface, without telling how the thing works.
- The performance portions of the spec have the same goal:
- We want to know who will call the function, and what will be the most common modes they will use -- we want to define the environment.
- Comparison with the MRD:
- Knowing the goals at the MRD level, it's possible now to set limits in terms of definable resources such as I/O and CPU.
- We want to determine ways to assure that we've been successful.
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
SLIGHTLY MORE DETAILED GOALS
- It is reasonable to expect the following performance information at this time:
- 1. Who will be calling this function? Approximately how many times per second will this function be called? Given the resource usage in item 2, what fraction of the system resources will be expended on this function?
- TOTAL COST = COST PER REQUEST x TOTAL REQUESTS
- Having done this, you can answer:
- If you can't win on all the functions you've defined, which ones are the most important (must-wins!)?
- Which situations provide big wins?
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
SLIGHTLY MORE DETAILED GOALS
- 2. Set performance goals for CPU, I/O, and memory. Though there is still no detailed information on resource usage, it is time for informed guesses. This means we expect answers in milliseconds, furlongs, accesses/sec, etc.
- Ultimately, you can estimate the final performance!
- 3. Here we divide up the total project and estimate how many resources each part will take. The mechanism defined in this functional spec, or in all the functional specs addressing an MRD, must be able to deliver the performance promised in the MRD!
- 4. How will success in meeting these goals be measured? A description of the necessary tools should be at the same level of detail as the functional spec itself.
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
- Your functional spec will normally defend decisions made: why was one algorithm chosen over another, why store particular data in this spot, etc. You should also include performance factors, defending decisions based on performance criteria.
- REMEMBER: the philosophy here is to make estimates; no hard numbers make any sense at this point.
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
- 1. What is (are) the most frequently used time-lines described in this spec?
- What really matters is the small amount of code that is frequently traveled. All other code can be ignored. Techniques for determining this are discussed.
- How do you gather this data? The best method is intuition. Sure, it's possible to go off and make lots of detailed measurements, but at this phase of the project such detail may not be possible. It's probably adequate to follow arguments such as the following: "This routine is used by every system call, therefore it is frequently used." or "This routine is called when opening a direct queue, so it happens less often."
- This item is designed simply to single out those routines meriting further investigation. We'll get more numerical later on.
- The remaining questions apply only to the often-used time-lines identified in item 1; all other time-lines can be ignored.
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
- 2. When determining resource numbers, make sure you include the cost of calling routines at layers below those defined by this spec. If you don't know, guess.
- What "lower level" functions will be called by these time-lines? By "lower level" is meant functions called by the mechanism you are designing.
- a. Estimate the CPU usage for the called functions.
- b. Estimate the disk usage for the called functions.
- c. Estimate the number of suspends for the called functions, and include the cost of doing suspends/reschedules in CPU usage.
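A per-call estimate has to roll up those lower layers as well. A sketch with invented numbers; the per-I/O and per-suspend costs especially are pure guesses of the kind the text says to label as such:

```python
# Invented estimates for one time-line, in microseconds.
own_cpu_us = 120.0           # CPU in the code this spec defines
lower_level_cpu_us = 80.0    # (a) CPU in the called functions
disk_ios = 1                 # (b) disk accesses by the called functions
us_per_disk_io = 8_000.0     # guessed service time per I/O
suspends = 2                 # (c) suspends in the called functions
us_per_suspend = 30.0        # guessed reschedule cost, charged as CPU

total_us = (own_cpu_us + lower_level_cpu_us
            + suspends * us_per_suspend
            + disk_ios * us_per_disk_io)
print(f"estimated cost per call: {total_us} us")   # 8260.0 us
```

Writing the rollup down makes it obvious which term dominates (here the single disk I/O) and therefore where a better guess is worth the effort.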
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
- 3. Specific resource-usage numbers for CPU, memory, and I/O. These numbers should be estimated for the most common time-lines in the most common environments. Where numbers are available from previous revs or from the competition, they should be included.
- For the high-usage time-lines described in your spec, estimate:
- a) CPU usage
- b) Disk usage
- c) Suspends/reschedules
- Based on the answers to questions 2 and 3, you can determine the total cost of executing your new high-usage functions.
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
- 4. How many times per second will these time-lines be called by higher-level functions? This is an environment question; you may have figured this out already when you identified in question 1 that certain functions were "high-usage".
- 5. Based on 4, and the sum of 2 and 3, what fraction of the total system resources (utilization) is used by these time-lines?
- 6. What fraction of the resources called out in the MRD will be used by these time-lines?
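Questions 4-6 amount to a utilization rollup that can be checked against the MRD budget. All the figures below are invented placeholders; the shape of the calculation is the point:

```python
# (calls per second, CPU ms per call) for each high-usage time-line.
timelines = {
    "open":  (50.0, 2.0),
    "read":  (400.0, 0.5),
    "close": (50.0, 1.0),
}

# Utilization = sum over time-lines of rate x cost-per-call.
utilization = sum(rate * ms / 1000.0 for rate, ms in timelines.values())

mrd_budget = 0.40   # hypothetical MRD allowance: 40% of one CPU
print(f"projected utilization: {utilization:.0%} (budget {mrd_budget:.0%})")
assert utilization <= mrd_budget, "projected load exceeds the MRD budget"
```

Running this rollup every time an estimate changes is the checkpointing question 7 asks for.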
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
- 7. Checkpointing: when you add up all the time(s) in your most commonly used time-lines, did you get a number consistent with what you estimated in the MRD?
- 8. What are the metrics (what will you measure) in order to assure the performance given above? (Do NOT describe how to measure at this point.)
- 9. Describe in general terms how you expect to measure that these goals have been met. A description of the necessary methodology should be at the same level of detail as the functional spec itself.
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
WHERE DOES THIS INFORMATION COME FROM?
- Lots of information has been requested here in order to meet the ultimate goal of determining the total resource usage of your product. Here are some of the places where you can find help in preparing numbers:
- The MRD.
- Previously known performance:
- Previous products (how fast did this system call run in the last rev?)
- How fast can the competition do this operation?
- Benchmarks of system performance.
- Intuition.
- The philosophy which says all the performance and resources must come from one pie: you can only cut it so many ways; pies and resources are both finite.
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
QUESTIONS TO USE ON A FUNCTIONAL SPEC WALKTHROUGH
- 0. What other algorithms were looked at? Why was this determined to be the best for performance reasons?
- The philosophy here is to make guesses; no hard numbers make any sense at this point.
- The use of "time-lines" is explained in the unit on Design Strategies.
- 1. What is (are) the most frequently used time-lines described in this spec?
- THE REMAINING QUESTIONS apply only to these often-used time-lines; all other time-lines can be ignored.
PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
QUESTIONS TO USE ON A FUNCTIONAL SPEC WALKTHROUGH
- 2. What lower-level functions will be called by these time-lines?
- Estimate the time for CPU usage for the called functions.
- Estimate the time for disk usage for the called functions.
- Estimate the time spent in interrupts resulting both directly and indirectly from this function.
- Estimate the number of suspends for the called functions, and include the time for doing suspends/reschedules.
- Estimate the amount of time a lock will be held by this function, and thus the percentage contention on the lock. Include this contention in your time-line.
- 3. For the high-usage time-lines themselves, as described in your spec, estimate:
- CPU usage
- Disk usage
- Suspends/reschedules
47 PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION
QUESTIONS TO USE ON A FUNCTIONAL SPEC WALKTHROUGH
- 4. How many times per second will these time-lines be called by higher level functions?
- 5. Based on 4, and the sum of 2 and 3, what fraction of the total system resources (utilization) are used by this time-line?
- 6. What fraction of the resources called out in the MRD will be used by these time-lines?
- 7. When you add up all the resources, do they equal what was specified in the MRD?
- 8. What are the metrics (what will you measure) in order to assure the performance given above? (Do NOT describe the details of measurement at this point; remember, this is a functional-level spec.)
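The arithmetic behind questions 4 through 7 can be sketched in a few lines. Every number below (the per-call costs, the call rate, and the MRD budget) is a hypothetical placeholder for the estimates you would produce at this stage:

```python
# Hypothetical cost estimates for one often-used time-line (questions 2-3).
# All numbers are illustrative guesses, as the text recommends at this stage.
cpu_ms_per_call = 0.8        # estimated CPU time per call, milliseconds
disk_ms_per_call = 2.5       # estimated disk time per call
suspend_ms_per_call = 0.3    # estimated suspend/reschedule overhead

calls_per_second = 50        # question 4: call frequency from higher levels

# Question 5: fraction of total system resources used by this time-line.
busy_ms_per_second = calls_per_second * (
    cpu_ms_per_call + disk_ms_per_call + suspend_ms_per_call)
utilization = busy_ms_per_second / 1000.0   # fraction of one second

# Questions 6-7: compare against the slice of the pie the MRD allows.
mrd_budget = 0.25            # e.g. the MRD grants this feature 25% of the pie
print(f"utilization = {utilization:.0%}, budget = {mrd_budget:.0%}")
assert utilization <= mrd_budget, "estimates exceed the MRD allocation"
```

With these sample figures the time-line consumes 18% of the machine, inside its assumed 25% budget; failing the final check is exactly the situation question 7 is meant to catch.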
48 PERFORMANCE ENGINEERING DESIGN SPECIFICATION
OVERALL GOALS AT THE DESIGN SPEC LEVEL INCLUDE
- This is where you should be able to make detailed estimates. And this is where you have a real chance to ensure that the numbers you've been guessing are real. At this point in the design, you should be able to make very concrete assumptions.
- Again, you can roll the detailed numbers you get back into the functional spec and requirements. Will the product perform as required? Now you know.
49 PERFORMANCE ENGINEERING DESIGN SPECIFICATION
QUESTIONS TO USE ON A DESIGN SPEC INSPECTION
- REMEMBER - the philosophy here is to get numbers. These numbers should be as accurate as possible, but the code isn't written yet, so the data can only be a best guess.
- NOTE ALSO - the methodology is the same as used at the Functional Spec level.
- 0. What metrics matter?
- 1a. Are the most-used time-lines the same as they were in the functional spec? If not, or if none were defined, what are they?
- 1b. What are the low level library routines that are important in this design? Identify those routines that have a large fan-in.
50 PERFORMANCE ENGINEERING DESIGN SPECIFICATION
QUESTIONS TO USE ON A DESIGN SPEC INSPECTION
- THE FOLLOWING QUESTIONS APPLY ONLY TO THE HEAVILY USED PATHS.
- 2. Determine the low level functions, in other components, called by your time-lines. These are routines subsidiary to those in the spec. What are the costs of using these functions? As before, these costs include:
- CPU usage
- Disk usage
- Suspends/reschedules
- Other
- 3. Calculate also the CPU costs in your own routines. This means you can estimate the total lines of code you'll run.
- Do these calculations for both library routines and often-used time-lines, though the library routine work is meant mainly to raise red flags.
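One rough way to turn an estimated path length into a CPU cost, as question 3 suggests, is a simple lines-to-cycles calculation. The expansion factor, cycles per instruction, and clock rate below are all assumptions you would replace with figures for your own target machine:

```python
# Rough CPU cost from an estimated path length (question 3).
# Every number here is a stated assumption, not a measured value.
estimated_lines_of_code = 400      # lines executed on the hot path
instructions_per_line = 5          # assumed average expansion to machine code
cycles_per_instruction = 1.5       # assumed CPI on the target CPU
clock_hz = 2.0e9                   # assumed 2 GHz processor

cpu_seconds_per_call = (estimated_lines_of_code * instructions_per_line
                        * cycles_per_instruction) / clock_hz
print(f"~{cpu_seconds_per_call * 1e6:.2f} microseconds of CPU per call")
```

The answer is only as good as its assumptions, but that is the point of the exercise: a per-call figure you can roll into the utilization arithmetic from the functional spec.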
51 PERFORMANCE ENGINEERING DESIGN SPECIFICATION
QUESTIONS TO USE ON A DESIGN SPEC INSPECTION
- THE FOLLOWING QUESTIONS APPLY ONLY TO THE HEAVILY USED PATHS.
- 4. What is the frequency of calling the high level often-used routines, and also the frequency for the library routines?
- 5. Based on 4, and the sum of 2 and 3, what fraction of the total system resources (utilization) are used by these time-lines?
- 6. What fraction of the resources called out in the Functional Spec will be used by these time-lines?
52 PERFORMANCE ENGINEERING DESIGN SPECIFICATION
QUESTIONS TO USE ON A DESIGN SPEC INSPECTION
- THE FOLLOWING QUESTIONS APPLY ONLY TO THE HEAVILY USED PATHS.
- 7. Checkpointing: when you add up all the time(s) in your most commonly used time-lines, did you get a number consistent with what you estimated in the Functional Spec?
- 8. Are there portions of the high-use functions that would benefit from being written in assembler?
- 9. What kind of performance tests will be used? At the design level these tests should be fairly specific. The goal is to build measurements that will look at the most used paths; these are not paranoid QA tests. Specifically, how do these tests measure the metrics you consider important?
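The checkpoint in question 7 lends itself to a small script that compares design-level estimates against the Functional Spec numbers. Both tables of estimates, the time-line names, and the tolerance below are hypothetical:

```python
# Checkpoint (question 7): compare the design-level time for each
# often-used time-line with the number estimated in the Functional Spec.
# Both tables and the 25% tolerance are hypothetical.
functional_spec_ms = {"open_file": 3.0, "read_block": 1.2, "close_file": 0.8}
design_estimate_ms = {"open_file": 4.0, "read_block": 1.1, "close_file": 0.9}

tolerance = 0.25   # flag any time-line more than 25% over its spec estimate
flagged = [name for name, spec in functional_spec_ms.items()
           if design_estimate_ms[name] > spec * (1 + tolerance)]
for name in flagged:
    growth = design_estimate_ms[name] / functional_spec_ms[name] - 1
    print(f"RED FLAG: {name} estimate grew {growth:.0%} over the spec")
print("red flags:", flagged or "none")
```

Here the sample `open_file` estimate has grown by a third since the functional spec, so it gets flagged; the other two time-lines are within tolerance.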
53 CONCLUSION
- This section has laid out a detailed methodology for assuring that a product being developed has the performance required of it when it's completed.