A THEORETICAL SCHEDULING TOOLBOX (Adam Wierman)

Transcript and Presenter's Notes
1
A THEORETICAL SCHEDULING TOOLBOX
Adam Wierman
2
SCHEDULING IS EVERYWHERE
disks, routers, databases, web servers
3
SCHEDULING HAS DRAMATIC IMPACT
[Figure: mean response time vs. load in an M/GI/1 queue. SRPT (optimal) stays low as load approaches 1, while other policies blow up. "AND IT'S FREE!"]
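To make the comparison concrete, here is a minimal simulation sketch (mine, not the talk's experiment): an M/M/1 queue served under FCFS and under SRPT, reporting mean response time. The parameters (load 0.8, exponential sizes with mean 1) are illustrative assumptions.

import heapq
import random

def simulate(policy, lam=0.8, n_jobs=50_000, seed=0):
    """Mean response time of an M/M/1 queue under 'FCFS' or 'SRPT'."""
    rng = random.Random(seed)
    t, jobs = 0.0, []
    for _ in range(n_jobs):                        # Poisson arrivals, exp(1) sizes
        t += rng.expovariate(lam)
        jobs.append((t, rng.expovariate(1.0)))

    clock, i, total_resp = 0.0, 0, 0.0
    active = []                                    # heap: [priority, arrival, remaining]
    while i < n_jobs or active:
        if not active:                             # idle: jump to the next arrival
            clock = jobs[i][0]
        else:
            prio, arr, rem = heapq.heappop(active)
            next_arr = jobs[i][0] if i < n_jobs else float('inf')
            if clock + rem <= next_arr:            # finishes before the next arrival
                clock += rem
                total_resp += clock - arr
            else:                                  # run until the arrival, then re-queue
                rem -= next_arr - clock            # (under FCFS the re-queued job keeps the
                clock = next_arr                   # smallest arrival-time priority, so the
                heapq.heappush(active,             # service order is still pure FCFS)
                               [prio if policy == 'FCFS' else rem, arr, rem])
        while i < n_jobs and jobs[i][0] <= clock:  # admit all arrivals up to now
            arr, size = jobs[i]
            prio = arr if policy == 'FCFS' else size  # FCFS: by arrival; SRPT: by remaining
            heapq.heappush(active, [prio, arr, size])
            i += 1
    return total_resp / n_jobs

for p in ('FCFS', 'SRPT'):
    print(p, round(simulate(p), 2))

At load 0.8 with exponential sizes, FCFS gives mean response time near 1/(1-ρ) = 5, while SRPT comes in far lower, matching the picture above.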
4
MANY APPLICATIONS → MANY METRICS
Scheduling bandwidth at web servers
  • small response times
  • fairness to flows
  • predictable service

6
MANY METRICS → MANY POLICIES
7
WHICH POLICY?
  • Metrics of interest
  • Metric 1
  • Metric 2
  • Metric 3

8
PRACTITIONERS NEED
simple heuristics and mechanisms to apply in building application-specific policies
good performance for a wide range of metrics
9
A NEW APPROACH
  1. Group policies based on

10
A NEW APPROACH
  1. Group policies based on

[Diagram: a cloud of policies (SRPT, RS, DPS, LRPT, FSP, PSJF, PS, SJF, LCFS, PLJF, FCFS, LJF, PLCFS, LAS); the "Remaining size based" group is highlighted.]
11
A NEW APPROACH
  1. Group policies based on

[Diagram: policy groups labeled Remaining size based, Age based, Preemptive size based, Non-preemptive, Time sharing.]
12
A NEW APPROACH
  1. Group policies based on

[Diagram: the policy groups arranged along an axis from "Bias towards small jobs" to "Bias towards large jobs".]
13
A NEW APPROACH
  1. Group policies based on
  2. Define new metrics

EFFICIENCY METRICS: measure overall system performance
FAIRNESS METRICS, ROBUSTNESS METRICS: largely undefined
14
A NEW APPROACH
  1. Group policies based on
  2. Define new metrics

EFFICIENCY METRICS: measure overall system performance
FAIRNESS METRICS: compare the relative performance of different types of jobs
ROBUSTNESS METRICS: measure performance in the face of exceptional inputs and situations
15
A NEW APPROACH
  1. Group policies based on
  2. Define new metrics
  3. Classify groups on metrics

16
I PROPOSE A TOOLBOX OF CLASSIFICATIONS
  • Metrics of interest
  • Metric 1
  • Metric 2
  • Metric 3

Simple guidelines for building a policy that performs well on Metrics 1, 2, 3.
17
I PROPOSE A TOOLBOX OF CLASSIFICATIONS
  • CLASS PROPERTIES: "Any type T policy will be unfair."
  • IMPOSSIBILITY RESULTS: "No type T policy can be both fair and efficient."

18
OUTLINE
  1. Introduction
  2. Efficiency
  3. Fairness
  4. Robustness
  5. Practical Generalizations
  6. Real-world Case Studies
19
EFFICIENCY METRICS
measure the overall system performance
  1. mean response time
  2. variance of response time
  3. tail of response times
  4. weighted response time

20
SIMPLE HEURISTIC
Definition: A work-conserving policy P is SMART if
  • a job of remaining size greater than x can never have priority over a job of original size x, and
  • a job being run at the server can only be preempted by new arrivals.

bias towards small jobs
Sigmetrics 2005a
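As a toy illustration (my own construction, not the talk's), the snippet below encodes several priority rules and checks SMART condition (i) on a finite sample of jobs; passing a finite check is of course not a proof.

# Each policy is a priority function over (original_size, remaining_size);
# smaller value = higher priority.
POLICIES = {
    'SRPT': lambda orig, rem: rem,          # remaining size
    'PSJF': lambda orig, rem: orig,         # original size
    'RS':   lambda orig, rem: rem * orig,   # product rule
    'LRPT': lambda orig, rem: -rem,         # favors LARGE jobs: not SMART
}

def violates_condition_i(prio, samples):
    """True if some job with remaining size > x outranks a fresh job of
    original size x (i.e., SMART condition (i) fails on these samples)."""
    for orig_a, rem_a in samples:           # candidate high-priority job
        for x, _ in samples:                # fresh job: original = remaining = x
            if rem_a > x and prio(orig_a, rem_a) < prio(x, x):
                return True
    return False

samples = [(o, r) for o in (1, 2, 4, 8) for r in (0.5, 1, 2, 4, 8) if r <= o]
for name, prio in POLICIES.items():
    ok = not violates_condition_i(prio, samples)
    print(f'{name}: condition (i) {"holds" if ok else "fails"} on these samples')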
21
THEOREM
In an M/GI/1 system, for any SMART policy P, mean response time is within a factor of 2 of optimal: E[T]^P < 2 E[T]^{SRPT}.
Sigmetrics 2005a
22
[Figure: mean response time vs. load in an M/GI/1 queue, comparing FCFS with SMART policies.]
Sigmetrics 2005a
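For reference, the FCFS curve in plots like the one above follows the classical Pollaczek-Khinchine formula; a small helper (function name mine):

def mgi1_fcfs_mean_response(lam, EX, EX2):
    """M/GI/1 FCFS mean response time.
    lam: arrival rate, EX: mean job size, EX2: second moment of job size."""
    rho = lam * EX
    assert rho < 1, "queue must be stable"
    return EX + lam * EX2 / (2 * (1 - rho))

# Example: exponential sizes with mean 1 (EX2 = 2) at load 0.8.
print(mgi1_fcfs_mean_response(0.8, 1.0, 2.0))  # -> 5.0, matching M/M/1: 1/(1-rho)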
23
OTHER EFFICIENCY METRICS
Are SMART policies near optimal for
  i. variance of response time
  ii. tail of the response time distribution
  iii. expected slowdown
Known for SRPT (Queija, Borst, Boxma, Zwart, and others); open for SMART.
Proposed work
24
Proposed work
25
OUTLINE
  1. Introduction
  2. Efficiency
  3. Fairness
  4. Robustness
  5. Practical Generalizations
  6. Real-world Case Studies
26
FAIRNESS METRICS
compare the relative performance of different types of jobs
27
[Timeline: notions of fairness (temporal, sizal, stream-based) introduced between the 1980s and 2005.]
28
[Image: iTunes]
29
[Image: ticket box office]
30
[Image: supermarket]
31
SIZAL FAIRNESS
jobs of different sizes should receive proportional performance (example: iTunes)
32
WHAT IS FAIR?
[Figure: delay vs. job size under candidate notions of fairness, e.g., "everyone waits the same amount".]
33
SIZAL FAIRNESS
A policy P is s-fair if E[S(x)]^P ≤ 1/(1-ρ) for all x. Otherwise, P is s-unfair.
Metric: E[S(x)]^P = E[T(x)]^P / x. 1/x is the correct factor for normalization because, for all P, E[T(x)]^P = Ω(x).
  • Criterion 1/(1-ρ):
  • E[S(x)]^{PS} = 1/(1-ρ)
  • min_P max_x E[S(x)]^P = 1/(1-ρ) for unbounded distributions
  • differentiates between distinct functional behaviors

Perf Eval 2002, Sigmetrics 2003
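A one-line check of why 1/(1-ρ) is the natural criterion, using the standard M/GI/1 Processor Sharing (PS) result:

\[
  E[T(x)]^{PS} = \frac{x}{1-\rho}
  \quad\Longrightarrow\quad
  E[S(x)]^{PS} = \frac{E[T(x)]^{PS}}{x} = \frac{1}{1-\rho},
\]

so PS sits exactly at the criterion for every job size x.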
34
SIZAL FAIRNESS
A policy P is s-fair if E[S(x)]^P ≤ 1/(1-ρ) for all x. Otherwise, P is s-unfair.
Sigmetrics 2003
35
[Classification diagram: policies grouped as Always S-Fair, Sometimes S-Fair, and Always S-Unfair; SMART, RS, and FSP are placed in the diagram.]
Sigmetrics 2003
36
KEY PROOF IDEA
Theorem: Any preemptive, size-based policy P is Always s-Unfair.
Case 1: A finite size, y, receives lowest priority. Case 2: No finite size receives the lowest priority.
(Case 1: the lowest-priority job is treated unfairly.)
Sigmetrics 2003
37
KEY PROOF IDEA
Theorem: Any preemptive, size-based policy P is Always s-Unfair.
Case 1: A finite size, y, receives lowest priority. Case 2: No finite size receives the lowest priority.
(Case 2: there is no lowest-priority job, so look at the limit of infinite job size.)
Sigmetrics 2003
38
KEY PROOF IDEA
Theorem: Any preemptive, size-based policy P is Always s-Unfair.
Case 1: A finite size, y, receives lowest priority. Case 2: No finite size receives the lowest priority.
[Figure: E[S(x)] vs. x under PSJF rises above the 1/(1-ρ) line for mid-sized x. This hump appears under many policies.]
Sigmetrics 2003
39
SIZAL FAIRNESS
A policy P is s-fair if E[T(x)]^P / x ≤ 1/(1-ρ) for all x. Otherwise, P is s-unfair.
40
Variance
What is the right metric for comparing variability across job sizes?
Normalized variance: Var[T(x)] / g(x). What should g(x) be?
41
A policy P is predictable if Var[T(x)]^P / x ≤ λE[X²]/(1-ρ)³ for all x. Otherwise, P is unpredictable.
Variance
What is the right metric for comparing variability across job sizes?
Metric: Var[T(x)]^P / x. Var[T(x)]^P = Θ(x) for common preemptive policies, and Var[T(x)]^P = O(x) for all policies.
  • Criterion λE[X²]/(1-ρ)³:
  • differentiates between distinct functional behaviors
  • we conjecture that min_P max_x Var[T(x)]^P / x is λE[X²]/(1-ρ)³

Sigmetrics 2005b
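A quick numeric sketch (my example; exponential job sizes assumed) of how the criterion scales with load:

def predictability_criterion(lam, EX, EX2):
    """The slide's criterion lam * E[X^2] / (1-rho)^3."""
    rho = lam * EX
    assert rho < 1
    return lam * EX2 / (1 - rho) ** 3

# Exponential(mean 1) sizes: E[X^2] = 2. A policy P is "predictable" if
# Var[T(x)]^P / x stays below this value for all x.
for lam in (0.5, 0.8, 0.9):
    print(lam, predictability_criterion(lam, 1.0, 2.0))  # 8, 200, 1800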
42
[Classification diagram: policies grouped as Always Predictable, Sometimes Predictable, and Always Unpredictable.]
Sigmetrics 2005b
43
HIGHER MOMENTS
What is the right metric for comparing higher moments across job sizes?
Perf. Eval. 2002, Sigmetrics 2005b
44
OUTLINE
  1. Introduction
  2. Efficiency
  3. Fairness
  4. Robustness
  5. Practical Generalizations
  6. Real-world Case Studies
45
OUTLINE
  1. Introduction
  2. Efficiency
  3. Fairness
  4. Robustness
  5. Practical Generalizations
  6. Real-world Case Studies

M/GI/1 Preempt-Resume
46
M/GI/1 PREEMPT-RESUME
Systems tend to have a limited number of priority
classes
Current work
47
M/GI/1 PREEMPT-RESUME
Many real systems depend on multiple servers.
QUESTA 2005 Perf Eval 2005
48
M/GI/1 PREEMPT-RESUME
Poisson arrivals can be unrealistic
Correlations between arrivals and
completions (open model vs. closed model)
Bursts of arrivals (batch arrivals)
Proposed
Under Submission
49
OUTLINE
  1. Introduction
  2. Efficiency
  3. Fairness
  4. Robustness
  5. Practical Generalizations
  6. Real-world Case Studies

Routers
Web servers
50
WEB SERVERS
need to schedule bandwidth to requests for files
  • Suggested policies: PS, GPS variants, SRPT, SRPT-hybrids, FSP, many others
  • References: Harchol-Balter, Schroeder, Rawat, Kshemkalyani, many others

51
ROUTERS
need to schedule service to flows
  • Suggested policies: FCFS, PS, GPS variants, LAS, LAS-hybrids, many others
  • References: Biersack, Rai, Urvoy-Keller, Bonald, Proutiere, many others

[Diagram: incoming packets → classifier → input queues → transmit queue.]
52
WEB SERVERS and ROUTERS
Identify key metrics
Determine appropriate heuristics
Compare with current approaches
53
OUTLINE
  1. Introduction
  2. Efficiency
  3. Fairness
  4. Robustness
  5. Practical Generalizations
  6. Real-world Case Studies
54
A NEW APPROACH
  1. Group policies based on
  2. Define new metrics
  3. Classify groups on metrics

55
Determine appropriate heuristics
Identify key metrics
56
TIMELINE
To this point · Spring/Summer 2005 · Fall 2005/Winter 2006 · Spring/Summer 2006
57
A THEORETICAL SCHEDULING TOOLBOX
Adam Wierman
  1. Wierman, Harchol-Balter. Insensitive bounds on
    SMART scheduling. Sigmetrics 2005.
  2. Harchol-Balter, Sigman, Wierman. Understanding
    the slowdown of large jobs in an M/GI/1 system.
    Perf. Eval. 2002.
  3. Wierman, Harchol-Balter. Classifying scheduling
    policies with respect to unfairness in an
    M/GI/1. Sigmetrics 2003.
  4. Wierman, Harchol-Balter. Classifying scheduling
    policies with respect to higher moments of
    response time. Sigmetrics 2005.
  5. Harchol-Balter, Osogami, Scheller-Wolf, Wierman.
    Analysis of M/PH/k queues with m priority
    classes. QUESTA (to appear).
  6. Wierman, Osogami, Harchol-Balter, Scheller-Wolf. How many servers are best in a dual-priority FCFS system? Submitted to Perf. Eval.
  7. Schroeder, Wierman, Harchol-Balter. Closed versus open system models: Understanding their impact on performance evaluation and system design. Under submission.

http://www.cs.cmu.edu/~acw/thesis/
58
A policy P is predictable if Var[T(x)]^P / x ≤ λE[X²]/(1-ρ)³ for all x. Otherwise, P is unpredictable.
PREDICTABILITY
ALWAYS PRED.: predictable for all loads and distributions
SOMETIMES PRED.: predictable for some loads and distributions, and unpredictable for other loads and distributions
ALWAYS UNPRED.: unpredictable for all loads and distributions
Sigmetrics 2005
59
SMART
Sigmetrics 2005a
60
EXAMPLES
Definition: A work-conserving policy P is SMART if
  • a job of remaining size greater than x can never have priority over a job of original size x, and
  • a job being run at the server can only be preempted by new arrivals.

[Figure: PS, SJF, and LAS each marked as violating the definition: none of them is SMART.]
Sigmetrics 2005a
61
PROOF TECHNIQUE
Theorem: Any preemptive, size-based policy P is Always Unfair.
Case 1: A finite size, y, receives lowest priority. Case 2: No finite size receives the lowest priority. (2a: PSJF; 2b: other policies)
62
EFFICIENCY METRICS
measure the overall system performance
[Timeline, 1950s to 2000s: analysis of individual policies; optimality results; pairwise comparisons.]
Goal: A simple heuristic that guarantees near-optimal efficiency.
63
WHY NOT ALWAYS USE SRPT?
  • Do not know sizes
  • Cannot use preemption
  • Limited number of classes (see the sketch below)

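One standard workaround for the last limitation, sketched here with made-up thresholds (not from the talk): approximate SRPT by binning job sizes into a few static priority classes.

def assign_class(size, thresholds=(1.0, 4.0, 16.0)):
    """Map a job size to one of len(thresholds)+1 priority classes
    (0 = highest priority). The thresholds are illustrative only."""
    for cls, t in enumerate(thresholds):
        if size <= t:
            return cls
    return len(thresholds)

# Jobs are then served class-by-class (e.g., non-preemptive priority),
# which needs nothing at runtime beyond the class label.
for s in (0.5, 2.0, 10.0, 100.0):
    print(s, '-> class', assign_class(s))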
64
5. Practical Generalizations (system limitations: started)
SYSTEMS TEND TO HAVE A LIMITED NUMBER OF PRIORITY CLASSES
65
5. Practical Generalizations (system limitations: started; multiserver issues: complete)
MANY REAL SYSTEMS DEPEND ON MULTIPLE SERVERS
QUESTA 2005, Perf Eval 2005
66
5. Practical Generalizations (system limitations: started; multiserver issues: complete; general arrival processes: started)
POISSON ARRIVALS CAN BE UNREALISTIC
Correlations between arrivals and completions (open model vs. closed model)
Bursts of arrivals (batch arrivals)
67
CUSTOMERS RENEGE IF FORCED TO WAIT TOO LONG
68
PREDICTABLE SERVICE CAN PREVENT IMPATIENCE
69
PREDICTABLE SERVICE CAN PREVENT IMPATIENCE
70
INACCURATE ESTIMATES OF JOB SIZE CAN AFFECT
PRIORITIZATION
71
CHOOSING A SCHEDULING POLICY IN PRACTICE
  • Metrics of interest
  • Metric 1
  • Metric 2
  • Metric 3

OPTION 1: Try a bunch of policies and choose the best one. (Time consuming; no theoretical guarantees.)
OPTION 2: Ask a scheduling guru. (Different perspectives.)
72
TEMPORAL FAIRNESS
Ticket box office
jobs should be served in close to the order
they arrive
73
TEMPORAL FAIRNESS
grocery store
jobs should be served in close to the order
they arrive
FCFS is the only temporally fair policy, but it's unfair to small jobs?
74
CURRENT METRICS
  • Avi-Itzhak and Levy → order fairness
  • Raz, Levy, and Avi-Itzhak → RAQFM
  • Sandman → DFBF
  • For jobs with equal service times, it is fairer to work on the one that arrived first.
  • For jobs that arrived at the same time, it is fairer to work on the shorter one.
  • difficult to analyze
  • combine notions of sizal and temporal fairness
75
A POSSIBLE METRIC
Consider a job j_x of size x. What percentage of the response time of j_x is spent while a job that arrived after j_x is being served? That is, what percentage of the time is the seniority of j_x violated?
Goal: Develop the metric and classify groups of policies.
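An illustrative way to compute this from a schedule trace (the trace format and all names are my assumptions, not the talk's):

def seniority_violation(trace, arrivals, departures, tagged):
    """Fraction of the tagged job's response time during which some job
    that arrived LATER is in service. `trace` holds (start, end, job_id)
    service intervals."""
    a, d = arrivals[tagged], departures[tagged]
    violated = sum(
        min(end, d) - max(start, a)            # overlap with tagged's sojourn
        for start, end, jid in trace
        if jid != tagged
        and arrivals[jid] > a                  # served job arrived later
        and min(end, d) > max(start, a)        # interval overlaps the sojourn
    )
    return violated / (d - a)

# Tiny example: job 0 arrives at t=0 (size 3); job 1 arrives at t=1 (size 1)
# and is served first under an SRPT-like rule, delaying job 0.
trace      = [(0, 1, 0), (1, 2, 1), (2, 4, 0)]
arrivals   = {0: 0.0, 1: 1.0}
departures = {0: 4.0, 1: 2.0}
print(seniority_violation(trace, arrivals, departures, tagged=0))
# -> 0.25: a quarter of job 0's response time violates its seniority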
76
web servers
IMPATIENCE
customers renege if forced to wait too long
How does this impact SMART policies? Do large
customers renege more than small ones?
77
call centers
PREDICTABLE SERVICE CAN PREVENT IMPATIENCE
78
SCHEDULING IS EVERYWHERE
79
ROBUSTNESS
measure performance in the face of exceptional
inputs and situations
server failures
inexact sizes/priorities
user impatience
Proposed work
80
[Figure: mean response time vs. load in an M/GI/1 queue, comparing FCFS with SMART policies.]
Sigmetrics 2005a
81
Proposed work
82
HIGHER MOMENTS
What is the right metric for comparing higher moments across job sizes?
Metric: normalized cumulants. All cumulants of T(x) are Θ(x) for common preemptive policies and O(x) for all policies.
  • Criterion: ???
  • We conjecture that the criteria are the busy-period moments.

Perf. Eval. 2002, Sigmetrics 2005b
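A generic helper for the normalized-cumulant idea (mine, not from the papers): convert raw moments of T(x) into cumulants via the standard recursion, so each cumulant can then be normalized by x.

from math import comb

def cumulants(moments):
    """moments[i] = E[T^(i+1)]; returns the same number of cumulants,
    using k_n = m_n - sum_{j<n} C(n-1, j-1) * k_j * m_{n-j}."""
    m = [0.0] + list(moments)               # 1-indexed raw moments
    k = [0.0] * len(m)                      # 1-indexed cumulants
    for n in range(1, len(m)):
        k[n] = m[n] - sum(comb(n - 1, j - 1) * k[j] * m[n - j]
                          for j in range(1, n))
    return k[1:]

# Sanity check with Exponential(1): raw moments n!, cumulants (n-1)!.
print(cumulants([1, 2, 6]))  # -> [1.0, 1.0, 2.0]: mean, variance, third cumulant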