Title: Policy-based CPU-scheduling in VOs
1. Policy-based CPU-scheduling in VOs
Catalin Dumitrescu, Mike Wilde, Ian Foster
2. Some Background (Grid-style Monitoring)
3. Some Background (Policy-based Sched)
4. Introduction
[Figure: jobs flow from per-VO queues (V-Queue) in VO-A and VO-B to site queues (S-Queue) at Site 1, Site 2, and Site 3]
- For example, in the sciences, there may be hundreds of institutions and thousands of individual investigators that collectively control tens or hundreds of thousands of computers and associated storage systems
- Each individual institution may participate in, and contribute resources to, multiple collaborative projects that can vary widely in scale, lifetime, and formality
5. Initial Model
[Figure: the initial model. V-Queues and V-PEPs for VO-A and VO-B; Verifiers mediate between them and the S-PEPs and S-Queues at Site 1, Site 2, and Site 3]
- Assumption: participants may wish to delegate to one or more VOs the right to use certain resources subject to local policy (and service level agreements), and each VO then wishes to use those resources subject to VO policy
- How are such local and VO policies to be expressed, discovered, interpreted, and enforced?
6. Talk Overview
- Part I: Model Detailing
  - Architecture / Model description
  - Policy language definition (syntax & semantics)
- Part II: Specific work
  - Policy case scenarios (research focus)
  - Algorithms
  - Simulator & Simulated model
  - Dimension identification
- Part III: Simulations & Implementation issues
  - Simulation results
  - Related work
  - Rolling out in Atlas/Grid3
  - Future work & Conclusions
8. Simplified Model
- Composed of
  - R: compute resource (several individual compute elements)
  - M: associated manager (designed to control resource R's states)
  - P: policy set (a finite list of intents expressed by administrators)
- Some Rules
  - M is authorized and responsible for enforcing P with respect to R
  - P is composed only of rules that have direct correlation with R
- And the Mapping to a Concrete Case Scenario
  - a cluster with 1 head-node and several worker-nodes
  - compute resources are managed locally by one or several pooling and/or queuing software managers (e.g., Condor, PBS, LSF)
9. Extended Model
- Composed of
  - G: several sites, where each of them is of type S
  - M: associated manager(s) (designed to control S's states by delegation)
  - P: policy set, a finite list of intents expressed by administrators
- Some Rules
  - M is authorized and responsible for distributing P with respect to each R
  - P is composed of rules that have direct correlation with G
  - each distributed subset P_R is composed only of rules that have direct correlation with its R
- And the Mapping to a Concrete Case Scenario
  - a set of clusters of type S
  - the monitoring and policy mechanisms are VO-Centric Ganglia (as prototype)
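
To make the two models concrete, the following is a minimal sketch in Python of the structures they name; the class names, fields, and the dictionary-based rule format are illustrative assumptions, not part of the deck.

  # Sketch of the Simplified (Site) and Extended (Grid) models above.
  from dataclasses import dataclass, field

  @dataclass
  class Site:                 # the Simplified model: R, M, P at one site
      name: str
      elements: list          # R: the individual compute elements
      policy: list = field(default_factory=list)  # P: rules correlated with R

  @dataclass
  class Grid:                 # the Extended model: G, several sites of type S
      sites: list
      policy: list = field(default_factory=list)  # P: rules correlated with G

      def distribute(self):
          """M's role: push down to each site the subset P_R that correlates
          with that site's resource (the rule format here is assumed)."""
          for s in self.sites:
              s.policy = [r for r in self.policy if r.get("resource") == s.name]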
10. Refined Prototype Model
11. Policy Language Definition
- Two types of policies
  - absolute: its arguments are mapped to VOs and site resources
  - relative: its arguments are mapped to groups and VOs' resources
- Two types of constraints (open problem regarding enforcement)
  - long-term hard limitations and short-term soft limitations
  - identified by position in the presented syntax
- Proposed interpretations (to avoid ambiguities)
  - long-term hard limitations: averaged over a period (at most); sites provide (if requested) at most the specified fraction over the specified time interval
  - short-term soft limitations: upper limits over a period (a maximum); sites may provide up to the specified fraction over the specified time interval, but without any guarantee in place
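
A small executable sketch of these two interpretations, assuming usage is sampled as (timestamp, fraction) pairs; the function and parameter names are illustrative, not from the deck.

  # Sketch of the two constraint interpretations (names assumed).
  def hard_limit_ok(samples, fraction, period):
      """Long-term hard limit: usage AVERAGED over `period` must stay
      at or below `fraction` (samples assumed non-empty)."""
      window = [u for t, u in samples if t >= samples[-1][0] - period]
      return sum(window) / len(window) <= fraction

  def soft_limit_ok(samples, fraction, period):
      """Short-term soft limit: an upper cap; no sample within `period`
      may exceed `fraction`, and even below it there is no guarantee."""
      window = [u for t, u in samples if t >= samples[-1][0] - period]
      return max(window) <= fraction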
12. Simple/Proposed Syntax
- Two identifiable forms
- absolute policy
  - resource (RESOURCE, ENTITY, LIST_POLICY)
  - Examples
    - resource (R, V1, (year, 20), (5minutes, 60))
    - resource (R, V2, (year, 80), (5minutes, 90))
- relative policy
  - subset (RESOURCE, ENTITY, GROUP, LIST_POLICY)
  - Examples
    - subset (R, V1, G1, (year, 30), (5minutes, 100))
    - subset (R, V1, G2, (year, 70), (5minutes, 100))
- Note: definitions and examples are independent (i.e., R has different interpretations in the two examples)
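
As a hedged illustration, the two statement forms could be parsed into Python structures as below; the regular expressions and field names are my assumptions about the syntax as written, not a published grammar.

  import re

  # Sketch parser for the two policy forms above; purely illustrative.
  STMT = re.compile(r"(resource|subset)\s*\((.*)\)\s*$")
  PAIR = re.compile(r"\(\s*(\w+)\s*,\s*(\d+)\s*\)")

  def parse_policy(line):
      kind, body = STMT.match(line).groups()
      limits = [(p, int(v)) for p, v in PAIR.findall(body)]  # LIST_POLICY
      head = [f.strip() for f in PAIR.sub("", body).split(",") if f.strip()]
      if kind == "resource":  # absolute form: (RESOURCE, ENTITY, ...)
          return {"kind": "absolute", "resource": head[0], "entity": head[1],
                  "limits": limits}
      return {"kind": "relative", "resource": head[0], "entity": head[1],
              "group": head[2], "limits": limits}

  parse_policy("resource (R, V1, (year, 20), (5minutes, 60))")
  # -> {'kind': 'absolute', 'resource': 'R', 'entity': 'V1',
  #     'limits': [('year', 20), ('5minutes', 60)]}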
13. Motivation for the Language
- Users can burn their allocation faster or slower, controlled by the two limits
- Possible to map to site RMs with node allocation policies (e.g., Condor, OpenPBS & LSF)
14. Policy Case Scenarios
[Chart: CPU-usage policy levels for VO1 and VO2; values shown are 20, 60, 80, 90, and 99]
15. Policy Case Scenarios
[Chart: a second scenario for VO1 and VO2 over the same policy levels (20, 60, 80, 90, 99)]
16. Implemented Algorithms (Site)
for each Gi with EPi, BPi, BEi do
  Case 1: fill BPi + BEi
  if (Sum(BAj) == 0) && (BAi < BPi) && (Qi has jobs) then
    schedule a job from some Qi to the least loaded site
  Case 2: BAi < BPi (resources available)
  else if (Sum(BAk) < TOTAL) && (BAi < BPi) && (Qi has jobs) then
    schedule a job from some Qi to the least loaded site
  Case 3: fill EPi (resource contention)
  else if (Sum(BAk) == TOTAL) && (BAi < EPi) && (Qi exists) then
    if (j exists such that BAj > EPj) then
      stop scheduling jobs for VOj
  Need to fill with extra jobs?
  if (BAi < EPi + BEi) then
    schedule a job from some Qi to the least loaded site
  if (EAi < EPi) && (Qi has jobs) then
    schedule additional backfill jobs
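
A minimal executable sketch of this site-level loop, assuming per-group state is tracked as the (EP, BP, BE, BA) fractions plus a job queue; the Group class, function names, and action-tuple convention are illustrative, not from the deck.

  # Sketch of one S-PEP pass over the groups (Cases 1 and 2 share an action).
  from dataclasses import dataclass, field

  TOTAL = 1.0  # the whole site capacity, as a fraction

  @dataclass
  class Group:
      name: str
      ep: float   # long-term (hard) entitlement fraction
      bp: float   # short-term (soft) burst fraction
      be: float   # extra backfill allowance beyond EP
      ba: float = 0.0                              # fraction currently allocated
      queue: list = field(default_factory=list)    # pending jobs (Qi)

  def site_pep_step(groups):
      """One monitoring pass; yields (action, group, job) decisions."""
      used = sum(g.ba for g in groups)             # Sum(BAk) in the deck's notation
      for g in groups:
          # Cases 1 and 2: idle or spare capacity, so fill toward BP
          if used < TOTAL and g.ba < g.bp and g.queue:
              yield ("schedule", g.name, g.queue.pop(0))
          # Case 3: contention, so stop any group exceeding its entitlement EP
          elif used >= TOTAL and g.ba < g.ep:
              for other in groups:
                  if other.ba > other.ep:
                      yield ("stop", other.name, None)
          # Backfill tail: extra jobs up to EP + BE, with no guarantee
          elif g.ba < g.ep + g.be and g.queue:
              yield ("backfill", g.name, g.queue.pop(0))

BA values are assumed to be refreshed from monitoring between passes, so a single pass schedules at most one job per group.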
17Implemented Algorithms (VO)
for each VOi with EPi   Case 1 fill BPi   if
(Sum(BAj) 0) (BAi lt BPi) (Qi has jobs)
then     release a job from some Qi   Case 2
BAi lt BPi (resources available) Â Â else if
(Sum(BAk) lt TOTAL) (BAi lt BPi) (Qi has jobs)
then     release a job from some Qi  Case 3
fill EPi (resource contention) Â Â else if
(Sum(BAk) TOTAL) (BAi lt EPi) (Qi has jobs)
then     if (j exists such that BAj gt EPj) then
      stop scheduling jobs for VOj
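
The VO-level loop differs from the site-level loop only in its action: the V-PEP releases jobs from the VO queue into the grid rather than placing them at a site. A sketch reusing the Group structure from the previous example (names again assumed):

  def vo_pep_step(vos):
      """One V-PEP pass over the VOs; mirrors site_pep_step but releases jobs."""
      used = sum(v.ba for v in vos)
      for v in vos:
          if used < TOTAL and v.ba < v.bp and v.queue:
              yield ("release", v.name, v.queue.pop(0))    # Cases 1 and 2
          elif used >= TOTAL and v.ba < v.ep:
              for other in vos:                            # Case 3
                  if other.ba > other.ep:
                      yield ("stop", other.name, None)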
18. Simulations
- Structures
  - 2 VOs, 2 groups, 1 planner with 3 clusters
  - 6 VOs, 3 groups, 2 planners with 10 clusters
- Model
  - S-PEP
    - continuous monitoring
    - controls jobs by sending high-level commands to RMs
  - V-PEP
    - gatekeeper type (access control point)
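
A toy driver tying the two PEP sketches together, purely to illustrate how the simulated model alternates V-PEP and S-PEP passes; the actual simulator is not described in the deck.

  # Illustrative simulation loop over the sketches above (names assumed).
  def simulate(vos, groups, steps=100):
      log = []
      for t in range(steps):
          log.extend(("V-PEP", t) + action for action in vo_pep_step(vos))
          log.extend(("S-PEP", t) + action for action in site_pep_step(groups))
          # here, monitoring would refresh each ba value from the RMs
      return log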
19. Talk Overview
- Part I: Model Detailing
  - Architecture / Model description
  - Policy language definition (syntax & semantics)
- Part II: Specific work
  - Policy case scenarios (research focus)
  - Algorithms
  - Simulator & Simulated model
  - Dimension identification
- Part III: Simulations & Implementation issues
  - Simulation results
  - Related work
  - Rolling out in Atlas/Grid3
  - Future work & Conclusions
20. Initial Simulations (Settings)
2 VOs, 2 groups, 1 planner with 3 clusters (1 x (1, 2, 4), 1 x (1, 2, 4, 8), 1 x (1, 2, 4, 8) CPUs)
[Plots: CPU usage policy and job statistics for VO0 and VO1]
21. More Simulations
6 VOs, 3 groups, 2 planners with 20 clusters (1 x (1, 2, 4), ... CPUs)
22. Overall CPU Usage
23. Per Group CPU Usage
24. Simulation Variations / Dimensions
- Algorithms
- Technical solution for mappings
- Site-level trust
- Centralized vs. decentralized
- Complete vs. inaccurate (stale) information
26. Future Work
- Mainly, analysis along the several dimensions identified above
27. Glimpse into Policy Setup
- Negotiation & Advance Resource Reservation
[Diagram: SLA-based policy setup. A User submits jobs through the VO's V-PEP to the V-RM; the VO's V-AP and an SLA initiator negotiate VM-SLA and VN-SLA documents, which are matched against SM-SLA and SN-SLA documents and policy rules held at the S-APs and S-PEPs that govern the RMs and resources of Site A and Site B]
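
A speculative sketch of how the SLA documents named in the diagram might be represented; every field here is an assumption, since the deck shows only the document names.

  # Assumed shape for the VM-SLA / VN-SLA / SM-SLA / SN-SLA documents.
  from dataclasses import dataclass

  @dataclass
  class SLA:
      kind: str           # "VM-SLA" | "VN-SLA" | "SM-SLA" | "SN-SLA"
      issuer: str         # the VO or site that initiated the agreement
      resource: str       # the resource the agreement covers
      hard_limit: tuple   # e.g., ("year", 20): long-term average cap
      soft_limit: tuple   # e.g., ("5minutes", 60): short-term burst cap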
28. Conclusions