Title: Teradata Priority Scheduler
1 Teradata Priority Scheduler
2 Current Situation
3 New Situation
4 Priority Scheduler Capabilities
- Mixed workload management on a single platform
- Control resource sharing among different applications
- Enable more consistent response times
- Improved service effectiveness for more important work
5 Example - Service Effectiveness
[Chart: throughput achieved with and without High/Low priorities. Query-per-second rate for 100,000 single-customer reads, comparing "Same Priority" against "High vs Low" at 0, 5, 10, 20, and 30 background streams; y-axis: queries per second, 0 to 800.]
6 Example - Response Time Consistency
[Chart: average query time for 100,000 single-customer reads with and without High/Low priorities (actual test), comparing "Same Priority" against "High vs Low" at 0, 5, 10, 20, and 30 background streams; y-axis: average time in seconds, 0.00 to 0.25.]
7 Priority Scheduler is Not
- A hardware segmentation tool
  - Priority Scheduler performs logical segmentation, not physical segmentation
- A group management tool
  - Priority Scheduler provides relative priority
- A CPU optimizer
  - Priority Scheduler cannot assist poorly written queries
8 Priority Scheduler Components
[Diagram: Resource Partitions RP0 (Default) through RP4, each containing Performance Groups L, M, H, and R (L1/M1/H1/R1 in RP1, L4/M4/H4/R4 in RP4), all managed through the schmon tool. Performance Periods tie times of day to Allocation Groups (6am-8am: AG22, 8am-11pm: AG20, 11pm-6am: AG21, 8am-8am: AG1). Allocation Group weights: AG1 = 5, AG20 = 20, AG21 = 40, AG22 = 5, AG30 = 5, AG31 = 10, AG32 = 20.]
9 Summary of Components (for your reference)
- Resource Partition (RP)
  - High-level resource and priority grouping
  - Up to 5 may be defined
  - Partition 0 is provided
- Performance Group (PG)
  - 8 must be defined (values 0-7) within each RP
  - Only PGs with values of 0, 2, 4, and 6 are for users (1, 3, 5, and 7 are for system use)
  - The PG name matches the acctid string on the logon statement and must be unique system-wide
- Performance Period (PP)
  - 1 to 5 per PG
  - Links a PG to an AG's weight and policy
  - Makes possible changes in priority weight/policy by time or resource usage
- Allocation Group (AG)
  - Carries the weight
  - Determines the policy and set division type
  - PGs may share the same Allocation Groups
(The relationships among these four components are sketched below.)
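To make these relationships concrete, here is a minimal sketch in Python. It is an illustrative data model only, not Teradata code or the schmon configuration format; the class names and sample values are assumptions chosen to mirror the bullets above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AllocationGroup:            # AG: carries the weight and the policy
    number: int
    weight: int
    policy: str = "DEFAULT"       # DEFAULT, IMMEDIATE, RELATIVE, or ABSOLUTE

@dataclass
class PerformancePeriod:          # PP: links a PG to an AG's weight and policy
    allocation_group: AllocationGroup
    end_time: Optional[int] = None        # e.g. 1700 for a time-based period
    cpu_seconds: Optional[int] = None     # e.g. 100 for a usage-based period

@dataclass
class PerformanceGroup:           # PG: what the user's account string names
    name: str                     # must be unique system-wide, e.g. "M1"
    value: int                    # 0-7; 0, 2, 4, 6 are the user PGs
    periods: List[PerformancePeriod] = field(default_factory=list)  # 1 to 5 per PG

@dataclass
class ResourcePartition:          # RP: high-level grouping; up to 5, partition 0 provided
    number: int
    name: str
    groups: List[PerformanceGroup] = field(default_factory=list)

# A tiny assembly mirroring the default partition's medium group.
ag20 = AllocationGroup(number=20, weight=20)
rp0 = ResourcePartition(0, "Default",
                        [PerformanceGroup("M", 2, [PerformancePeriod(ag20)])])
print(rp0.groups[0].periods[0].allocation_group.weight)   # 20
```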
10 Performance Groups
[Diagram: Resource Partition RP1 with Performance Groups L1, M1, H1, and R1. Users are assigned to a PG; at logon, users are associated with an AG via a Performance Period (6am-8am: AG22, 8am-11pm: AG20, 11pm-6am: AG21). Allocation Group weights: AG20 = 20, AG21 = 40, AG22 = 5. Users execute at the weight of the assigned AG.]
11 Performance Periods by Time
- T is the type (Time); VALUE is the end time
- Define periods with the Performance Group
- Up to four per Performance Group
[Diagram: Performance Period 1, end time 1700 hours, uses Allocation Group 9 (weight 60, IMMEDIATE); Performance Period 2, end time 2300 hours, uses Allocation Group 5 (weight 20, DEFAULT); Performance Period 3, end time 0700 hours, uses Allocation Group 6 (weight 10, ABSOLUTE).]
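Below is a rough sketch, in Python, of how an end-time value can select the active period. It is a model of the idea only (the actual evaluation happens inside Priority Scheduler), using the three periods and allocation groups shown above.

```python
# Illustrative model of time-based Performance Periods (not schmon syntax).
# Each entry is (end_time_hhmm, allocation_group, weight, policy), taken from the slide above.
PERIODS = [
    (1700, 9, 60, "IMMEDIATE"),
    (2300, 5, 20, "DEFAULT"),
    (700,  6, 10, "ABSOLUTE"),
]

def active_period(now_hhmm: int):
    """Return the period whose end time is the next to pass, wrapping past midnight."""
    ordered = sorted(PERIODS, key=lambda p: p[0])
    for period in ordered:
        if now_hhmm < period[0]:
            return period
    return ordered[0]              # past the last end time, wrap to the earliest

print(active_period(1000))   # (1700, 9, 60, 'IMMEDIATE') - daytime work runs under AG 9
print(active_period(200))    # (700, 6, 10, 'ABSOLUTE')   - overnight work runs under AG 6
```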
12 Performance Periods by CPU Usage
Automatic change based on CPU usage since session logon.
[Diagram: timeline from 0 to 125 CPU seconds. The user logs on under Performance Period 0 (usage 100 seconds), Allocation Group 9 (weight 20, DEFAULT). When the session reaches 100 seconds of accumulated CPU, it shifts to Performance Period 1 (usage 0 seconds), Allocation Group 33 (weight 5, DEFAULT).]
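The same idea, sketched for the usage-based case: a hypothetical helper that picks the allocation group from the CPU time a session has accumulated since logon. The 100-second milestone and the two allocation groups come from the slide above; everything else is illustrative.

```python
# Illustrative model of a usage-based Performance Period change.
CPU_MILESTONE_SECONDS = 100          # threshold attached to Performance Period 0

def allocation_group_for(session_cpu_seconds: float) -> dict:
    """AG 9 (weight 20) until the session has used 100 CPU seconds, then AG 33 (weight 5)."""
    if session_cpu_seconds < CPU_MILESTONE_SECONDS:
        return {"ag": 9, "weight": 20, "policy": "DEFAULT"}
    return {"ag": 33, "weight": 5, "policy": "DEFAULT"}

for used in (0, 50, 100, 125):       # the points marked on the slide's timeline
    print(used, allocation_group_for(used))
```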
13 Allocation Group Policies
Unrestricted policies:
- DEF (DEFAULT)
  - Keeps track of past usage of each process
  - Seeks out and adjusts over- or under-consumers
  - May benefit complex queries or mixed work
  - Is the default policy
- IMD (IMMEDIATE)
  - All processes in the Allocation Group are treated as equal
  - Ignores uneven process usage
  - Preferable for short, repetitive work
Restricted policies:
- ABS (ABSOLUTE)
  - The assigned weight of the allocation group becomes the ceiling value
  - This ceiling is the same no matter what other allocation groups are active
  - The ceiling has an impact only when the allocation of CPU would otherwise be greater than the ABS value
- REL (RELATIVE)
  - Makes the Allocation Group's weight value the ceiling on CPU
  - This ceiling changes based on which allocation groups are active at the time
14 Same Job - Different Policies
All tests are run in M with the same assigned weight of 10: 10 concurrent streams of short 3-second queries (actual tests).
[Chart: total time in seconds for four tests, one per policy (DEFAULT, IMMEDIATE, RELATIVE, ABSOLUTE). The unrestricted policies finish in roughly 87-89 seconds using up to 100% of CPU; the restricted policies take 585 and 719 seconds because RELATIVE is limited to 14% of CPU and ABSOLUTE to 10%.]
Assume only M, H, and R are active and the sum of their weights is 70. The relative weight of M would then be 10/70, or 14%.
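The arithmetic behind that note, and behind the RELATIVE vs. ABSOLUTE ceilings on the previous slide, is straightforward. A small sketch follows; the 20/40 split between H and R is an assumption for illustration, since the slide states only that the three active weights sum to 70.

```python
# Relative-weight arithmetic for the test above (H and R weights are assumed values).
active_weights = {"M": 10, "H": 20, "R": 40}          # only these groups are active

total = sum(active_weights.values())                  # 70
relative_m = active_weights["M"] / total              # 10/70

print(f"RELATIVE ceiling for M: {relative_m:.0%}")        # ~14% of CPU; shifts as groups activate
print(f"ABSOLUTE ceiling for M: {active_weights['M']}%")  # fixed 10%, whatever else is active
# DEFAULT and IMMEDIATE impose no ceiling, which is why those two tests finished so much faster.
```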
15 Implementation - Basic Steps
- Understand the goals of using the DW
  - What workloads, at what priority, at what time
- Set up Service Level Agreements (SLAs) for different categories of work
- Understand current performance and usage patterns before making changes
- Take small steps and grow
16 Key Components In Place and Active
[Diagram: the default Resource Partition (RP 0) comes with Performance Groups L, M, H, and R (Low, Med, High, Rush) on the user PG values 0, 2, 4, and 6 (values 1, 3, 5, and 7 are reserved), each tied through a Performance Period to Allocation Groups 1 through 4.]
All users go to M in Partition 0 by default. If few priority divisions are needed, use Partition 0 only.
17 Be Prepared - Collect Data
- On resource usage patterns
  - CPU and I/O by user, by hour (source: ampusage)
  - Overall system usage (source: resusage)
  - Current query wall-clock time
- On current performance of the environment
  - Hourly CPU consumption by workload by day
  - Number of queries per hour per session
  - Complexity of queries (CPU- versus I/O-intensive)
- As stated so many times before: EXPLAIN, EXPLAIN...
18Adding Resource Partition 1 (RP1)
1.Assign an RP name 2.Determine weight 3.Define
Allocation Groups with weights 4.Define name
all 8 Performance Groups 5.Modify users account
strings to point to new performance group names
CriticalDS
0
6
2
4
H1
M1
L1
R1
1
7
3
5
R1
H1
L1
M1
43
40
42
41
19 SCHMON Utility - A UNIX-Based Facility
- PSF is managed by issuing UNIX commands
- UNIX root privileges are required
- Two key UNIX commands:
  - schmon -d displays the current PSF settings
  - schmon -m reports current resource usage by group
- All changes take place immediately across all nodes
- No perceivable performance overhead
20 Summary
- Users are associated with allocation groups when they log on
- The relative relationship among the assigned weights of active allocation groups determines CPU allocations
- Partition boundaries disappear once weight is calculated
- A constant assessment of recent resource usage takes place
- Unconsumed resources are shared based on relative weight differences (a rough model is sketched below)
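As a rough model of the last two points (illustrative only; the real scheduler's accounting is internal to Teradata), CPU can be thought of as a weighted split in which any share a group cannot consume is re-divided among the groups that still want CPU:

```python
# Illustrative model: weighted CPU split where unconsumed share is redistributed
# among the allocation groups that still have demand.
def cpu_shares(weights: dict, demands: dict) -> dict:
    """weights: AG -> assigned weight; demands: AG -> fraction of the machine it could use."""
    shares = {ag: 0.0 for ag in weights}
    active = set(weights)
    remaining_cpu = 1.0
    while remaining_cpu > 1e-9 and active:
        total_weight = sum(weights[ag] for ag in active)
        satisfied = set()
        for ag in active:
            offer = remaining_cpu * weights[ag] / total_weight   # proportional to weight
            take = min(offer, demands[ag] - shares[ag])
            shares[ag] += take
            if demands[ag] - shares[ag] <= 1e-9:
                satisfied.add(ag)                                 # this group needs no more
        remaining_cpu = 1.0 - sum(shares.values())
        if not satisfied:                                         # everyone still wants more
            break
        active -= satisfied                                       # leftovers go to the rest
    return shares

# AG21 (weight 40) only needs 10% of the machine; AG20 and AG22 absorb its unused
# share in proportion to their own weights (20 : 5).
print(cpu_shares({"AG20": 20, "AG21": 40, "AG22": 5},
                 {"AG20": 1.0, "AG21": 0.10, "AG22": 1.0}))
```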