Title: CSE 5/7381 Computer Architecture Lecture 2 - Metrics and Pipelining
1CSE 5/7381 Computer Architecture Lecture 2 - Metrics and Pipelining
Arvind (MIT), Krste Asanovic (MIT/UCB), Joel Emer (Intel/MIT), James Hoe (MIT/CMU), David Patterson (UCB), John Kubiatowicz (UCB), Fatih Kocan (SMU)
2Review
- Other fields often borrow ideas from architecture
- Quantitative Principles of Design
- Take Advantage of Parallelism
- Principle of Locality
- Focus on the Common Case
- Amdahl's Law
- The Processor Performance Equation
- Careful, quantitative comparisons
- Define, quantify, and summarize relative performance
- Define and quantify relative cost
- Define and quantify dependability
- Define and quantify power
- Culture of anticipating and exploiting advances in technology
- Culture of well-defined interfaces that are carefully implemented and thoroughly checked
3Metrics used to Compare Designs
- Cost
- Die cost and system cost
- Execution Time
- average and worst-case
- Latency vs. Throughput
- Energy and Power
- Also peak power and peak switching current
- Reliability
- Resiliency to electrical noise, part failure
- Robustness to bad software, operator error
- Maintainability
- System administration costs
- Compatibility
- Software costs dominate
4Cost of Processor
- Design cost (Non-Recurring Engineering costs, NRE)
- dominated by engineer-years (roughly $200K per engineer-year)
- also mask costs (exceeding $1M per spin)
- Cost of die
- die area
- die yield (maturity of manufacturing process, redundancy features)
- cost/size of wafers
- die cost roughly f(die area^4) with no redundancy (see the sketch below)
- Cost of packaging
- number of pins (signal + power/ground pins)
- power dissipation
- Cost of testing
- built-in test features?
- logical complexity of design
- choice of circuits (minimum clock rates, leakage currents, I/O drivers)
- Architect affects all of these
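To make the area scaling above concrete, here is a minimal Python sketch; the baseline cost, baseline area, and die sizes are illustrative assumptions, not process data.

    # Rough illustration: die cost grows roughly as (die area)^4 with no redundancy.
    # Baseline area and cost below are assumed values for illustration only.
    def relative_die_cost(area_mm2, base_area_mm2=100.0, base_cost=10.0):
        """Scale cost by (area / base_area)**4 relative to an assumed baseline die."""
        return base_cost * (area_mm2 / base_area_mm2) ** 4

    for area in (50, 100, 200, 400):
        print(f"{area} mm^2 die -> relative cost {relative_die_cost(area):.1f}")

Doubling the die area raises the sketch's cost by 16x, which is why the architect's area choices matter so much.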
5System-Level Cost Impacts
- Power supply and cooling
- Support chipset
- Off-chip SRAM/DRAM/ROM
- Off-chip peripherals
6What is Performance?
- Latency (or response time or execution time)
- time to complete one task
- Bandwidth (or throughput)
- tasks completed per unit time
7Definition: Performance
- Performance is in units of things per sec
- bigger is better
- If we are primarily concerned with response time: Performance(X) = 1 / Execution time(X)
- "X is n times faster than Y" means n = Performance(X) / Performance(Y) = Execution time(Y) / Execution time(X) (see the sketch below)
8Performance Guarantees
[Figure: execution rate versus inputs for designs A, B, and C]
- Average Rate: A > B > C
- Worst-case Rate: A < B < C
9Types of Benchmark
- Synthetic Benchmarks
- Designed to have same mix of operations as real workloads, e.g., Dhrystone, Whetstone
- Toy Programs
- Small, easy to port. Output often known before program is run, e.g., Nqueens, Bubblesort, Towers of Hanoi
- Kernels
- Common subroutines in real programs, e.g., matrix multiply, FFT, sorting, Livermore Loops, Linpack
- Simplified Applications
- Extract main computational skeleton of real application to simplify porting, e.g., NAS parallel benchmarks, TPC
- Real Applications
- Things people actually use their computers for, e.g., car crash simulations, relational databases, Photoshop, Quake
10Performance What to measure
- Usually rely on benchmarks vs. real workloads
- To increase predictability, collections of benchmark applications -- benchmark suites -- are popular
- SPECCPU: popular desktop benchmark suite
- CPU only, split between integer and floating-point programs
- SPECint2000 has 12 integer programs, SPECfp2000 has 14 floating-point programs
- SPECCPU2006 to be announced Spring 2006
- SPECSFS (NFS file server) and SPECWeb (web server) added as server benchmarks
- Transaction Processing Council measures server performance and cost-performance for databases
- TPC-C: complex query for Online Transaction Processing
- TPC-H: models ad hoc decision support
- TPC-W: a transactional web benchmark
- TPC-App: application server and web services benchmark
11Summarizing Performance
12... depends who's selling
[Table: average throughput, throughput relative to A, and throughput relative to B for the example systems]
13Summarizing Performance over Set of Benchmark
Programs
- Arithmetic mean of execution times t_i (in seconds): (1/n) Σ_i t_i
- Harmonic mean of execution rates r_i (MIPS/MFLOPS): n / Σ_i (1/r_i)
- Both equivalent to a workload where each program is run the same number of times (see the sketch below)
- Can add weighting factors to model other workload distributions
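A minimal sketch of the two summaries above; the benchmark times and rates are illustrative assumptions.

    # Arithmetic mean of execution times t_i, harmonic mean of execution rates r_i.
    # Both correspond to a workload in which each program runs the same number of times.
    times = [2.0, 4.0, 8.0]          # seconds per benchmark (assumed)
    rates = [500.0, 250.0, 125.0]    # MFLOPS per benchmark (assumed)

    arith_mean_time = sum(times) / len(times)                    # (1/n) * sum(t_i)
    harm_mean_rate = len(rates) / sum(1.0 / r for r in rates)    # n / sum(1/r_i)
    print(arith_mean_time, harm_mean_rate)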
14Normalized Execution Time and Geometric Mean
- Measure speedup relative to reference machine
- ratio = t_Ref / t_A
- Average the time ratios using the geometric mean: (Π_i ratio_i)^(1/n)
- Insensitive to machine chosen as reference
- Insensitive to run time of individual benchmarks
- Used by SPEC89, SPEC92, SPEC95, ..., SPEC2006
- ... but beware that the choice of reference machine can suggest what is a "normal" performance profile (see the sketch below)
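A sketch of the geometric-mean summary; the reference and test times are assumed values.

    import math

    # Per-benchmark speedup ratio t_ref / t_A, summarized with the geometric mean.
    t_ref = [10.0, 20.0, 40.0]   # reference machine times in seconds (assumed)
    t_a = [5.0, 25.0, 10.0]      # machine A times in seconds (assumed)

    ratios = [ref / a for ref, a in zip(t_ref, t_a)]
    geo_mean = math.prod(ratios) ** (1.0 / len(ratios))   # nth root of the product of ratios
    print(geo_mean)

The ratio of two machines' geometric means comes out the same whichever machine is used as the reference, which is the "insensitive to reference" property above.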
15Vector/Superscalar Speedup
- 100 MHz Cray J90 vector machine versus 300 MHz Alpha 21164
- LANL Computational Physics Codes, Wasserman, ICS96
- Vector machine peaks on a few codes?
16Superscalar/Vector Speedup
- 100 MHz Cray J90 vector machine versus 300 MHz Alpha 21164
- LANL Computational Physics Codes, Wasserman, ICS96
- Scalar machine peaks on one code?
17How to Mislead with Performance Reports
- Select pieces of workload that work well on your design, ignore others
- Use unrealistic data set sizes for application (too big or too small)
- Report throughput numbers for a latency benchmark
- Report latency numbers for a throughput benchmark
- Report performance on a kernel and claim it represents an entire application
- Use 16-bit fixed-point arithmetic (because it's fastest on your system) even though application requires 64-bit floating-point arithmetic
- Use a less efficient algorithm on the competing machine
- Report speedup for an inefficient algorithm (bubblesort)
- Compare hand-optimized assembly code with unoptimized C code
- Compare your design using next year's technology against competitor's year-old design (1% performance improvement per week)
- Ignore the relative cost of the systems being compared
- Report averages and not individual results
- Report speedup over unspecified base system, not absolute times
- Report efficiency, not absolute times
- Report MFLOPS, not absolute times (use inefficient algorithm)
- David Bailey, "Twelve Ways to Fool the Masses When Giving Performance Results for Parallel Supercomputers"
18Benchmarking for Future Machines
- Variance in performance for parallel architectures is going to be much worse than for serial processors
- SPECcpu means only really work across very similar machine configurations
- What is a good benchmarking methodology?
19Power and Energy
- Energy to complete operation (Joules)
- Corresponds approximately to battery life
- (Battery energy capacity actually depends on rate of discharge)
- Peak power dissipation (Watts = Joules/second)
- Affects packaging (power and ground pins, thermal design)
- di/dt, peak change in supply current (Amps/second)
- Affects power supply noise (power and ground pins, decoupling capacitors)
20Peak Power versus Lower Energy
[Figure: power versus time for systems A and B, showing Peak A and Peak B; integrate the power curve to get energy]
- System A has higher peak power, but lower total energy
- System B has lower peak power, but higher total energy (see the sketch below)
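A small sketch of "integrate the power curve to get energy"; the power samples and time step are made-up values.

    # Energy (Joules) = integral of power (Watts) over time (seconds).
    dt = 0.001                                   # 1 ms between samples (assumed)
    power_samples = [1.0, 4.0, 4.0, 1.0, 0.5]    # Watts (assumed)

    energy = sum(p * dt for p in power_samples)  # rectangle-rule integration
    peak_power = max(power_samples)
    print(energy, peak_power)   # a design can have the higher peak yet the lower total energy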
21A "Typical" RISC ISA
- 32-bit fixed format instruction (3 formats)
- 32 32-bit GPRs (R0 contains zero, DP take pair)
- 3-address, reg-reg arithmetic instruction
- Single address mode for load/store: base + displacement
- no indirection
- Simple branch conditions
- Delayed branch
see: SPARC, MIPS, HP PA-RISC, DEC Alpha, IBM PowerPC, CDC 6600, CDC 7600, Cray-1, Cray-2, Cray-3
22Example: MIPS
Register-Register: Op [31:26] | Rs1 [25:21] | Rs2 [20:16] | Rd [15:11] | Opx [10:0]
Register-Immediate: Op [31:26] | Rs1 [25:21] | Rd [20:16] | immediate [15:0]
Branch: Op [31:26] | Rs1 [25:21] | Rs2/Opx [20:16] | immediate [15:0]
Jump / Call: Op [31:26] | target [25:0]
(A decoding sketch follows below.)
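A hedged sketch of pulling the register-register fields listed above out of a 32-bit instruction word; the helper name is made up, and the sample word encodes add r3,r1,r2 in standard MIPS.

    # Decode the R-type (register-register) fields from a 32-bit MIPS instruction word.
    def decode_rtype(word):
        op = (word >> 26) & 0x3F    # bits 31:26
        rs1 = (word >> 21) & 0x1F   # bits 25:21
        rs2 = (word >> 16) & 0x1F   # bits 20:16
        rd = (word >> 11) & 0x1F    # bits 15:11
        opx = word & 0x7FF          # bits 10:0 (shamt + funct in standard MIPS)
        return op, rs1, rs2, rd, opx

    print(decode_rtype(0x00221820))   # add r3,r1,r2 -> (0, 1, 2, 3, 0x20)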
23Datapath vs Control
[Figure: datapath block driven by a controller through control points, with signals fed back to the controller]
- Datapath: storage, functional units, and interconnect sufficient to perform the desired functions
- Inputs are Control Points
- Outputs are signals
- Controller: state machine to orchestrate operation on the datapath
- Based on desired function and signals
24Approaching an ISA
- Instruction Set Architecture
- Defines set of operations, instruction format, hardware supported data types, named storage, addressing modes, sequencing
- Meaning of each instruction is described by RTL on architected registers and memory
- Given technology constraints, assemble adequate datapath
- Architected storage mapped to actual storage
- Function units to do all the required operations
- Possible additional storage (e.g., MAR, MBR, ...)
- Interconnect to move information among regs and FUs
- Map each instruction to sequence of RTLs (see the sketch below)
- Collate sequences into symbolic controller state transition diagram (STD)
- Lower symbolic STD to control points
- Implement controller
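A toy sketch of the "map each instruction to a sequence of RTLs" step; the opcode names and RTL strings are illustrative, not the actual controller specification.

    # Hypothetical symbolic controller: each opcode maps to the RTL steps a state
    # machine would walk through after the common fetch state.
    RTL_SEQUENCES = {
        "RR": ["A <= Reg[rs]; B <= Reg[rt]", "r <= A op B", "Reg[rd] <= r"],
        "LW": ["A <= Reg[rs]", "addr <= A + imm", "MDR <= mem[addr]", "Reg[rd] <= MDR"],
        "BEQ": ["A <= Reg[rs]; B <= Reg[rt]", "if A == B: PC <= PC + imm"],
    }

    def controller_states(opcode):
        return ["IR <= mem[PC]; PC <= PC + 4"] + RTL_SEQUENCES[opcode]

    print(controller_states("LW"))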
25 5 Steps of MIPS Datapath (Figure A.2, Page A-8)
[Datapath figure: stages Instruction Fetch, Instr. Decode / Reg. Fetch, Execute / Addr. Calc, Memory Access, Write Back; components include Next PC MUX, register file (RS1, RS2, RD), sign-extended immediate, ALU with Zero? test, data memory (LMD), and write-back MUX]
IR <= mem[PC]; PC <= PC + 4
Reg[IR.rd] <= Reg[IR.rs] op(IR.op) Reg[IR.rt]
26 5 Steps of MIPS Datapath (Figure A.3, Page A-9)
[Pipelined datapath figure: stages Instruction Fetch, Instr. Decode / Reg. Fetch, Execute / Addr. Calc, Memory Access, Write Back, with the destination register (RD) carried down the pipeline]
IF: IR <= mem[PC]; PC <= PC + 4
ID: A <= Reg[IR.rs]; B <= Reg[IR.rt]
EX: rslt <= A op(IR.op) B
MEM: WB <= rslt
WB: Reg[IR.rd] <= WB
(A small walk-through sketch follows below.)
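A minimal sketch that walks one register-register instruction through the per-stage RTL above; the register values and the dictionary "encoding" are a toy model, not the real datapath.

    import operator

    regs = [0] * 32
    regs[2], regs[3] = 7, 5
    instr = {"op": operator.add, "rs": 2, "rt": 3, "rd": 1}   # add r1, r2, r3 (toy encoding)

    # ID: A <= Reg[IR.rs]; B <= Reg[IR.rt]
    a, b = regs[instr["rs"]], regs[instr["rt"]]
    # EX: rslt <= A op(IR.op) B
    rslt = instr["op"](a, b)
    # MEM: WB <= rslt (no memory access for an ALU operation)
    wb = rslt
    # WB: Reg[IR.rd] <= WB
    regs[instr["rd"]] = wb
    print(regs[1])   # -> 12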
27Inst. Set Processor Controller
[Controller state transition diagram: after instruction fetch and operand fetch/decode, control branches to paths for JSR, JR, ST, RR, etc.]
Ifetch: IR <= mem[PC]; PC <= PC + 4
opFetch-DCD: A <= Reg[IR.rs]; B <= Reg[IR.rt]
RR path: r <= A op(IR.op) B; WB <= r; Reg[IR.rd] <= WB
28 5 Steps of MIPS Datapath (Figure A.3, Page A-9)
[Pipelined datapath figure as on the previous slide, with the destination register (RD) carried along the pipeline]
- Data stationary control
- local decode for each instruction phase /
pipeline stage
29 Visualizing Pipelining (Figure A.2, Page A-8)
[Figure: pipeline diagram with time (clock cycles) on the horizontal axis and instruction order on the vertical axis]
30Pipelining is not quite that easy!
- Limits to pipelining: hazards prevent the next instruction from executing during its designated clock cycle
- Structural hazards: HW cannot support this combination of instructions (single person to fold and put clothes away)
- Data hazards: instruction depends on result of prior instruction still in the pipeline (missing sock)
- Control hazards: caused by delay between the fetching of instructions and decisions about changes in control flow (branches and jumps)
31 One Memory Port / Structural Hazards (Figure A.4, Page A-14)
[Figure: pipeline diagram over cycles 1-7 for a Load followed by Instr 1-4; the Load's data-memory access (DMem) and a later instruction's fetch (Ifetch) need the single memory port in the same cycle]
32 One Memory Port / Structural Hazards (Similar to Figure A.5, Page A-15)
[Figure: same pipeline diagram, with Instr 3 stalled so its instruction fetch no longer conflicts with the Load's data-memory access]
How do you bubble the pipe?
33Speed Up Equation for Pipelining
Speedup = (Ideal CPI x Pipeline depth) / (Ideal CPI + Pipeline stall CPI) x (Cycle time unpipelined / Cycle time pipelined)
For simple RISC pipeline, Ideal CPI = 1:
Speedup = Pipeline depth / (1 + Pipeline stall CPI) x (Cycle time unpipelined / Cycle time pipelined)
34Example: Dual-port vs. Single-port
- Machine A: dual-ported memory (Harvard Architecture)
- Machine B: single-ported memory, but its pipelined implementation has a 1.05 times faster clock rate
- Ideal CPI = 1 for both
- Loads are 40% of instructions executed
- SpeedUpA = Pipeline Depth / (1 + 0) x (clock_unpipe / clock_pipe) = Pipeline Depth
- SpeedUpB = Pipeline Depth / (1 + 0.4 x 1) x (clock_unpipe / (clock_unpipe / 1.05)) = (Pipeline Depth / 1.4) x 1.05 = 0.75 x Pipeline Depth
- SpeedUpA / SpeedUpB = Pipeline Depth / (0.75 x Pipeline Depth) = 1.33
- Machine A is 1.33 times faster (see the sketch below)
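The arithmetic above as a small sketch; the pipeline depth is left as an assumed parameter.

    # Speedup = PipelineDepth / (1 + stall CPI) * (clock_unpipelined / clock_pipelined)
    def pipelined_speedup(depth, stall_cpi, clock_ratio=1.0):
        return depth / (1.0 + stall_cpi) * clock_ratio

    depth = 5   # assumed pipeline depth
    speedup_a = pipelined_speedup(depth, stall_cpi=0.0)                         # dual-ported memory
    speedup_b = pipelined_speedup(depth, stall_cpi=0.4 * 1, clock_ratio=1.05)   # single port, 1.05x clock
    print(speedup_a, speedup_b, speedup_a / speedup_b)   # ratio = 1.33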
35 Data Hazard on R1 (Figure A.6, Page A-17)
[Figure: pipeline diagram, time in clock cycles, showing later instructions reading r1 before the first instruction writes it]
36Three Generic Data Hazards
- Read After Write (RAW): InstrJ tries to read operand before InstrI writes it
- Caused by a "Dependence" (in compiler nomenclature). This hazard results from an actual need for communication.
I: add r1,r2,r3
J: sub r4,r1,r3
37Three Generic Data Hazards
- Write After Read (WAR): InstrJ writes operand before InstrI reads it
- Called an "anti-dependence" by compiler writers. This results from reuse of the name "r1".
- Can't happen in MIPS 5-stage pipeline because:
- All instructions take 5 stages, and
- Reads are always in stage 2, and
- Writes are always in stage 5
38Three Generic Data Hazards
- Write After Write (WAW): InstrJ writes operand before InstrI writes it
- Called an "output dependence" by compiler writers. This also results from the reuse of name "r1".
- Can't happen in MIPS 5-stage pipeline because:
- All instructions take 5 stages, and
- Writes are always in stage 5
- Will see WAR and WAW in more complicated pipes
39 Forwarding to Avoid Data Hazard (Figure A.7, Page A-19)
[Figure: pipeline diagram, time in clock cycles, with forwarding paths from the ALU and memory stages to dependent instructions]
40 HW Change for Forwarding (Figure A.23, Page A-37)
[Figure: datapath with ID/EX, EX/MEM, and MEM/WR pipeline registers; muxes at the ALU inputs select among the register file, the immediate, and forwarded results]
What circuit detects and resolves this hazard?
41 Forwarding to Avoid LW-SW Data Hazard (Figure A.8, Page A-20)
[Figure: pipeline diagram, time in clock cycles]
42 Data Hazard Even with Forwarding (Figure A.9, Page A-21)
[Figure: pipeline diagram, time in clock cycles]
43 Data Hazard Even with Forwarding (Similar to Figure A.10, Page A-21)
[Figure: pipeline diagram, time in clock cycles; a bubble is inserted after the load so the dependent instructions can receive r1 by forwarding]
lw r1, 0(r2)
sub r4,r1,r6
and r6,r1,r7
or r8,r1,r9
How is this detected?
44Software Scheduling to Avoid Load Hazards
Try producing fast code for a = b + c; d = e - f; assuming a, b, c, d, e, and f are in memory.
Slow code:
LW Rb,b
LW Rc,c
ADD Ra,Rb,Rc
SW a,Ra
LW Re,e
LW Rf,f
SUB Rd,Re,Rf
SW d,Rd
Fast code:
LW Rb,b
LW Rc,c
LW Re,e
ADD Ra,Rb,Rc
LW Rf,f
SW a,Ra
SUB Rd,Re,Rf
SW d,Rd
The fast code separates each load from the instruction that uses its result, so no load-use stall is needed.
Compiler optimizes for performance. Hardware checks for safety.
45 Control Hazard on Branches: Three Stage Stall
What do you do with the 3 instructions in
between? How do you do it? Where is the commit?
46Branch Stall Impact
- If CPI = 1 and 30% of instructions are branches, a 3-cycle stall gives new CPI = 1.9!
- Two part solution:
- Determine branch taken or not sooner, AND
- Compute taken branch address earlier
- MIPS branch tests if register = 0 or ≠ 0
- MIPS Solution:
- Move zero test to ID/RF stage
- Adder to calculate new PC in ID/RF stage
- 1 clock cycle penalty for branch versus 3 (see the sketch below)
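The CPI arithmetic above as a quick sketch; the branch fraction and stall counts come straight from the bullets.

    # New CPI = base CPI + branch frequency * branch stall cycles
    def cpi_with_branch_stall(base_cpi, branch_frac, stall_cycles):
        return base_cpi + branch_frac * stall_cycles

    print(cpi_with_branch_stall(1.0, 0.30, 3))   # -> 1.9 with a 3-cycle stall
    print(cpi_with_branch_stall(1.0, 0.30, 1))   # -> 1.3 with the 1-cycle MIPS penalty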
47 Pipelined MIPS Datapath (Figure A.24, Page A-38)
[Figure: pipelined datapath with the branch adder and Zero? test moved into the Instr. Decode / Reg. Fetch stage, so the next PC is selected earlier]
- Interplay of instruction set design and cycle time.
48Four Branch Hazard Alternatives
- #1: Stall until branch direction is clear
- #2: Predict Branch Not Taken
- Execute successor instructions in sequence
- Squash instructions in pipeline if branch actually taken
- Advantage of late pipeline state update
- 47% of MIPS branches not taken on average
- PC+4 already calculated, so use it to get next instruction
- #3: Predict Branch Taken
- 53% of MIPS branches taken on average
- But haven't calculated branch target address in MIPS
- MIPS still incurs 1 cycle branch penalty
- Other machines: branch target known before outcome
49Four Branch Hazard Alternatives
- #4: Delayed Branch
- Define branch to take place AFTER a following instruction
branch instruction
sequential successor_1
sequential successor_2
........
sequential successor_n
branch target if taken
(The n sequential successors sit in the branch delay slots and execute regardless of the branch outcome.)
- 1 slot delay allows proper decision and branch target address in 5 stage pipeline
- MIPS uses this
50Scheduling Branch Delay Slots (Fig A.14)
A. From before branch:
add $1,$2,$3
if $2=0 then
delay slot (filled with the add from before the branch)
B. From branch target:
sub $4,$5,$6
add $1,$2,$3
if $1=0 then
delay slot (filled with the sub from the branch target)
C. From fall through:
add $1,$2,$3
if $1=0 then
delay slot (filled with the sub from the fall-through path)
sub $4,$5,$6
- A is the best choice: fills delay slot and reduces instruction count (IC)
- In B, the sub instruction may need to be copied, increasing IC
- In B and C, must be okay to execute sub when branch fails
51Delayed Branch
- Compiler effectiveness for single branch delay slot:
- Fills about 60% of branch delay slots
- About 80% of instructions executed in branch delay slots useful in computation
- About 50% (60% x 80%) of slots usefully filled
- Delayed Branch downside: as processors go to deeper pipelines and multiple issue, the branch delay grows and needs more than one delay slot
- Delayed branching has lost popularity compared to more expensive but more flexible dynamic approaches
- Growth in available transistors has made dynamic approaches relatively cheaper
52Evaluating Branch Alternatives
- Assume 4% unconditional branch, 6% conditional branch untaken, 10% conditional branch taken
- Scheduling scheme | Branch penalty | CPI | Speedup v. unpipelined | Speedup v. stall
- Stall pipeline | 3 | 1.60 | 3.1 | 1.0
- Predict taken | 1 | 1.20 | 4.2 | 1.33
- Predict not taken | 1 | 1.14 | 4.4 | 1.40
- Delayed branch | 0.5 | 1.10 | 4.5 | 1.45
(The CPI arithmetic is reproduced in the sketch below.)
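A sketch that reproduces the CPI column above from the assumed branch mix; the per-class penalty cycles are assumptions chosen to match this example.

    # CPI = 1 + sum over branch classes of (frequency * penalty cycles)
    freqs = {"uncond": 0.04, "cond_untaken": 0.06, "cond_taken": 0.10}

    penalties = {                                   # assumed penalty cycles per scheme
        "Stall pipeline": {"uncond": 3, "cond_untaken": 3, "cond_taken": 3},
        "Predict taken": {"uncond": 1, "cond_untaken": 1, "cond_taken": 1},
        "Predict not taken": {"uncond": 1, "cond_untaken": 0, "cond_taken": 1},
        "Delayed branch": {"uncond": 0.5, "cond_untaken": 0.5, "cond_taken": 0.5},
    }

    for scheme, pen in penalties.items():
        cpi = 1.0 + sum(freqs[c] * pen[c] for c in freqs)
        print(f"{scheme:18s} CPI = {cpi:.2f}")   # 1.60, 1.20, 1.14, 1.10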
53Problems with Pipelining
- Exception: an unusual event happens to an instruction during its execution
- Examples: divide by zero, undefined opcode
- Interrupt: hardware signal to switch the processor to a new instruction stream
- Example: a sound card interrupts when it needs more audio output samples (an audio "click" happens if it is left waiting)
- Problem: the exception or interrupt must appear to occur between 2 instructions (Ii and Ii+1)
- The effect of all instructions up to and including Ii is totally complete
- No effect of any instruction after Ii can take place
- The interrupt (exception) handler either aborts the program or restarts at instruction Ii+1
54Precise Exceptions in In-Order Pipelines
Key observation: architected state only changes in the memory and register write stages.
55Summary: Metrics and Pipelining
- Machines compared over many metrics
- Cost, performance, power, reliability, compatibility, ...
- Difficult to compare widely differing machines on a benchmark suite
- Control via state machines and microprogramming
- Just overlap tasks; easy if tasks are independent
- Speedup ≤ Pipeline Depth; if ideal CPI is 1, then: Speedup = Pipeline Depth / (1 + Pipeline stall CPI) x (Cycle time unpipelined / Cycle time pipelined)
- Hazards limit performance on computers
- Structural: need more HW resources
- Data (RAW, WAR, WAW): need forwarding, compiler scheduling
- Control: delayed branch, prediction
- Exceptions, interrupts add complexity
- Next time: Read Appendix C!