Title: Idealized Piecewise Linear Branch Prediction
1. Idealized Piecewise Linear Branch Prediction
Daniel A. Jiménez
Department of Computer Science, Rutgers University
and Departament d'Arquitectura de Computadors, Universitat Politècnica de Catalunya
2. This Talk
- Brief introduction to conditional branch prediction
- Some motivation, some background
- Introduction to neural branch prediction
- Perceptron predictor
- Mathematical intuition
- Some pictures and movies
- Piecewise Linear Branch Prediction
- The algorithm
- Why it's better
- Idealized Piecewise Linear Branch Prediction
- My entry in the Championship Branch Prediction contest
3. Pipelining and Branches
Pipelining overlaps instructions to exploit parallelism, allowing the clock rate to be increased. Branches cause bubbles in the pipeline, where some stages are left idle.
(Figure: pipeline stages - instruction fetch, instruction decode, execute, memory access, write back - with an unresolved branch instruction leaving later stages idle.)
4. Branch Prediction
A branch predictor allows the processor to speculatively fetch and execute instructions down the predicted path.
(Figure: the same pipeline stages, with speculative execution proceeding past the predicted branch.)
Branch predictors must be highly accurate to avoid mispredictions!
5. Branch Predictors Must Improve
- The cost of a misprediction is proportional to pipeline depth
- As pipelines deepen, we need more accurate branch predictors
- The Pentium 4 pipeline has 31 stages!
- Deeper pipelines allow higher clock rates by decreasing the delay of each pipeline stage
- Decreasing the misprediction rate from 9% to 4% results in a 31% speedup for a 32-stage pipeline
- Simulations with SimpleScalar/Alpha
6. Previous Work on Branch Prediction
- The architecture literature is replete with branch prediction papers
- Most refine two-level adaptive branch prediction [Yeh & Patt '91]
- A 1st-level table records recent global or per-branch pattern histories
- A 2nd-level table learns correlations between histories and outcomes
- Refinements focus on reducing destructive interference in the tables
- Some of the better refinements (not an exhaustive list):
- gshare [McFarling '93]
- agree [Sprangle et al. '97]
- hybrid predictors [Evers et al. '96]
- skewed predictors [Michaud et al. '97]
7. Conditional Branch Prediction is a Machine Learning Problem
- The machine learns to predict conditional branches
- So why not apply a machine learning algorithm?
- Artificial neural networks
- A simple model of the neural networks in brain cells
- Learn to recognize and classify patterns
- We used fast and accurate perceptrons [Rosenblatt '62, Block '62] for dynamic branch prediction [Jiménez & Lin, HPCA 2001]
- We were the first to use single-layer perceptrons and to achieve accuracy superior to PHT techniques. Previous work used LVQ and MLP for branch prediction [Vintan & Iridon '99].
8. Input and Output of the Perceptron
- The inputs to the perceptron are branch outcome histories
- Just like in 2-level adaptive branch prediction
- Can be global or local (per-branch) or both (alloyed)
- Conceptually, branch outcomes are represented as
- 1, for taken
- -1, for not taken
- The output of the perceptron is
- Non-negative, if the branch is predicted taken
- Negative, if the branch is predicted not taken
- Ideally, each static branch is allocated its own perceptron
9. Branch-Predicting Perceptron
- Inputs (x's) are from the branch history and are -1 or 1
- n + 1 small integer weights (w's) are learned by on-line training
- Output (y) is the dot product of the x's and w's; predict taken if y >= 0 (sketched in C below)
- Training finds correlations between history and outcome
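As a concrete illustration, here is a minimal C sketch of that dot-product computation. The history length and array layout are illustrative assumptions, not the hardware design from the talk.

/* Minimal sketch of the perceptron output computation (illustrative sizes). */
#define HISTORY_LENGTH 32                 /* assumed history length n */

int perceptron_output(const signed char w[HISTORY_LENGTH + 1],  /* weights w0..wn */
                      const signed char x[HISTORY_LENGTH])      /* history bits, each -1 or 1 */
{
    int y = w[0];                         /* bias weight w0; its input x0 is always 1 */
    for (int i = 0; i < HISTORY_LENGTH; i++)
        y += w[i + 1] * x[i];             /* dot product of weights and history */
    return y;                             /* predict taken if y >= 0 */
}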
10. Training Algorithm
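The training rule itself appears only as a figure on this slide. The following is a hedged C sketch of the rule published in the perceptron predictor paper (Jiménez & Lin, HPCA 2001), continuing the declarations from the sketch above; the value of THETA and the weight limits here are illustrative.

#define THETA       37       /* illustrative training threshold */
#define MAX_WEIGHT  63
#define MIN_WEIGHT -64

static signed char clamp_weight(int v)
{
    if (v > MAX_WEIGHT) return MAX_WEIGHT;
    if (v < MIN_WEIGHT) return MIN_WEIGHT;
    return (signed char) v;
}

/* t is the actual outcome (1 = taken, -1 = not taken); y is the output
   that was computed at prediction time. */
void train_perceptron(signed char w[HISTORY_LENGTH + 1],
                      const signed char x[HISTORY_LENGTH], int y, int t)
{
    int mispredicted = ((y >= 0) != (t == 1));
    if (mispredicted || (y < THETA && y > -THETA)) {       /* train on a mispredict or a weak output */
        w[0] = clamp_weight(w[0] + t);                     /* nudge the bias weight toward the outcome */
        for (int i = 0; i < HISTORY_LENGTH; i++)
            w[i + 1] = clamp_weight(w[i + 1] + t * x[i]);  /* increment if the history bit agrees */
    }
}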
11. What Do The Weights Mean?
- The bias weight, w0
- Proportional to the probability that the branch is taken
- Doesn't take into account other branches; just like a Smith predictor
- The correlating weights, w1 through wn
- wi is proportional to the probability that the predicted branch agrees with the ith branch in the history
- The dot product of the w's and x's
- wi × xi is proportional to the probability that the predicted branch is taken, based on the correlation between this branch and the ith branch
- The sum takes into account all estimated probabilities (see the worked example below)
- What's θ?
- It keeps the perceptron from overtraining, so it can adapt quickly to changing behavior
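For example, with illustrative numbers not taken from the talk: suppose w_0 = 1, w_1 = -4, w_2 = 2, and the history is x_1 = 1 (taken), x_2 = -1 (not taken). Then
y = w_0 + w_1 x_1 + w_2 x_2 = 1 + (-4)(1) + (2)(-1) = -5,
which is negative, so the branch is predicted not taken: the strong negative correlation with the most recent branch outweighs the weak taken bias.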
12. Organization of the Perceptron Predictor
- Keeps a table of m perceptron weights vectors
- The table is indexed by branch address modulo m (sketched below)
- Jiménez & Lin, HPCA 2001
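A minimal sketch of that table lookup, reusing HISTORY_LENGTH from the sketch under slide 9; the table size m and the names are illustrative assumptions.

#define M 1024                                        /* assumed number of table entries (m) */
signed char weights_table[M][HISTORY_LENGTH + 1];     /* one weights vector per entry */

/* Select the weights vector for a branch by its address modulo m. */
signed char *select_perceptron(unsigned int branch_address)
{
    return weights_table[branch_address % M];
}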
13. Mathematical Intuition
A perceptron defines a hyperplane in (n+1)-dimensional space:
y = w_0 + x_1 w_1 + ... + x_n w_n
For instance, in 2D space we have
y = w_0 + x_1 w_1
This is the equation of a line, the same as
y = mx + b
14. Mathematical Intuition, continued
In 3D space, we have
y = w_0 + x_1 w_1 + x_2 w_2
or you can think of it as the equation of a plane in 3D space.
This hyperplane forms a decision surface separating predicted-taken from predicted-not-taken histories. This surface intersects the feature space. It is a linear surface, e.g. a line in 2D, a plane in 3D, and so on in higher dimensions.
15. Example: AND
- Here is a representation of the AND function
- White means false, black means true for the output
- -1 means false, 1 means true for the input
-1 AND -1 = false
-1 AND 1 = false
1 AND -1 = false
1 AND 1 = true
16. Example: AND, continued
- A linear decision surface (i.e. a plane in 3D space) intersecting the feature space (i.e. the 2D plane where z = 0) separates false from true instances
17. Example: AND, continued
- Watch a perceptron learn the AND function
18. Example: XOR
-1 XOR -1 = false
-1 XOR 1 = true
1 XOR -1 = true
1 XOR 1 = false
Perceptrons cannot learn such linearly inseparable functions
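The animations themselves are not reproducible here, but the following small, self-contained C sketch makes the same point: a perceptron trained with the simple update rule learns AND quickly, while its accuracy on XOR never approaches 100%. The epoch counts and names are illustrative.

#include <stdio.h>

/* Predict with a 2-input perceptron: inputs and outputs are -1 or 1. */
static int predict(const int w[3], const int x[2])
{
    int y = w[0] + w[1] * x[0] + w[2] * x[1];
    return y >= 0 ? 1 : -1;
}

/* Train on the four input patterns for many epochs, then measure accuracy. */
static double train_and_test(int (*f)(int, int))
{
    int w[3] = {0, 0, 0};
    const int inputs[4][2] = {{-1, -1}, {-1, 1}, {1, -1}, {1, 1}};
    int correct = 0, total = 0;
    for (int epoch = 0; epoch < 100; epoch++) {
        for (int i = 0; i < 4; i++) {
            int t = f(inputs[i][0], inputs[i][1]);              /* desired output */
            int p = predict(w, inputs[i]);
            if (epoch >= 50) { correct += (p == t); total++; }  /* measure after training */
            if (p != t) {                                       /* perceptron update on a mistake */
                w[0] += t;
                w[1] += t * inputs[i][0];
                w[2] += t * inputs[i][1];
            }
        }
    }
    return (double) correct / total;
}

static int and_fn(int a, int b) { return (a == 1 && b == 1) ? 1 : -1; }
static int xor_fn(int a, int b) { return (a != b) ? 1 : -1; }

int main(void)
{
    printf("AND accuracy: %.2f\n", train_and_test(and_fn));  /* converges to 1.00 */
    printf("XOR accuracy: %.2f\n", train_and_test(xor_fn));  /* never reaches 1.00 */
    return 0;
}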
19. My Previous Work on Neural Predictors
- The perceptron predictor uses only pattern history information
- The same weights vector is used for every prediction of a static branch
- The ith history bit could come from any number of static branches
- So the ith correlating weight is aliased among many branches
- The newer path-based neural predictor uses path information
- The ith correlating weight is selected using the ith branch address
- This allows the predictor to be pipelined, mitigating latency
- This strategy improves accuracy because of path information
- But there is now even more aliasing, since the ith weight could be used to predict many different branches
20. Piecewise Linear Branch Prediction
- A generalization of the perceptron and path-based neural predictors
- Ideally, there is a weight giving the correlation between each
- static branch b, and
- each pair of branch and history position (i.e. i) in b's history
- b might have thousands of correlating weights or just a few
- Depends on the number of static branches in b's history
- First, I'll show a practical version
21. The Algorithm: Parameters and Variables
- GHL: the global history length
- GHR: a global history shift register
- GA: a global array of previous branch addresses
- W: an n × m × (GHL + 1) array of small integers (declarations sketched below)
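A minimal C sketch of these variables follows; the sizes n, m, and the maximum history length are illustrative placeholders, not the values used in the talk. The history arrays are indexed from 1 to match the "ith most recent branch" convention used on the next slides.

#define N         8                    /* assumed 1st dimension of W (indexed by branch address) */
#define M       256                    /* assumed 2nd dimension of W (indexed by path addresses) */
#define MAX_GHL  64                    /* assumed upper bound on the global history length */

int           GHL;                     /* global history length */
unsigned char GHR[MAX_GHL + 1];        /* global history shift register; GHR[i] is the outcome */
                                       /* of the ith most recent branch (entry 0 unused) */
unsigned int  GA[MAX_GHL + 1];         /* addresses of previous branches, same indexing */
signed char   W[N][M][MAX_GHL + 1];    /* n x m x (GHL + 1) array of small integer weights */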
22. The Algorithm: Making a Prediction
Weights are selected based on the current branch and the ith most recent branch.
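The prediction pseudocode on this slide survives only as a figure; here is a hedged C sketch following the algorithm published in Jiménez, ISCA 2005, using the variables declared under slide 21 above.

int output;    /* running sum of weights; kept so the training step can reuse it */

/* Predict the branch at 'address': 1 = taken, 0 = not taken. */
int predict_branch(unsigned int address)
{
    output = W[address % N][0][0];                  /* start with the bias weight */
    for (int i = 1; i <= GHL; i++) {
        if (GHR[i])                                 /* ith most recent branch was taken */
            output += W[address % N][GA[i] % M][i];
        else
            output -= W[address % N][GA[i] % M][i];
    }
    return output >= 0;                             /* predict taken if non-negative */
}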
23. The Algorithm: Training
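Likewise, the training pseudocode is a figure in the original. This hedged sketch follows the ISCA 2005 algorithm, continuing the declarations above; THETA is the training threshold, and its value here is illustrative. For simplicity the history is updated here rather than speculatively at prediction time.

#include <stdlib.h>                    /* for abs() */

#define THETA 37                       /* illustrative training threshold */

static signed char saturate(int v)     /* keep weights within their small integer range */
{
    if (v >  63) return  63;
    if (v < -64) return -64;
    return (signed char) v;
}

/* Train after the branch at 'address' resolves; taken is 1 or 0. */
void train_branch(unsigned int address, int taken)
{
    if ((output >= 0) != taken || abs(output) < THETA) {
        /* nudge the bias weight toward the actual outcome */
        signed char *b = &W[address % N][0][0];
        *b = saturate(*b + (taken ? 1 : -1));
        /* nudge each correlating weight toward agreement with the outcome */
        for (int i = 1; i <= GHL; i++) {
            signed char *w = &W[address % N][GA[i] % M][i];
            *w = saturate(*w + ((GHR[i] == taken) ? 1 : -1));
        }
    }
    /* shift this branch's outcome and address into the global history */
    for (int i = GHL; i > 1; i--) { GHR[i] = GHR[i - 1]; GA[i] = GA[i - 1]; }
    GHR[1] = (unsigned char) taken;
    GA[1] = address;
}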
24. Why It's Better
- Forms a piecewise linear decision surface
- Each piece is determined by the path to the predicted branch
- Can solve more problems than the perceptron
(Figures: the perceptron decision surface for XOR doesn't classify all inputs correctly; the piecewise linear decision surface for XOR classifies all inputs correctly.)
25. Learning XOR
- From a program that computes XOR using if statements
(Animations: perceptron prediction vs. piecewise linear prediction.)
26. A Generalization of Neural Predictors
- When m = 1, the algorithm is exactly the perceptron predictor
- W (an n × 1 × (h+1) array) holds n weights vectors
- When n = 1, the algorithm is the path-based neural predictor
- W (a 1 × m × (h+1) array) holds m weights vectors
- Can be pipelined to reduce latency
- The design space in between contains more accurate predictors
- If n is small, the predictor can still be pipelined to reduce latency
27. Generalization, Continued
Perceptron and path-based are the least accurate extremes of piecewise linear branch prediction!
28. Idealized Piecewise Linear Branch Prediction
- Presented at the CBP workshop at MICRO 2004
- Hardware budget limited to 64K + 256 bits
- No other limitations
- Get rid of n and m
- Allow the 1st and 2nd dimensions of W to be unlimited
- Now branches cannot alias one another, so accuracy is much better
- One small problem: an unlimited amount of storage is required
- How to squeeze this into 65,792 bits for the contest?
29. Hashing
- The 3 indices of W (i, j, k) index arbitrary numbers of weights
- Hash them into 0..N-1 in an array of N weights
- Collisions will cause aliasing, but it is more uniformly distributed
- The hash function uses three primes: H1, H2, and H3 (sketched below)
30. More Tricks
- Weights are 7 bits, elements of GA are 8 bits
- Separate arrays for bias weights and correlating weights
- Using global and per-branch history
- An array of per-branch histories is kept, alloyed with global history
- Slightly bias the predictor toward not taken
- Dynamically adjust history length
- Based on an estimate of the number of static branches
- Extra weights
- Extra bias weights for each branch
- Extra correlating weights for more recent history bits
- Inverted bias weights that track the opposite of the branch bias
31. Parameters to the Algorithm

#define NUM_WEIGHTS 8590
#define NUM_BIASES 599
#define INIT_GLOBAL_HISTORY_LENGTH 30
#define HIGH_GLOBAL_HISTORY_LENGTH 48
#define LOW_GLOBAL_HISTORY_LENGTH 18
#define INIT_LOCAL_HISTORY_LENGTH 4
#define HIGH_LOCAL_HISTORY_LENGTH 16
#define LOW_LOCAL_HISTORY_LENGTH 1
#define EXTRA_BIAS_LENGTH 6
#define HIGH_EXTRA_BIAS_LENGTH 2
#define LOW_EXTRA_BIAS_LENGTH 7
#define EXTRA_HISTORY_LENGTH 5
#define HIGH_EXTRA_HISTORY_LENGTH 7
#define LOW_EXTRA_HISTORY_LENGTH 4
#define INVERTED_BIAS_LENGTH 8
#define HIGH_INVERTED_BIAS_LENGTH 4
#define LOW_INVERTED_BIAS_LENGTH 9
#define NUM_HISTORIES 55
#define WEIGHT_WIDTH 7
#define MAX_WEIGHT 63
#define MIN_WEIGHT -64
#define INIT_THETA_UPPER 70
#define INIT_THETA_LOWER -70
#define HIGH_THETA_UPPER 139
#define HIGH_THETA_LOWER -136
#define LOW_THETA_UPPER 50
#define LOW_THETA_LOWER -46
#define HASH_PRIME_1 511387U
#define HASH_PRIME_2 660509U
#define HASH_PRIME_3 1289381U
#define TAKEN_THRESHOLD 3
All determined empirically with an ad hoc approach
32. Per-Benchmark Accuracy
- I used several highly accurate predictors to compete with my predictor
- I measured the potential of my technique using an unlimited hardware budget
33. Scores for the 6 Finalists (out of 18 entries)
Scores are average MPKI (mispredictions per 1000 instructions), corrected, over a suite of 20 traces from Intel.
- Hongliang Gao, Huiyang Zhou: 2.574
- André Seznec: 2.627
- Gabriel Loh: 2.700
- Daniel A. Jiménez: 2.742
- Pierre Michaud: 2.777
- Veerle Desmet et al.: 2.807
5 of the 6 finalists used ideas from the perceptron predictor in their entries.
34. References
- Jiménez and Lin, HPCA 2001 (perceptron predictor)
- Jiménez and Lin, TOCS 2002 (global/local perceptron)
- Jiménez, MICRO 2003 (path-based neural predictor)
- Jiménez, ISCA 2005 (piecewise linear branch prediction)
- Juan, Sanjeevan, and Navarro, SIGARCH Computer Architecture News, 1998 (dynamic history length fitting)
- Skadron, Martonosi, and Clark, PACT 2000 (alloyed history)
35. The End
36. Extra Slides
- Following this slide are some slides cut from the talk to fit within time constraints.
37. Program to Compute XOR

#include <stdlib.h>
int f () {
    int a, b, x, i, s = 0;
    for (i = 0; i < 100; i++) {
        a = rand () % 2;
        b = rand () % 2;
        if (a) { if (b) x = 0; else x = 1; }
        else   { if (b) x = 1; else x = 0; }
        if (x) s++; /* this is the branch */
    }
    return s;
}
38. Example: XOR, continued
- Watch a perceptron try to learn XOR