Title: Refinement and Evaluation of the Elbow Cache

1. "Mommy, mommy! I want a hardware cache with few conflicts and low power consumption that is easy to implement!"
   "But... That's three wishes in one!!!"

2. Refinement and Evaluation of the Elbow Cache, or The Little Cache that Could
3. 2-way Set Associative Cache
[Figure: animation inserting the memory reference stream A-B-C-D-E-F-G-H into a 2-way set associative cache; later references evict earlier blocks that map to the same set.]
4. Conflicts (cont.)
- The traditional way of reducing conflicts is to use set associative caches.
  (+) Lower miss rate (than direct-mapped)
  (-) Slower access
  (-) More complexity (uses more chip area)
  (-) Higher power consumption
5. 2-way Skewed Associative Cache
[Figure: the same reference stream A-B-C-D-E-F-G-H inserted into Cache Bank 1 and Cache Bank 2, each bank indexed by its own skewing function.]
6. 2-way Skewed Associative Cache
[Figure: with skewed indexing, all eight blocks A-H end up resident across Cache Bank 1 and Cache Bank 2.]
No conflicts!
7. Skewed Associative Caches
- Use different hashing (skewing) functions for indexing each cache bank.
  (+) Lower miss rate (than set-associative)
  (+) More predictable
  (-) Slightly slower (hashing)
  (-) Cannot use LRU replacement
  (-) Cannot use VI-PT (virtually indexed, physically tagged) indexing
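The effect of per-bank skewing can be sketched in a few lines. The talk does not give the actual skewing functions, so the XOR-folding below (mixing higher address bits into the index) is a hypothetical stand-in for illustration:

```python
NUM_SETS = 8     # sets per bank (toy size)
BLOCK_BITS = 5   # 32-byte blocks, as in the evaluated configurations

def set_assoc_index(addr):
    """Conventional indexing: every way shares the same index bits."""
    return (addr >> BLOCK_BITS) % NUM_SETS

def skew_index(addr, bank):
    """Per-bank skewing: XOR the index bits with higher address bits
    (hypothetical functions; the talk does not specify them)."""
    idx = (addr >> BLOCK_BITS) % NUM_SETS
    high = (addr >> (BLOCK_BITS + 3)) % NUM_SETS
    if bank == 0:
        return idx ^ high
    rot = ((high >> 1) | ((high & 1) << 2)) % NUM_SETS  # rotate high bits
    return idx ^ rot

# Three addresses that all collide in a conventional cache...
addrs = [0x0100, 0x0300, 0x0500]
print([set_assoc_index(a) for a in addrs])                    # [0, 0, 0]
# ...but land in distinct sets in both skewed banks.
print([(skew_index(a, 0), skew_index(a, 1)) for a in addrs])  # [(1, 4), (3, 5), (5, 6)]
```

Addresses that pile onto one set in a conventional cache are spread across different sets in each skewed bank, which is what removes the conflicts in the A-H example.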
8. Elbow Cache
- Improves the performance of a skewed associative cache by reallocating blocks within the cache.
- This gives a broader choice of which block to choose as the victim.
- Uses timestamps as the replacement metric.
9. Finding the Victim
- Two methods:
  - Look-ahead: consider all possible placements before the first reallocation is made.
  - Feedback: only consider the immediate placements, then iterate.
10. 2-way Elbow Lookahead Cache
[Figure: a miss on X after the reference stream A-B-C-D-E-F-G-H; the lookahead elbow cache evaluates the replacement paths F-B-A and E-D-H across Cache Bank 1 and Cache Bank 2 before choosing a victim.]
11. 2-way Elbow Feedback Cache
[Figure: a miss on X after the reference stream A-B-C-D-E-F-G-H; the feedback elbow cache moves one block at a time through a temporary register, iterating between Cache Bank 1 and Cache Bank 2 until a victim is found.]
12. Finding the Victim (cont.)
- Look-ahead
  (+) Closest to optimal
  (-) Difficult to implement (>1 transformation)
- Feedback
  (+) Easy to implement (feed the victim back through the write buffer)
  (-) Needs extra space in the write buffer
13. Replacement Metrics
- Enhanced Not-Recently-Used (NRUE)
  - The best policy known so far for skewed caches.
  - Each block carries two extra bits, a recently-used bit and a very-recently-used bit, that are set on access to the block.
  - These bits are cleared at regular intervals; the very-recently-used bit is cleared more often.
  - First, try to find a victim with no bit set.
  - Then one with only the recently-used bit set.
  - Then use random replacement.
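The three-tier NRUE search above maps directly to code. A minimal sketch (block names and the tuple layout are illustrative, not from the talk):

```python
import random

def nrue_victim(candidates):
    """NRUE victim search over (block_id, recently_used, very_recently_used):
    prefer blocks with no usage bit set, then blocks with only the
    recently-used bit, and fall back to random replacement."""
    for blk, ru, vru in candidates:
        if not ru and not vru:
            return blk                      # neither bit set: best victim
    for blk, ru, vru in candidates:
        if ru and not vru:
            return blk                      # used a while ago, not very recently
    return random.choice(candidates)[0]     # everything very recently used

ways = [("A", True, True), ("B", True, False), ("C", False, False)]
print(nrue_victim(ways))   # C: no usage bits set
```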
14. Timestamps
- Increment a global counter on every cache allocation; each block stores the counter value (its timestamp) when it is allocated.
[Figure: counter Tcurr ticking on each allocation; blocks A and B store timestamps TA and TB alongside their data.]
- Age (distance) of block A, with counter wraparound:
  Dist(A) = Tcurr - TA          if Tcurr > TA
  Dist(A) = Tmax + Tcurr - TA   if Tcurr < TA
15. Timestamps
[Figure: timestamp distances laid out on a scale from 0 to Tmax.]
- Dist(A) > Dist(B): A is older than B
- Dist(A) < Dist(B): B is older than A
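The wraparound distance Dist(A) = Tcurr - TA (or Tmax + Tcurr - TA once the counter has wrapped) maps directly to code; Tmax = 32 here matches the 5-bit timestamps used in the evaluation:

```python
TMAX = 1 << 5   # 5-bit timestamps, as in the evaluated configurations

def dist(t_curr, t_a):
    """Allocations since a block with timestamp t_a was allocated,
    accounting for the counter wrapping past Tmax."""
    if t_curr > t_a:
        return t_curr - t_a
    return TMAX + t_curr - t_a   # counter has wrapped around

print(dist(20, 3))    # 17: no wraparound
print(dist(4, 30))    # 6: counter wrapped past 0
# dist(4, 30) < dist(20, 3), so the block stamped at tick 3 is older.
```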
16. Implementation
- Lookahead
  - At most one transformation (4 possible victims) per replacement.
  - Do the transformation and load the new data at the same time.
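The "4 possible victims" can be enumerated explicitly: the two blocks directly indexed by the missing address, plus the block each of them would displace in the other bank. A toy enumeration (h0/h1 are hypothetical skewing functions, and banks store bare tags):

```python
SETS = 8
def h0(tag): return tag % SETS              # hypothetical skewing functions
def h1(tag): return (tag // SETS) % SETS

def lookahead_candidates(banks, tag):
    """Return up to 4 (bank, index) victim slots reachable with at most
    one transformation when `tag` misses."""
    cands = [(0, h0(tag)), (1, h1(tag))]    # the two directly indexed slots
    r0 = banks[0][h0(tag)]
    if r0 is not None:
        cands.append((1, h1(r0)))           # slot r0 would be elbowed into
    r1 = banks[1][h1(tag)]
    if r1 is not None:
        cands.append((0, h0(r1)))           # slot r1 would be elbowed into
    return cands

banks = [[None] * SETS, [None] * SETS]
banks[0][3] = 11                            # resident block in bank 0
banks[1][0] = 3                             # resident block in bank 1
print(lookahead_candidates(banks, 19))      # [(0, 3), (1, 2), (1, 1)]
```

The candidate with the oldest timestamp becomes the victim; with both direct slots occupied the list grows to four entries.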
17. Implementation
- Feedback
  - Up to 7 transformations (max. 8 possible victims) per replacement.
  - Temporary victims are moved to the write buffer before reallocation.
  - Extra control field in the write buffer.
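The feedback search can be sketched as cuckoo-style displacement bounded by the 7-transformation budget above. In hardware each displaced block is parked in the write buffer; this toy model (with hypothetical skewing functions h0/h1) simply displaces recursively:

```python
SETS = 8
MAX_STEPS = 7    # up to 7 transformations, as on the slide

def h0(tag): return tag % SETS              # hypothetical skewing functions
def h1(tag): return (tag // SETS) % SETS

def insert(banks, tag, bank=0, steps=0):
    """Place `tag`; a displaced block is fed back into its slot in the
    other bank, until a free slot is found or the step budget runs out."""
    idx = h0(tag) if bank == 0 else h1(tag)
    victim, banks[bank][idx] = banks[bank][idx], tag
    if victim is None or steps == MAX_STEPS:
        return victim    # None: free slot found; else: evict to memory
    return insert(banks, victim, 1 - bank, steps + 1)

banks = [[None] * SETS, [None] * SETS]
for t in [3, 11, 19, 27]:        # all collide in bank 0 (t % 8 == 3)
    evicted = insert(banks, t)
print(evicted)   # None: every block found a home, nothing went to memory
```

Four blocks that would thrash a single set all stay resident, because each displacement opens a slot in the other bank.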
18. Feedback
[Figure: feedback datapath: entries in Bank I and Bank II hold data, tag, and timestamp fields; temporary victims (Xid1, Xid2) pass through the Write Buffer, which also handles readmem/writemem traffic.]
19. Test Configurations
- Set associative: 2-way, 4-way, 8-way, 16-way
- Fully associative cache
- Skewed associative, LRU
- Skewed associative, NRUE
- Skewed associative, 5-bit timestamp
- Elbow cache, 1-step lookahead, 5-bit timestamp
- Elbow cache, 7-step feedback, 5-bit timestamp
20. Test Configurations (2)
- General configuration
  - 8 KB, 16 KB, 32 KB cache size
  - L1 data cache with 32-byte block size
  - Write Back, No Allocate on Write, infinite write buffer (all writes ignored)
- Miss Rate Reduction (MRR): MRR = (MRref - MR) / MRref
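The MRR metric is straightforward to compute; the miss rates below are made up for illustration, not results from the talk:

```python
def mrr(mr_ref, mr):
    """Miss Rate Reduction relative to a reference configuration:
    MRR = (MRref - MR) / MRref."""
    return (mr_ref - mr) / mr_ref

# Hypothetical miss rates: baseline at 8%, evaluated cache at 5%.
print(f"{mrr(0.08, 0.05):.1%}")   # 37.5%
```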
21-22. [Result graphs: miss rate reduction for the test configurations.]

23. Conclusions
- For a 2-way skewed cache, timestamp replacement gives almost the same performance as LRU; timestamps are useful.
- A 2-way elbow cache has roughly the same performance as an 8-way set associative cache of the same size.
24. Conclusions (2)
- The lookahead design is slightly better than the feedback design.
- There are drawbacks with all skewed caches (skewing delays, no VI-PT).
- If these problems can be solved, the elbow cache is a good alternative to set associative caches.
25. Future Work
- Power awareness
  - How does an elbow cache stand up against traditional set associative caches when power consumption is considered?
26. Links
- UART web: www.it.uu.se/research/group/uart/
27. ?