Title: Dusty Caches for Reference Counting Garbage Collection
Dusty Caches for Reference Counting Garbage Collection
- Scott Friedman, Praveen Krishnamurthy,
- Roger Chamberlain, Ron K. Cytron, Jason Fritts
- Dept. of Computer Science and Engineering
- Washington University in St. Louis
- Sponsored by NSF grant ITR-0313203
Outline
- Introduction
- Reference counting garbage collection
- Dusty cache
- Design and policy
- Evaluating dusty caches
- Quantifying performance using JVM
- Experiment on Liquid Architecture
- Conclusions
Introduction
- Garbage collection
- Java, C#
- Unused objects are reclaimed
- Techniques
- Reference counting
- Mark and sweep
- Copy and collect
Reference Counts
[Figure: an Integer object with its class members and a reference count field; a single reference P points to the object, so its reference count is 1.]
Reference Counts
[Figure: the same Integer object is now referenced by both P and Q, so its reference count is 2.]
Reference Counts
- P = new Integer(..)
- Q = new Integer(..)
[Figure: P and Q each reference a separate Integer object; each object's reference count is 1.]
Reference Counts
[Figure: after P is reassigned to Q's object (e.g., P = Q), the object P originally referenced has a reference count of 0 and is garbage, while the object Q references has a reference count of 2.]
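The reassignment above is carried out by the runtime's reference-count updates. As a minimal sketch in Java (not code from the talk; RCObject, incRef, decRef, and assign are hypothetical names), a reference-counting write barrier could look like this:

    // Hypothetical sketch of reference-count maintenance on assignment.
    final class RCObject {
        int refCount;                         // per-object reference count
        void incRef() { refCount++; }
        void decRef() {
            if (--refCount == 0) {
                reclaim();                    // count hit zero: the object is garbage
            }
        }
        void reclaim() { /* return the storage to the allocator */ }
    }

    final class RCRuntime {
        // Write barrier for an assignment such as P = Q: the new target
        // gains a reference, the old target loses one.
        static RCObject assign(RCObject oldTarget, RCObject newTarget) {
            if (newTarget != null) newTarget.incRef();
            if (oldTarget != null) oldTarget.decRef();
            return newTarget;
        }
    }

For P = Q, the barrier raises the count of Q's object to 2 and drops the count of P's old object to 0, at which point it can be reclaimed.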
Ref. Counts → Memory Traffic
- Short-lived (temporary) changes in reference counts
- Linked lists, trees, sorting, hashtables, etc.
Ref. Counts → Memory Traffic
- LinkedList list
- Iterator iter = list.iterator()
- while (iter.hasNext())
- Object item = iter.next()
- foo(item)
[Figure: a linked list head followed by node 0, node 1, node 2, ..., node n; every node's reference count is 1.]
Ref. Counts → Memory Traffic
- LinkedList list
- Iterator iter = list.iterator()
- while (iter.hasNext())
- Object item = iter.next()
- foo(item)
[Figure: the iterator now references node 0, raising its reference count to 2; the other nodes remain at 1.]
Ref. Counts → Memory Traffic
- LinkedList list
- Iterator iter = list.iterator()
- while (iter.hasNext())
- Object item = iter.next()
- foo(item)
[Figure: the iterator advances to node 1, raising its count to 2; node 0 drops back to 1.]
Ref. Counts → Memory Traffic
- LinkedList list
- Iterator iter = list.iterator()
- while (iter.hasNext())
- Object item = iter.next()
- foo(item)
[Figure: the iterator advances to node 2, raising its count to 2; the earlier nodes are back at 1.]
Ref. Counts → Memory Traffic
- LinkedList list
- Iterator iter = list.iterator()
- while (iter.hasNext())
- Object item = iter.next()
- foo(item)
[Figure: the iterator reaches node n, raising its count to 2; all earlier nodes are back at 1.]
Ref. Counts → Memory Traffic
[Figure: once iteration completes, every node's reference count is back to 1, exactly the values memory held before the loop.]
- Consequence
- Every object is marked dirty in the cache
- Will be written back to RAM upon eviction
- Unnecessary writes to RAM
- Temporally silent stores (Lepak and Lipasti, 2002); see the sketch below
- Our solution: the Dusty Cache
- Eliminates all such unnecessary writes to memory
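To make the pattern concrete, here is a minimal sketch (hypothetical names, reusing the RCObject/RCRuntime sketch from earlier) of how walking the list dirties every node's cache line even though every count ends at its original value:

    final class IterationExample {
        // Walks reference-counted nodes with a single cursor reference,
        // mirroring the iterator in the slides above.
        static void iterate(RCObject[] nodes) {
            RCObject cursor = null;
            for (RCObject node : nodes) {
                cursor = RCRuntime.assign(cursor, node); // node: 1 -> 2, previous node: 2 -> 1
            }
            RCRuntime.assign(cursor, null);              // last node: 2 -> 1
            // Every count ends where it started, yet every node's cache line
            // was written (dirtied) and would be flushed to RAM on eviction:
            // a temporally silent store.
        }
    }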
Outline
- Introduction
- Reference counting
- Dusty caches
- Design and operations
- Evaluating dusty caches
- Quantifying performance using JVM
- Experiment on Liquid Architecture
- Conclusions
Cache Design: Write-Back
[Figure: write-back cache organization; each line holds a tag, a valid bit, a dirty bit, and the cached data, and the incoming address is split into tag and offset to select a line.]
- Read: update the tag, valid bit, and data
- Write: update the dirty bit and data
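As a minimal software sketch of this baseline (assumed field names and line size), a write-back line is flushed on eviction whenever its dirty bit is set:

    final class WriteBackLine {
        static final int WORDS_PER_LINE = 8;  // assumed line size, in words

        long tag;
        boolean valid;
        boolean dirty;
        int[] data = new int[WORDS_PER_LINE];

        // Read miss: fill the line from memory; it starts out clean.
        void fill(long newTag, int[] memValue) {
            tag = newTag;
            valid = true;
            dirty = false;
            System.arraycopy(memValue, 0, data, 0, WORDS_PER_LINE);
        }

        // Store hit: update the data and mark the line dirty.
        void write(int offset, int value) {
            data[offset] = value;
            dirty = true;
        }

        // Eviction: a dirty line is always written back, even if its contents
        // have reverted to the value memory already holds.
        boolean mustWriteBack() {
            return valid && dirty;
        }
    }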
Cache Design: Dusty
[Figure: dusty cache organization; each line adds an image of the memory value, guarded by an image-valid bit, alongside the tag, valid bit, dirty bit, and cached data.]
- Read: update the tag, data, and image
- Write: update the tag and data
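A corresponding sketch of the dusty policy (again with assumed field names and line size): the image keeps the value memory holds, so an evicted line whose data has reverted to that image need not be written back.

    import java.util.Arrays;

    final class DustyLine {
        static final int WORDS_PER_LINE = 8;   // assumed line size, in words

        long tag;
        boolean valid;
        boolean dirty;
        boolean imageValid;
        int[] data  = new int[WORDS_PER_LINE];
        int[] image = new int[WORDS_PER_LINE]; // copy of the value memory holds

        // Read miss: data and image both receive the value fetched from memory.
        void fill(long newTag, int[] memValue) {
            tag = newTag;
            valid = true;
            dirty = false;
            imageValid = true;
            System.arraycopy(memValue, 0, data, 0, WORDS_PER_LINE);
            System.arraycopy(memValue, 0, image, 0, WORDS_PER_LINE);
        }

        // Store hit: only the data is updated; the image keeps memory's value.
        void write(int offset, int value) {
            data[offset] = value;
            dirty = true;
        }

        // Eviction: skip the write-back when the dirty data matches the image,
        // i.e. the stores to this line were temporally silent.
        boolean mustWriteBack() {
            if (!valid || !dirty) return false;
            if (!imageValid) return true;
            return !Arrays.equals(data, image);
        }
    }

This eviction-time comparison is what filters the temporally silent stores produced by the reference-count churn shown earlier.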
Outline
- Introduction
- Reference counting
- Dusty cache
- Design and operations
- Evaluating dusty caches
- Quantifying performance using JVM
- Experiment on Liquid Architecture
- Conclusions
Evaluating Dusty Cache
- Traces from JVM instrumentation
- Sun's Java Virtual Machine
- Reference counting garbage collection
- Reference count for all objects
- Number of JVM instructions between reference count changes
- Number of cache-altering instructions between changes
- Eviction from cache
- Window model: for a window of size k cache writes, a value v written at write i will be evicted at write i + k (sketched below)
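One way to realize that window model on a write trace is sketched below (hypothetical names; the Write record, countRamWrites, and the conservative treatment of first-touch addresses are assumptions, not details from the paper):

    import java.util.ArrayDeque;
    import java.util.HashMap;
    import java.util.Map;

    final class WindowModel {
        record Write(long address, long value) {}

        // With a window of k cache writes, the value written at write i is
        // evicted at write i + k. A RAM write is counted only when the evicted
        // value differs from the value RAM last received for that address;
        // addresses never seen before are conservatively counted as RAM writes.
        static long countRamWrites(Iterable<Write> trace, int k) {
            ArrayDeque<Write> window = new ArrayDeque<>(); // pending writes, oldest first
            Map<Long, Long> ram = new HashMap<>();         // value RAM currently holds, per address
            long ramWrites = 0;
            for (Write w : trace) {
                window.addLast(w);
                if (window.size() > k) {
                    Write evicted = window.removeFirst();  // write i leaves at write i + k
                    Long inRam = ram.get(evicted.address());
                    if (inRam == null || inRam != evicted.value()) {
                        ram.put(evicted.address(), evicted.value());
                        ramWrites++;                       // contents differ: a real RAM write
                    }
                }
            }
            return ramWrites;
        }
    }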
JVM: Quantifying Memory Savings
- Cache configurations
- Unified and non-unified caches
- Write-through, write-back, dusty
- Measured as memory writes saved
- Baseline was a write-through cache
1. SPEC DataBase
8,088 objects
2. SPEC JESS
46,129 objects
Experiments on a Flexible Processor
- Liquid Architecture
- Platform to study microarchitecture refinements
- Soft-core LEON2: 5-stage, SPARC-compliant, 32-bit processor
- Statsmod: non-intrusive performance measurement tool
- Limitations
- 4 MB of SRAM, no JVM
Experiment on Liquid Architecture
- Deployed LEON with a dusty cache policy
- Replicated all of the L1 cache
- Monte Carlo experiment
- Determine probability of cache-altering events
- Reads, writes, HeapRefCount++, HeapRefCount--
- Events access random addresses in memory
- Experiments monitored both RC and non-RC traffic
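A minimal sketch of such a driver appears below; the uniform event mix, heap size, and hook names are placeholders rather than the probabilities or interfaces used on the LEON platform:

    import java.util.Random;

    final class MonteCarloDriver {
        enum Event { READ, WRITE, HEAP_REFCOUNT_INC, HEAP_REFCOUNT_DEC }

        // Each step draws a cache-altering event and applies it to a random
        // heap address, exercising the cache model under test.
        static void run(long steps, int heapWords, long seed) {
            Random rng = new Random(seed);
            for (long i = 0; i < steps; i++) {
                int address = rng.nextInt(heapWords);
                Event e = Event.values()[rng.nextInt(Event.values().length)];
                switch (e) {
                    case READ              -> cacheRead(address);
                    case WRITE             -> cacheWrite(address, rng.nextInt());
                    case HEAP_REFCOUNT_INC -> adjustRefCount(address, +1);
                    case HEAP_REFCOUNT_DEC -> adjustRefCount(address, -1);
                }
            }
        }

        // Hooks into the cache model under test (left empty in this sketch).
        static void cacheRead(int address) { }
        static void cacheWrite(int address, int value) { }
        static void adjustRefCount(int address, int delta) { }
    }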
Emulation Using Liquid Architecture
- Results of the Monte Carlo experiment: roughly a 50% reduction in memory writes
Conclusions
- The dusty cache policy effectively filters the unnecessary reference-counting traffic
- Savings of around 5% to 25% over a write-back cache
- Microarchitecture optimization
- Dusty cache can be a small subset of the cache
- Could potentially reduce writes even in non-reference-counting traffic
- Inferred from the characterization of silent stores (Bell et al., 2000)
Liquid Architecture Project
- http://www.arl.wustl.edu/liquid
- Email: liquid@cs.wustl.edu
- Sponsored by NSF grant ITR-0313203