Introduction to Victim Cache and Prefetch Buffers

1
Introduction to Victim Cache and Prefetch Buffers
  • Norman P. Jouppi
  • Digital Equipment Corporation Western Research
    Lab

2
Outline
  • Why?
  • History of the problem
  • The solution offered here: is it successful?
  • Conclusion
  • Questions

3
Why?
  [Figure: values from 0 to 3000 plotted over the years 1980 to 2000.]

4
History
  • Hierarchical memory system

[Diagram: CPU connected to a hierarchy of memory levels M1, M2, ..., Mn.]
5
History
  • The two most common hierarchical memory systems.

[Diagram: CPU, cache, main memory; and CPU, main memory, disk.]
6
Current Caches' Performance
  • Misses degrade a cache's performance.

7
Baseline Design
8
Performance of the baseline system
  • Test environment
  • 1000 MIPS peak instruction issue rate
  • 4KB first level instr. and data cache with 16B
    lines
  • 1MB second level cache with 128B lines
  • The miss penalties are 24 instruction times for the first level and 320
    instruction times for the second level (see the worked example below).
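
A rough sense of scale for these numbers (a minimal sketch with hypothetical miss rates, not the paper's measurements): with the penalties above, even modest miss rates cost several instruction times per memory access.

    # Penalties from the baseline above; the miss rates below are illustrative assumptions.
    L1_MISS_PENALTY = 24     # instruction times lost on a first-level miss
    L2_MISS_PENALTY = 320    # instruction times lost when the second level also misses

    def lost_time_per_access(l1_miss_rate, l2_global_miss_rate):
        """Average instruction times lost to misses per memory access."""
        return l1_miss_rate * L1_MISS_PENALTY + l2_global_miss_rate * L2_MISS_PENALTY

    # e.g. a hypothetical 5% L1 miss rate and 1% global L2 miss rate:
    print(lost_time_per_access(0.05, 0.01))   # 0.05*24 + 0.01*320 = 4.4 instruction times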

9
Performance of the baseline system
  • Test program characteristics

10
Performance of the baseline system
  • First level cache miss rate

11
Performance of the baseline system
  • First level cache miss rate (shown in a graph)

12
How to improve Caches' performance?
  • Four categories of cache miss
  • Conflict miss
  • Compulsory miss
  • Capacity miss
  • Coherence miss

13
How to improve Caches' performance? --reducing conflict miss
  • Conflict miss example
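
As a concrete illustration (my own, standing in for the slide's figure): in a 4KB direct-mapped cache with 16B lines there are 256 lines, so any two addresses that are a multiple of 4KB apart map to the same line and evict each other on every access, even if the rest of the cache is empty.

    CACHE_SIZE = 4096                       # 4KB direct-mapped cache, as in the baseline
    LINE_SIZE = 16                          # 16B lines
    NUM_LINES = CACHE_SIZE // LINE_SIZE     # 256 lines

    def line_index(addr):
        """Direct-mapped placement: (address / line size) mod number of lines."""
        return (addr // LINE_SIZE) % NUM_LINES

    a = 0x10000
    b = a + CACHE_SIZE                      # 4KB away from a
    print(line_index(a), line_index(b))     # same index, so a and b conflict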

14
How to improve Caches' performance? --reducing conflict miss
  • Miss caching
  • Victim caching

15
reducing conflict miss --miss caching
  • The position of the miss cache

[Diagram: the miss cache sits between the L1 cache and the L2 cache, alongside the prefetch buffer.]
16
reducing conflict miss --miss caching
  • Miss cache organization
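
A minimal behavioral sketch of the organization (my own paraphrase, not the paper's figure): the miss cache is a small, fully associative, LRU-replaced buffer between L1 and L2; it is probed when L1 misses, and every line fetched from L2 is placed in the miss cache as well as in L1. Here `fetch_from_l2` is a placeholder for the second-level access.

    from collections import OrderedDict

    class MissCache:
        """Small fully associative buffer probed on L1 misses."""
        def __init__(self, entries=4):
            self.entries = entries
            self.lines = OrderedDict()           # line address -> line data, LRU order

        def access_on_l1_miss(self, line_addr, fetch_from_l2):
            if line_addr in self.lines:          # miss-cache hit: only a small extra penalty
                self.lines.move_to_end(line_addr)
                return self.lines[line_addr]
            data = fetch_from_l2(line_addr)      # miss in both: go to the second level
            self.lines[line_addr] = data         # copy kept here as well as in L1 (duplication)
            if len(self.lines) > self.entries:
                self.lines.popitem(last=False)   # evict the least recently used entry
            return data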

17
reducing conflict miss --miss caching
  • Miss cache performance

18
reducing conflict miss --victim caching
  • Weak point of the miss cache:
  • Duplication of information between the miss cache and the level 1 cache.
  • How to improve?
  • The victim cache keeps the data line thrown away (evicted) by the level 1
    cache; see the sketch below.
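
A minimal sketch of the difference (my own illustration): the victim cache is loaded only with the line L1 evicts, and on a victim-cache hit the victim line and the conflicting L1 line are swapped, so no data is duplicated.

    from collections import OrderedDict

    class VictimCache:
        """Small fully associative buffer holding lines recently evicted from L1."""
        def __init__(self, entries=4):
            self.entries = entries
            self.lines = OrderedDict()           # line address -> line data, LRU order

        def insert_victim(self, line_addr, data):
            """Called when L1 evicts a line (the victim)."""
            self.lines[line_addr] = data
            self.lines.move_to_end(line_addr)
            if len(self.lines) > self.entries:
                self.lines.popitem(last=False)   # drop the least recently used victim

        def lookup(self, line_addr):
            """Probed on an L1 miss; a hit returns the line, which is swapped back into L1."""
            return self.lines.pop(line_addr, None)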

19
reducing conflict miss --victim caching
  • Organization of victim cache

20
reducing conflict miss --victim caching
  • Performance of victim cache
  • Consult the example given on page 368.
  • Even in the extreme case where the miss cache is of no use, the victim
    cache still helps to some degree.

21
reducing conflict miss --victim caching
  • Performance of victim cache

22
reducing conflict miss --victim caching
  • The effect of L1 cache size on the victim cache's performance

23
reducing conflict miss --victim caching
  • The effect of L1 line size on the victim cache's performance

24
reducing conflict miss --victim caching
  • Victim caches and L2 caches
  • As the L1 cache size increases, a larger percentage of the misses is due
    to conflict and compulsory misses.
  • Since the victim cache works better with larger line sizes, it will also
    help the L2 cache.
  • Why is the victim cache's violation of inclusion allowed?

25
How to improve Caches' performance? --reducing compulsory and capacity miss
  • Compulsory misses are due to the first reference to data.
  • Capacity misses are due to the cache's limited capacity.
  • How to reduce them?
  • Prefetch techniques, such as longer line sizes or prefetching methods.

26
reducing compulsory and capacity miss
  • Three prefetch techniques (compared in the sketch below)
  • Prefetch always
  • Prefetch on miss
  • Tagged prefetch
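
A minimal sketch of when each policy issues a prefetch of the next sequential line (my own paraphrase of the three techniques; the tagged scheme assumes a one-bit tag per line marking lines not yet referenced since they were brought in):

    def should_prefetch_next(policy, is_miss, first_reference_since_fill):
        """Decide whether to prefetch line i+1 after an access to line i."""
        if policy == 'always':
            return True                  # prefetch after every reference
        if policy == 'on_miss':
            return is_miss               # prefetch only when the access misses
        if policy == 'tagged':
            # a demand miss, or the first reference to a prefetched line,
            # triggers a prefetch of the next line
            return is_miss or first_reference_since_fill
        raise ValueError(policy)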

27
reducing compulsory and capacity miss
  • Performance of the three prefetch techniques

28
reducing compulsory and capacity miss
--stream buffer
  • Sequential stream buffer design
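
A minimal behavioral sketch of the design (my own paraphrase, not the paper's figure; addresses are in units of cache lines): on an L1 miss the buffer starts prefetching the following lines into a FIFO; later misses are compared against the head of the FIFO, and a head hit moves that line into L1 and prefetches one more line down the stream, while a head miss flushes the buffer and restarts it at the new miss address.

    from collections import deque

    class StreamBuffer:
        """FIFO of sequentially prefetched line addresses, filled after an L1 miss."""
        def __init__(self, entries=4):
            self.entries = entries
            self.fifo = deque()                  # prefetched line addresses, head first
            self.next_addr = None                # next sequential line to prefetch

        def restart(self, miss_addr):
            """Flush the buffer and start a new stream after the missing line."""
            self.fifo.clear()
            self.next_addr = miss_addr + 1
            self._fill()

        def _fill(self):
            while len(self.fifo) < self.entries:
                self.fifo.append(self.next_addr) # model issuing a prefetch for this line
                self.next_addr += 1

        def on_l1_miss(self, miss_addr):
            """Return True if the head of the buffer satisfies the miss."""
            if self.fifo and self.fifo[0] == miss_addr:
                self.fifo.popleft()              # head hit: line moves into L1
                self._fill()                     # keep prefetching down the stream
                return True
            self.restart(miss_addr)              # head miss: flush and restart
            return False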

29
reducing compulsory and capacity miss
--stream buffer
  • Sequential stream buffer performance
  • Four-entry instruction stream buffer, with the 4KB instruction and data
    caches, each with 16B lines.

30
reducing compulsory and capacity miss
--multi-way stream buffer
  • Four-way stream buffer design
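
A minimal sketch of the multi-way extension (my own illustration, reusing the StreamBuffer sketch above): the miss address is compared against the heads of all four buffers in parallel, and when none of them hits, the least recently used buffer is flushed and restarted at the new miss address, so several interleaved streams can be followed at once.

    class MultiWayStreamBuffer:
        """Four stream buffers operating in parallel, replaced LRU."""
        def __init__(self, ways=4, entries=4):
            self.buffers = [StreamBuffer(entries) for _ in range(ways)]
            self.lru = list(range(ways))         # least recently used way first

        def on_l1_miss(self, miss_addr):
            for way, buf in enumerate(self.buffers):
                if buf.fifo and buf.fifo[0] == miss_addr:
                    buf.fifo.popleft()           # head hit in this way
                    buf._fill()
                    self.lru.remove(way)         # mark the way most recently used
                    self.lru.append(way)
                    return True
            victim = self.lru.pop(0)             # all ways missed: take the LRU way
            self.buffers[victim].restart(miss_addr)
            self.lru.append(victim)
            return False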

31
reducing compulsory and capacity miss
--multi-way stream buffer
  • Four-way stream buffer performance

32
reducing compulsory and capacity miss
--multi-way stream buffer
  • 16B line size

33
reducing compulsory and capacity miss
--multi-way stream buffer
  • 4KB cache

34
Conclusion
  • Performance of the base system with
  • A 4-entry data victim cache
  • An instruction stream buffer
  • A 4-way data stream buffer
  • Base system: on-chip 4KB instruction cache and 4KB data cache with 16B
    lines and a 24-cycle miss penalty; 1MB L2 cache with 128B lines and a
    320-cycle miss penalty.

35
Conclusion
  • Performance of the improved system