Exploiting Memory Hierarchy, Chapter 7 — PowerPoint presentation transcript
Provided by: kumarm8 (https://cse.buffalo.edu)

1
Exploiting Memory HierarchyChapter 7
  • B.Ramamurthy

2
Direct Mapped Cache: the Idea
(Figure: cache alongside main memory)
  • All addresses whose low-order bits are 001 map to
    the purple cache slot, all addresses whose low-order
    bits are 101 map to the blue cache slot, and so on.
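The mapping above can be sketched in Python. This assumes an 8-slot cache, so the three low-order bits of the block address select the slot; the names are illustrative, not from the slides:

```python
NUM_SLOTS = 8  # 8-slot direct-mapped cache => 3 index bits

def cache_slot(block_address: int) -> int:
    # The slot is the block address modulo the number of slots,
    # i.e. its low-order 3 bits for an 8-slot cache.
    return block_address % NUM_SLOTS

# Addresses ending in binary 001 all map to the same slot:
print(cache_slot(0b00001), cache_slot(0b01001), cache_slot(0b11001))  # 1 1 1
# Addresses ending in binary 101 map to a different slot:
print(cache_slot(0b00101), cache_slot(0b10101))  # 5 5
```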
3
Cache Organization
  • Content-addressable memory
  • Fully associative
  • Set associative (Fig. 7.7)
(Figure: cache memory organization, address and data
views, vs. regular memory organization)
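The three placement policies listed above differ only in how many cache slots a given memory block may occupy. A minimal sketch, assuming 8 cache blocks and one conventional layout where each set occupies consecutive slots (the function name and layout are illustrative):

```python
NUM_BLOCKS = 8  # total cache blocks (illustrative size)

def candidate_slots(block_addr: int, ways: int) -> list:
    """Slots where a block may live.
    ways=1: direct mapped; 1<ways<NUM_BLOCKS: set associative;
    ways=NUM_BLOCKS: fully associative."""
    num_sets = NUM_BLOCKS // ways
    s = block_addr % num_sets           # set the block maps to
    return [s * ways + w for w in range(ways)]

print(candidate_slots(12, 1))  # direct mapped: one possible slot
print(candidate_slots(12, 2))  # 2-way set associative: two slots
print(candidate_slots(12, 8))  # fully associative: any slot
```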
4
Multi-word Cache Block
(Figure: a word address selects a data block from
ordinary memory, then a word within that block)
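The block-then-word selection in the figure amounts to splitting a word address into tag, index, and offset fields. A sketch assuming 4-word blocks and 8 cache blocks (these sizes, and the function name, are assumptions for illustration):

```python
WORDS_PER_BLOCK = 4    # => 2 block-offset bits (assumed)
NUM_CACHE_BLOCKS = 8   # => 3 index bits (assumed)

def split_word_address(word_addr: int):
    offset = word_addr % WORDS_PER_BLOCK    # word within the block
    block = word_addr // WORDS_PER_BLOCK    # block number in memory
    index = block % NUM_CACHE_BLOCKS        # which cache block it maps to
    tag = block // NUM_CACHE_BLOCKS         # distinguishes blocks sharing the index
    return tag, index, offset

print(split_word_address(457))  # (14, 2, 1)
```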
5
Address → Cache Block
  • Floor(Address / bytes per block) → block in main
    memory
  • (Block in memory mod blocks in cache) → cache block
  • Example (4 bytes per block, 8 cache blocks):
  • Floor(457 / 4) → 114
  • 114 mod 8 → 2

6
Handling Cache Misses
  • Send the original PC value to memory.
  • Perform a read on main memory.
  • Write the cache entry: put the data from memory in
    the data portion of the entry, write the upper bits
    of the address into the tag field, and turn the
    valid bit on.
  • Restart the instruction that missed.

7
Handling Writes
  • Write-through: a scheme in which writes always
    update both the cache and the memory, ensuring
    that data is always consistent between the two.
  • Write-back: a scheme that handles writes by
    updating values only in the block in the cache,
    then writing the modified block back to main
    memory when the block is replaced.
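The contrast between the two schemes can be sketched for a single cache entry; the dirty bit marks a write-back block that differs from memory (names are illustrative):

```python
memory = {0: 0}
entry = {"data": 0, "dirty": False}

def write_through(addr, value):
    entry["data"] = value    # update the cache ...
    memory[addr] = value     # ... and memory, so the two stay consistent

def write_back(addr, value):
    entry["data"] = value    # update only the cached block
    entry["dirty"] = True    # remember it now differs from memory

def evict(addr):
    if entry["dirty"]:       # modified block written back on replacement
        memory[addr] = entry["data"]
        entry["dirty"] = False

write_back(0, 42)
print(memory[0])  # memory is stale until the block is replaced
evict(0)
print(memory[0])  # now 42
```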

8
Example
  • SPEC2000
  • CPI is 1.0 with no misses.
  • Each miss incurs 100 extra cycles; a miss occurs
    10% of the time.
  • Average CPI = 1 + 100 × 0.1 = 1 + 10 = 11 cycles
    (not good!)
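The arithmetic on this slide is just a stall-cycle weighting:

```python
base_cpi = 1.0        # CPI with no misses
miss_penalty = 100    # extra cycles per miss
miss_rate = 0.10      # a miss on 10% of accesses

avg_cpi = base_cpi + miss_penalty * miss_rate
print(avg_cpi)  # 11.0 -- memory stalls dominate (not good!)
```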

9
An Example Cache: The Intrinsity FastMath Processor
  • 12-stage pipeline
  • When operating at peak speed, the processor can
    request both an instruction and a data word on
    every clock cycle.
  • Separate instruction and data caches are used.
  • Each cache is 16 KB, or 4K words, with 16-word
    blocks.

10
Fig. 7.9
  • 256 blocks with 16 words per block.
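The 256-block figure follows from the sizes on the previous slide, assuming 4-byte words:

```python
cache_bytes = 16 * 1024            # 16 KB per cache
word_bytes = 4                     # assumed 4-byte words
words = cache_bytes // word_bytes  # 4K words
blocks = words // 16               # 16-word blocks
print(words, blocks)  # 4096 256
```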