April 17 - PowerPoint PPT Presentation

Slides: 16
Provided by: gary290
Learn more at: https://www.cs.unc.edu

Transcript and Presenter's Notes

1
April 17
  • 5 classes to go!
  • Read 7.3-7.5
  • Section 7.5 especially important!
  • New Assignment on the web

2
Cache Block Size and Hit Rate
  • Increasing the block size tends to decrease the
    miss rate (up to a point)
  • Use split instruction/data caches because code has
    more spatial locality than data

3
Cache Performance
  • Simplified model
    execution time = (execution cycles + stall cycles) × cycle time
    stall cycles = # of instructions × miss ratio × miss penalty
  • Two ways of improving performance
  • decreasing the miss ratio
  • decreasing the miss penalty
  • What happens if we increase block size?
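The two formulas above can be sketched directly; the instruction count, miss ratio, penalty, and cycle time below are hypothetical example values, not from the slides:

```python
# Simplified cache performance model from the slide.

def stall_cycles(num_instructions, miss_ratio, miss_penalty_cycles):
    """stall cycles = # of instructions * miss ratio * miss penalty"""
    return num_instructions * miss_ratio * miss_penalty_cycles

def execution_time(execution_cycles, stalls, cycle_time_ns):
    """execution time = (execution cycles + stall cycles) * cycle time"""
    return (execution_cycles + stalls) * cycle_time_ns

# Hypothetical workload: 1M instructions, 5% miss ratio, 100-cycle penalty
stalls = stall_cycles(1_000_000, 0.05, 100)
print(stalls)                                  # stall cycles
print(execution_time(1_000_000, stalls, 2.0))  # total time at 2 ns/cycle
```

Note how, with these numbers, stalls dominate: 5M stall cycles against only 1M execution cycles, which is why reducing the miss ratio or the miss penalty pays off.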

4
Associative Cache
  • Compared to direct mapped, give a series of
    references that
  • results in a lower miss ratio using a 2-way set
    associative cache
  • results in a higher miss ratio using a 2-way set
    associative cache
  • assuming we use the least recently used (LRU)
    replacement strategy
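One way to construct such reference sequences is with a toy simulator (a sketch of mine, not from the slides). Both caches below hold two one-word blocks; addresses are block numbers:

```python
# Two-block direct-mapped cache vs. a 2-way set-associative cache
# (one set of two blocks) with LRU replacement.

def direct_mapped_misses(refs, num_blocks=2):
    cache = [None] * num_blocks
    misses = 0
    for addr in refs:
        idx = addr % num_blocks          # block address mod cache size
        if cache[idx] != addr:
            misses += 1
            cache[idx] = addr
    return misses

def two_way_lru_misses(refs):
    s = []                               # one set, ordered LRU -> MRU
    misses = 0
    for addr in refs:
        if addr in s:
            s.remove(addr)               # hit: move to MRU position
        else:
            misses += 1
            if len(s) == 2:
                s.pop(0)                 # evict least recently used
        s.append(addr)
    return misses

# 0,2,0,2: both blocks map to index 0 in direct-mapped, so it thrashes
print(direct_mapped_misses([0, 2, 0, 2]))        # 4 misses
print(two_way_lru_misses([0, 2, 0, 2]))          # 2 misses

# 0,1,2,0,1,2: LRU always evicts exactly the block needed next
print(direct_mapped_misses([0, 1, 2, 0, 1, 2]))  # 5 misses
print(two_way_lru_misses([0, 1, 2, 0, 1, 2]))    # 6 misses
```

The second sequence is the classic pathological case for LRU: with a working set one block larger than the set, every reference evicts the next block to be used.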

5
Associative Performance
6
Multilevel Caches
  • We can reduce the miss penalty with a 2nd level
    cache
  • Add a second level cache
  • often primary cache is on the same chip as the
    processor
  • use SRAMs to add another level of cache between
    the primary cache and main memory (DRAM)
  • miss penalty goes down if data is in 2nd level
    cache
  • Example
  • Base CPI = 1.0 on a 500 MHz machine with a 5% miss
    rate and 200 ns DRAM access
  • Adding a 2nd level cache with 20 ns access time
    decreases the miss rate to 2%
  • Using multilevel caches
  • try and optimize the hit time on the 1st level
    cache
  • try and optimize the miss rate on the 2nd level
    cache
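The example above can be worked through numerically. This sketch assumes the textbook-style accounting: penalties in whole clock cycles, and the 2% rate taken as the fraction of instructions that miss all the way to DRAM:

```python
# Working the slide's multilevel cache example.
clock_ns = 1e3 / 500            # 500 MHz -> 2 ns per cycle
dram_penalty = 200 / clock_ns   # 200 ns DRAM access = 100 cycles
l2_penalty = 20 / clock_ns      # 20 ns L2 access = 10 cycles

base_cpi = 1.0
cpi_no_l2 = base_cpi + 0.05 * dram_penalty

# With L2: every primary miss (5%) pays the L2 access time; the 2% that
# also miss in the L2 pay the full DRAM penalty on top.
cpi_with_l2 = base_cpi + 0.05 * l2_penalty + 0.02 * dram_penalty

print(cpi_no_l2)                # 6.0
print(cpi_with_l2)              # 3.5
print(cpi_no_l2 / cpi_with_l2)  # speedup from adding the L2
```

So under these assumptions the second-level cache cuts effective CPI from 6.0 to 3.5, roughly a 1.7× speedup.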

7
Virtual Memory
  • Main memory is a CACHE for disk
  • Advantages
  • illusion of having more physical memory
  • program relocation
  • protection

8
Pages: Virtual Memory Blocks
  • Page fault: the data is not in memory, so retrieve
    it from disk
  • huge miss penalty, thus pages should be fairly
    large (e.g., 4KB)
  • reducing page faults is important (LRU is worth
    the price)
  • can handle the faults in software instead of
    hardware
  • write-through is too expensive, so we use
    write-back

9
Page Tables
10
Page Tables
One page table per process!
11
Where are the page tables?
  • Page tables are potentially BIG
  • 4 KB pages, 4 MB program: 1k page table entries
    per program!
  • Powerpoint 14MB
  • Acrobat Distiller 13MB
  • Acrobat 8MB
  • MailCall 8MB
  • HacktiveDesktop 5MB (5 copies!)
  • iCalMinder 4MB
  • iCal 4MB
  • Mulberry 2MB
  • 33 More Processes!
  • May have to page the page tables!
  • Have to look up EVERY address!

12
Making Address Translation Fast
TLB (Translation Lookaside Buffer): a CACHE for
address translation
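A toy sketch of the idea (mine, not from the slides): the TLB sits in front of the page table, so a hit returns the frame immediately and a miss walks the page table and refills the TLB. The page-table contents here are made-up example mappings:

```python
PAGE_SHIFT = 12                       # 4 KB pages
page_table = {0: 7, 1: 3, 2: 9}       # virtual page -> physical frame
tlb = {}                              # the "cache for address translation"

def translate(vaddr):
    vpn = vaddr >> PAGE_SHIFT         # virtual page number
    offset = vaddr & 0xFFF            # offset within the page
    if vpn not in tlb:                # TLB miss: consult the page table
        tlb[vpn] = page_table[vpn]    # (real hardware also evicts entries)
    return (tlb[vpn] << PAGE_SHIFT) | offset

print(hex(translate(0x1234)))  # vpn 1 -> frame 3, first access misses
print(hex(translate(0x1abc)))  # second access to vpn 1 hits in the TLB
```

The payoff is that after the first access to a page, later translations skip the (potentially slow, possibly multi-level) page-table lookup entirely.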
13
What is in the page table?
  • Address: upper bits of the physical memory address,
    OR disk address of the page if not in memory
  • Valid bit, set if page is in memory
  • Use bit, set when page is accessed
  • Protection bit (or bits) to specify access
    permissions
  • Dirty bit, set if page has been written
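The fields above can be packed into a single word, as real page-table entries are. The bit positions below are illustrative, not those of any particular architecture:

```python
# One-word page-table entry: frame number in the upper bits,
# status/protection flags in the low bits.
VALID, USE, DIRTY, WRITABLE = 1 << 0, 1 << 1, 1 << 2, 1 << 3

def make_pte(frame, valid=True, writable=False):
    flags = (VALID if valid else 0) | (WRITABLE if writable else 0)
    return (frame << 12) | flags      # upper bits: frame (or disk address)

def describe(pte):
    return {
        "frame_or_disk_addr": pte >> 12,
        "valid": bool(pte & VALID),        # page is in memory
        "use": bool(pte & USE),            # set when page is accessed
        "dirty": bool(pte & DIRTY),        # set when page is written
        "writable": bool(pte & WRITABLE),  # protection bit
    }

pte = make_pte(frame=42, writable=True)
print(describe(pte))  # frame 42, valid, writable, not yet used or dirty
```

The valid bit plays the same role as in a cache: if it is clear, the address field is reinterpreted as the page's location on disk.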

14
Integrating TLB and Cache
15