Chapter 5 Microprocessor Caches

Transcript and Presenter's Notes
1
Chapter 5 Microprocessor Caches
  • Introduction
  • System issues
  • Cache organization
  • Cache update policies
  • Example microprocessor caches
  • Cache coherency issues
  • Cache design and system performance

2
Introduction
  • Cache
  • A fast RAM-type memory
  • Between relatively fast CPU and slower main
    memory
  • Cache hit vs. cache miss → hit rate
  • Caches improve overall system performance by
  • Providing information to the µP faster than main
    memory (improves the read cycle)
  • Receiving information from the µP at a fast rate
    and allowing it to continue processing while the
    cache independently handles the write to main
    memory (improves the write cycle)
  • Improving memory bus utilization by reducing the
    traffic on the memory bus (improves bus
    utilization) — a short timing sketch follows this slide
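A minimal C sketch of how the hit rate ties these points together. The cache and memory latencies are illustrative assumptions, not figures from the chapter, and a serial (look-through-style) miss penalty is assumed.

    #include <stdio.h>

    /* Effective access time for a single cache level:
     *   t_eff = h * t_cache + (1 - h) * (t_cache + t_mem)   */
    int main(void)
    {
        double t_cache = 10.0;   /* assumed cache access time, ns  */
        double t_mem   = 70.0;   /* assumed main-memory access, ns */

        for (double h = 0.80; h <= 1.0001; h += 0.05) {
            double t_eff = h * t_cache + (1.0 - h) * (t_cache + t_mem);
            printf("hit rate %.2f -> effective access %.1f ns\n", h, t_eff);
        }
        return 0;
    }

The higher the hit rate, the closer the effective access time gets to the raw cache latency, which is why the rest of the chapter focuses on organization and policies that keep the hit rate high.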

3
Chap. 5 Microprocessor Caches
  • Introduction
  • System Issues
  • Configuration
  • External caches
  • Virtual and Physical Caches
  • Look-through and Look-aside
  • Multi-level Caches
  • Write Policies
  • Organization
  • Update Policies
  • Coherency Issues
  • Design and System Performance

4
Cache Configuration
  • External caches
  • Instruction
  • Data
  • Unified (both instructions and data)
  • Virtual and Physical Caches
  • Look-through and Look-aside
  • Multi-level Caches

5
Unified External Cache
  • Intel 80386
  • Unified
  • External

6
Separate External Cache
  • MIPS R2000/R3000
  • On-chip cache controller
  • External data cache using standard SRAM

7
Separate External Cache w. MMU
  • Motorola 88100 RISC
  • Two separate external buses
  • data P bus
  • instruction P bus
  • virtual addresses sent to the external CMMUs
  • CMMU with 16K cache, up to 8x

8
On-Chip Caches
  • Motorola 680x0, Intel 80486, i860, MIPS R4000
  • 80486 unified; the others split-cache
  • Managed by hardware
  • Fast; only system code can access it in
    supervisor mode

9
Virtual and Physical Caches
  • Physical (real) caches
  • Receive physical addresses
  • From the MMU, after virtual-to-physical conversion
  • Applied to on-chip caches
  • Ex: 68040, 80486 (on-chip)
  • Virtual (logical) caches
  • Receive unconverted virtual addresses
  • Ex: 68020/30, i860, MIPS R4000 (on-chip)

Synonyms / virtual address aliases (illustrated in the sketch below)
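A hedged illustration of the synonym problem: if a virtual cache is indexed with address bits that lie above the page offset, two virtual aliases of one physical page can land in different cache sets. All sizes and addresses below are assumptions for illustration.

    #include <stdio.h>
    #include <stdint.h>

    /* Assumed geometry: 4 KB pages, 64 KB direct-mapped virtual cache with
     * 32-byte lines. The 11 index bits (5..15) reach above the 12-bit page
     * offset, so aliases of one physical line can occupy two different sets. */
    #define LINE_BITS  5
    #define INDEX_BITS 11

    static unsigned cache_set(uint32_t addr)
    {
        return (addr >> LINE_BITS) & ((1u << INDEX_BITS) - 1);
    }

    int main(void)
    {
        /* Two hypothetical virtual addresses mapped by the OS to the same
         * physical frame (same page offset, different virtual pages). */
        uint32_t va1 = 0x00012080;
        uint32_t va2 = 0x0003F080;

        printf("set chosen for va1: %u\n", cache_set(va1));
        printf("set chosen for va2: %u\n", cache_set(va2));
        /* A physical (real) cache, indexed after the MMU translation,
         * would place both references in the same set and avoid the alias. */
        return 0;
    }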
10
Look-through vs. Look-aside
  • Look-through: serial read
  • Look-aside: parallel read

11
Look-through operation flowchart
12
Look-through time delay
  • Miss access time = cache access + main memory
    access (sketched below)
  • System concurrency
  • Separate buses decouple the µP from the rest of
    the system
  • The µP can operate out of its cache while other
    bus master modules use the memory bus to access
    main memory
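A small sketch of the serial look-through timing, using assumed latencies:

    #include <stdio.h>

    /* Look-through: the cache is checked first; main memory is accessed
     * only after a miss, so the miss penalty is the sum of both latencies. */
    int main(void)
    {
        double t_cache = 10.0;  /* assumed cache lookup time, ns  */
        double t_mem   = 70.0;  /* assumed main-memory access, ns */

        double t_hit  = t_cache;
        double t_miss = t_cache + t_mem;   /* lookup penalty added to the miss */

        printf("look-through: hit %.0f ns, miss %.0f ns\n", t_hit, t_miss);
        return 0;
    }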

13
Disadvantages of Look-through
  • Cache miss
  • Lookup penalty
  • Total access time = cache access + main memory
    access
  • While main memory is being accessed, the
    execution flow of the processor stops
  • Two buses
  • µP local bus and memory (system) bus
  • Makes the cache more complex
  • Implement dual-ported memory to allow concurrent
    operations on the local and system buses

14
Single-Processor look-through
15
Dual-Processor look-through
16
Look-aside operation flowchart
17
Look-aside time delays
  • Cache and main memory are accessed simultaneously
  • Miss access time = main memory access
  • Cache hit
  • Data → µP at cache rates
  • Cache controller → aborts the main memory access
    (compare the sketch below)
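For comparison with the look-through sketch, a hedged look-aside timing estimate with the same assumed latencies:

    #include <stdio.h>

    /* Look-aside: cache and main memory are started in parallel.
     * On a hit the controller aborts the memory cycle; on a miss the
     * whole penalty is just the main-memory access (no lookup penalty). */
    int main(void)
    {
        double t_cache = 10.0;  /* assumed cache access, ns       */
        double t_mem   = 70.0;  /* assumed main-memory access, ns */

        double t_hit  = t_cache;   /* data returned at cache speed */
        double t_miss = t_mem;     /* overlapped with the lookup   */

        printf("look-aside: hit %.0f ns, miss %.0f ns\n", t_hit, t_miss);
        return 0;
    }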

18
Look-aside architecture
  • Advantages
  • No lookup penalty
  • Optional and can be removed at any time
  • Simpler cache (supports only one address/data bus)
  • Disadvantages
  • The memory bus and memory subsystem are kept busy
    (on both hits and misses)
  • System concurrency is reduced

19
Multi-level Caches
  • First-level caches are usually smaller (4–64K)
  • Ex: the external cache used with the 80386
  • Second level caches
  • May be an external cache used with a µP that has
    an on-chip cache
  • Typically include all the instructions/data of
    the 1st level cache and more
  • Larger than the 1st level (64–512K)
  • Ex: 80486 with the external 485 Turbocache module
    (cache controller + SRAM cache); a two-level
    timing sketch follows
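A rough two-level model of the effective access time, assuming an on-chip first level backed by a larger external second level; hit rates and latencies are illustrative assumptions:

    #include <stdio.h>

    /* Two-level effective access time:
     *   t_eff = h1*t1 + (1-h1) * ( h2*t2 + (1-h2)*t_mem )
     * (each access time is treated as inclusive of the lower-level
     *  lookups, for simplicity). */
    int main(void)
    {
        double h1 = 0.90, t1 = 10.0;    /* assumed 1st-level hit rate / ns */
        double h2 = 0.80, t2 = 30.0;    /* assumed 2nd-level hit rate / ns */
        double t_mem = 100.0;           /* assumed main-memory access, ns  */

        double t_eff = h1 * t1 + (1.0 - h1) * (h2 * t2 + (1.0 - h2) * t_mem);
        printf("two-level effective access: %.1f ns\n", t_eff);
        return 0;
    }

Because the second level typically contains everything the first level holds and more, most first-level misses are caught before they reach main memory.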

20
Chap. 5 Microprocessor Caches
  • Introduction
  • System Issues
  • Configuration
  • Write Policies
  • Write-through
  • Buffered write-through
  • Write-back (or copy-back)
  • Organization
  • Update Policies
  • Coherency Issues
  • Design and System Performance

21
Write-through
  • When should main memory be updated?
  • Write-through
  • All memory writes are sent to main memory
  • Maintains cache coherency
  • Any other bus master device has simpler access
    (it can read main memory directly)
  • Generates more traffic between the µP and main
    memory (a minimal sketch follows)
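A minimal write-through sketch, not taken from the text: every write is sent to main memory, and the cache copy is updated on a hit, so memory stays coherent at the cost of extra bus traffic. The line geometry and the bus_write helper are assumptions.

    #include <stdio.h>
    #include <stdint.h>

    #define NLINES 256

    struct line { uint32_t tag; int valid; uint8_t data[32]; };
    static struct line cache[NLINES];

    /* Stand-in for the real bus cycle to main memory (hypothetical helper). */
    static void bus_write(uint32_t addr, uint8_t value)
    {
        printf("bus write: addr 0x%08x <- 0x%02x\n", (unsigned)addr, value);
    }

    /* Write-through: update the cache copy on a hit, and send every
     * write to main memory as well, so memory stays coherent. */
    static void cache_write_through(uint32_t addr, uint8_t value)
    {
        struct line *l = &cache[(addr >> 5) & (NLINES - 1)];

        if (l->valid && l->tag == (addr >> 13))   /* write hit: update cache */
            l->data[addr & 31] = value;

        bus_write(addr, value);                   /* always write main memory */
    }

    int main(void)
    {
        cache_write_through(0x1000, 0xAB);        /* both writes reach the bus */
        cache_write_through(0x1001, 0xCD);
        return 0;
    }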

22
80486 2nd-level implementation
  • PWT = 1 → forces write-through for the page, even
    on a 2nd-level cache that is write-back
  • PCD = 1 → internal cache disabled; the 2nd-level
    cache must also be disabled (case of shared or
    non-cacheable data)
  • PCD = 0 → allows caching of data from that page
    (case of non-shared or cacheable data); see the
    bit-field sketch below
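For concreteness, PWT and PCD are per-page control bits in the x86 page-table entry (bits 3 and 4). A small sketch of reading them; the entry value is a made-up example.

    #include <stdio.h>
    #include <stdint.h>

    #define PTE_PWT (1u << 3)    /* page write-through  */
    #define PTE_PCD (1u << 4)    /* page cache disable  */

    int main(void)
    {
        uint32_t pte = 0x00045019;   /* hypothetical page-table entry value */

        printf("PWT = %d (force write-through for this page)\n", !!(pte & PTE_PWT));
        printf("PCD = %d (disable caching of this page)\n",      !!(pte & PTE_PCD));
        return 0;
    }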

23
Buffered write-through
  • Write operation of the CPU (1 clock cycle) vs.
    write to main memory (several clock cycles)
  • Without buffers, a write following another write
    would have to wait (see the sketch below)
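A hedged sketch of the buffering idea: a small FIFO absorbs each CPU write at cache speed and drains it to main memory when the bus is free, so back-to-back writes stall only when the buffer is full. The depth and helper names are assumptions.

    #include <stdio.h>
    #include <stdint.h>

    #define WBUF_DEPTH 4                   /* assumed buffer depth */

    struct wb_entry { uint32_t addr; uint8_t value; };
    static struct wb_entry wbuf[WBUF_DEPTH];
    static int head, tail, count;

    /* CPU side: absorb the write in one cycle, or report a stall. */
    static int wbuf_post(uint32_t addr, uint8_t value)
    {
        if (count == WBUF_DEPTH)
            return -1;                     /* buffer full: CPU must wait */
        wbuf[tail].addr = addr;
        wbuf[tail].value = value;
        tail = (tail + 1) % WBUF_DEPTH;
        count++;
        return 0;
    }

    /* Memory side: drain one entry whenever the memory bus is free. */
    static void wbuf_drain_one(void)
    {
        if (count == 0)
            return;
        printf("drain: 0x%08x <- 0x%02x\n",
               (unsigned)wbuf[head].addr, wbuf[head].value);
        head = (head + 1) % WBUF_DEPTH;
        count--;
    }

    int main(void)
    {
        for (uint32_t i = 0; i < 6; i++)   /* back-to-back CPU writes */
            if (wbuf_post(0x2000 + i, (uint8_t)i) < 0)
                printf("write %u stalls: buffer full\n", (unsigned)i);
        while (count)                      /* memory catches up later */
            wbuf_drain_one();
        return 0;
    }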

24
Write-back (copy-back)
  • Cache hit during memory write
  • Cache controller writes to cache
  • Cache is flagged with a dirty or modified bit
  • Data will be sent to main memory later
  • when the line is replaced because of a cache miss
  • when the bus is not in use
  • when the CPU performs a context switch
  • Most complex policy to implement
  • Reduces bus traffic and eliminates the bottleneck
    of waiting for each memory write (see the sketch
    below)
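A compact write-back sketch under illustrative assumptions: a write hit only updates the cache and sets the dirty bit, and the modified line is copied back to main memory when it is evicted. The geometry and the bus helpers are assumptions.

    #include <stdio.h>
    #include <stdint.h>

    #define NLINES 256

    struct line { uint32_t tag; int valid, dirty; uint8_t data[32]; };
    static struct line cache[NLINES];

    /* Stand-ins for real bus cycles (hypothetical helpers). */
    static void bus_write_line(uint32_t tag, uint32_t idx)
    {
        printf("copy-back of dirty line: tag 0x%x, set %u\n",
               (unsigned)tag, (unsigned)idx);
    }
    static void bus_read_line(uint32_t addr) { (void)addr; /* line fill */ }

    /* Write-back: writes go to the cache only and set the dirty bit;
     * main memory is updated when the dirty line is evicted. */
    static void cache_write_back(uint32_t addr, uint8_t value)
    {
        uint32_t idx = (addr >> 5) & (NLINES - 1), tag = addr >> 13;
        struct line *l = &cache[idx];

        if (!(l->valid && l->tag == tag)) {     /* miss: evict the old line   */
            if (l->valid && l->dirty)
                bus_write_line(l->tag, idx);    /* dirty data sent to memory  */
            bus_read_line(addr & ~31u);         /* fetch the new line         */
            l->tag = tag; l->valid = 1; l->dirty = 0;
        }
        l->data[addr & 31] = value;             /* write to cache only        */
        l->dirty = 1;                           /* mark the line modified     */
    }

    int main(void)
    {
        cache_write_back(0x1000, 0xAA);  /* no bus write: line marked dirty    */
        cache_write_back(0x3000, 0xBB);  /* same set, new tag: dirty copy-back */
        return 0;
    }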