On-Chip Cache Analysis - PowerPoint PPT Presentation

Transcript and Presenter's Notes

1
On-Chip Cache Analysis
  • A Parameterized Cache Implementation for a
    System-on-Chip RISC CPU

2
Presentation Outline
  • Informal Introduction
  • Underlying Design: xr16
  • Cache Design Issues
  • Implementation Details
  • Results &amp; Conclusion
  • Future Work
  • Questions

3
Informal Introduction
  • Field Programmable Gate Array (FPGAs)
  • Verilog HDL
  • System-on-Chip (SoC)
  • Reduced Instruction Set Computer (RISC)
  • Caches
  • Project Theme

4
Underlying Design: xr16
  • Classical pipelined RISC
  • Big-endian, von Neumann architecture
  • Sixteen 16-bit registers
  • Forty-two instructions (16-bit)
  • Result Forwarding, Branch Annulments, Interlocked
    instructions

5
Underlying Design: xr16 (contd)
  • Internal and external Buses (CPU clocked)
  • Pipelined Memory Interface
  • Single-cycle read, 3-cycle write
  • DMA and Interrupt Handling Support
  • Ported Compiler and Assembler

6
Underlying Design: xr16 (contd)
  • Block Diagram

7
Underlying Design: xr16 (contd)
  • Datapath

8
Underlying Design: xr16 (contd)
  • Memory Preferences

9
Underlying Design: xr16 (contd)
  • RAM Interface

10
Cache Design Issues
  • Cache Size
  • Line Size
  • Fetch Algorithm
  • Placement Policy
  • Replacement Policy
  • Split vs. Unified Cache
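
The placement policy decides where a given memory block may live in the cache. As a behavioral illustration only (Python with made-up geometry; the actual design is parameterized Verilog, and these sizes are not the project's), a direct-mapped lookup splits each address into tag, index, and offset fields:

```python
# Hypothetical direct-mapped cache geometry (illustrative, not the xr16
# project's actual parameters): 16 lines of 4 bytes each.
LINE_SIZE = 4        # bytes per cache line
NUM_LINES = 16       # number of lines in the cache

OFFSET_BITS = LINE_SIZE.bit_length() - 1   # 2 bits select a byte in the line
INDEX_BITS = NUM_LINES.bit_length() - 1    # 4 bits select the line

def split_address(addr):
    """Decompose an address into (tag, index, offset) for a direct-mapped cache."""
    offset = addr & (LINE_SIZE - 1)
    index = (addr >> OFFSET_BITS) & (NUM_LINES - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# Two addresses with the same index but different tags conflict in a
# direct-mapped cache; set or full associativity relaxes this.
print(split_address(0x0040))  # (1, 0, 0)
print(split_address(0x0080))  # (2, 0, 0)
```

Addresses 0x0040 and 0x0080 both map to line 0 here, which is exactly the conflict that the set-associative and fully associative placement options avoid.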

11
Cache Design Issues (contd)
  • Write Back Strategy
  • Write Allocate Policy
  • Blocking vs. Non-Blocking
  • Pipelined Transactions
  • Virtually addressed Caches
  • Multilevel Caches

12
Cache Design Issues (contd)
Parameter              Options
Cache Size             32 to 256K data bits
Placement Policy       Direct Mapped, Set Associative, Fully Associative
Replacement Policy     FIFO, Random
Write Back Strategy    Write Back, Write Through
Write Allocate Policy  Write Allocate, Write No Allocate
13
Implementation Details
  • Configurable Parameters
  • Cache Size
  • Placement Strategy
  • Write Back Policy
  • Write Allocate Policy
  • Replacement Policy
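
The configurable parameters above can be pictured as one configuration record. A sketch in Python (field names, defaults, and ranges are illustrative; the project's actual Verilog parameter names are not shown in the slides):

```python
from dataclasses import dataclass

@dataclass
class CacheConfig:
    """Illustrative model of the parameter space described in the slides."""
    size_bits: int = 1024             # 32 to 256K data bits
    placement: str = "direct"         # "direct" | "set_assoc" | "full_assoc"
    replacement: str = "fifo"         # "fifo" | "random"
    write_policy: str = "write_back"  # "write_back" | "write_through"
    write_allocate: bool = True       # allocate on write miss?

    def validate(self):
        """Reject configurations outside the documented option space."""
        assert 32 <= self.size_bits <= 256 * 1024
        assert self.placement in ("direct", "set_assoc", "full_assoc")
        assert self.replacement in ("fifo", "random")
        assert self.write_policy in ("write_back", "write_through")

cfg = CacheConfig()
cfg.validate()
```

In the real design these choices are elaboration-time Verilog parameters, so each configuration synthesizes to different hardware rather than being selected at run time.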

14
Implementation Details (contd)
15
Implementation Details (contd)
16
Implementation Details (contd)
1. Miss → Read → Replacement NOT Required: let the
memory operation complete and place the data fetched
from memory in the cache.
17
Implementation Details (contd)
2. Miss → Read → Replacement Required: initiate a
write memory operation and write back the set to
be replaced, then initiate a read operation for the
desired data.
18
Implementation Details (contd)
3. Miss → Write → No Allocate: let the memory
operation complete and do nothing else.
19
Implementation Details (contd)
4. Miss → Write → Allocate → Write Through: let
the memory operation complete and place the new
data in the cache.
20
Implementation Details (contd)
5. Miss → Write → Allocate → Write Back →
Replacement NOT Required: cancel the memory
operation, update only the cache, and mark the
data dirty.
21
Implementation Details (contd)
6. Miss → Write → Allocate → Write Back →
Replacement Required: instead of writing the data
that caused the write miss to memory, write back
the set to be replaced, then update the cache with
the data that caused the miss.
22
Implementation Details (contd)
7. Hit → Read: cancel the memory operation and
provide the data for either an instruction fetch
or a data load.
23
Implementation Details (contd)
8. Hit → Write → Write Through: let the memory
operation complete, then update the cache when it
does.
24
Implementation Details (contd)
9. Hit → Write → Write Back: cancel the memory
operation and update the cache.
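
Cases 1 through 9 above form a decision tree over hit/miss, read/write, and the configured policies. A behavioral Python sketch of that tree (names and return strings are illustrative only; the actual implementation is Verilog RTL):

```python
def cache_action(hit, is_read, write_allocate, write_back, need_replace):
    """Map a cache access to the action described in slides 16-24."""
    if hit:
        if is_read:
            return "cancel memory op, serve data from cache"      # case 7
        if write_back:
            return "cancel memory op, update cache, mark dirty"   # case 9
        return "complete memory op, then update cache"            # case 8
    # --- miss cases ---
    if is_read:
        if need_replace:
            return "write back victim, then read desired data"    # case 2
        return "complete read, fill cache with fetched data"      # case 1
    # write miss
    if not write_allocate:
        return "complete memory op, cache unchanged"              # case 3
    if not write_back:  # write-through with allocate
        return "complete memory op, place new data in cache"      # case 4
    if need_replace:
        return "write back victim, update cache with new data"    # case 6
    return "cancel memory op, update cache, mark dirty"           # case 5
```

One design point worth noting: only the write-back cases (5, 7, 9) can cancel the external memory operation, which is what makes write-back caches cheaper in memory bandwidth than write-through ones.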
25
Implementation Details (contd)
  1. Read Hit
  2. Write Hit
  3. Read Miss (replacement)
  4. Read Miss (no replacement)
  5. Write Miss (replacement)
  6. Write Miss (no replacement)

26
Results &amp; Conclusion
  • Proof of Concept
  • Rigid Design Parameters
  • R&amp;D Options
  • Architecture Innovation

27
Future Work
  • LRU Implementation
  • Victim Cache Buffer
  • Split Caches
  • Level 2 Cache
  • Pipeline Enrichment
  • Multiprocessor Support

28
Questions