1
Adapting RISC Processors For Hard Real-Time
Charles C. Weems
Associate Professor
Department of Computer Science
University of Massachusetts
Amherst, MA 01003-4610
weems@cs.umass.edu
2
Why Real-Time RISC?
  • Wider range of applications than current RISC
  • Embedded control
  • Multimedia
  • Signal processing
  • Ability to use common architecture
  • Common interfaces
  • Common designs
  • Common software tools

3
Current Approach to RT
  • DSP chips
  • Falling behind in performance
  • Weak software tools
  • Hand scheduling
  • Inflexible systems
  • Fixed timing
  • Fixed memory map
  • Difficult to deal with unplanned events
  • Costly to build and modify
  • Changes often require major redesign

4
Future of Real-Time
  • Wider application
  • "Consumer" RT vs. military/aerospace/etc.: need
    to reduce cost
  • Systems that dynamically schedule tasks to meet
    hard and soft deadlines
  • Use virtual memory
  • Increase flexibility
  • Leverage standard software tools vs. hand tuning
  • Reduced development costs
  • Easier to maintain and upgrade

5
Supporting Technology
  • Languages: in development
  • Timing analysis: in development
  • Scheduling paradigms: existing
  • Predictable timing in processors
  • Currently DSPs, because of their simple and
    predictable designs
  • Or, use worst-case prediction of RISC
    performance, but suffer estimates that are an
    order of magnitude worse than necessary (see the
    sketch below)

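To make the last bullet concrete, here is a minimal sketch (not from the presentation) of how a pessimistic worst-case execution time (WCET) bound might be computed over a control-flow graph when every memory access is assumed to miss the cache; the names basic_block and CACHE_MISS_CYCLES are invented for illustration.

/* A minimal sketch, not from the presentation: a static WCET bound
 * computed over a program's basic blocks, charging every memory access
 * the full cache-miss penalty. */
#include <stdio.h>

#define CACHE_MISS_CYCLES 30    /* assumed pessimistic miss penalty */

typedef struct basic_block {
    int instructions;                /* instructions executed in the block   */
    int memory_accesses;             /* loads/stores issued by the block     */
    struct basic_block *succ[2];     /* successors in the control-flow graph */
} basic_block;

/* Pessimistic bound for one block: every access is charged a miss. */
static long block_wcet(const basic_block *b)
{
    return (long)b->instructions +
           (long)b->memory_accesses * CACHE_MISS_CYCLES;
}

/* Worst path through an acyclic region of the CFG. */
static long path_wcet(const basic_block *b)
{
    if (!b)
        return 0;
    long worst = 0;
    for (int i = 0; i < 2; i++) {
        long w = path_wcet(b->succ[i]);
        if (w > worst)
            worst = w;
    }
    return block_wcet(b) + worst;
}

int main(void)
{
    basic_block exit_blk = { 10,  4, { NULL, NULL } };
    basic_block then_blk = { 50, 20, { &exit_blk, NULL } };
    basic_block else_blk = {  8,  2, { &exit_blk, NULL } };
    basic_block entry    = {  5,  1, { &then_blk, &else_blk } };

    printf("pessimistic WCET bound: %ld cycles\n", path_wcet(&entry));
    return 0;
}

Since most accesses actually hit in the cache, a bound computed this way routinely lands far above observed execution times, which is the order-of-magnitude pessimism the slide refers to.
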
6
Want RISC RT because...
  • Follow the technology curve
  • Compatible with existing consumer systems
  • Leverage existing software tools
  • However...
  • Timing variability in current RISCs forces
    pessimistic execution time estimates that
    significantly reduce effective RT performance

7
Sources of Variability
  • Intermediate storage
  • Caches, TLBs, branch target buffers, write
    buffers, etc.
  • Branch prediction
  • Exceptions and interrupts
  • Complex pipelines require a variable amount of
    clean-up
  • Data dependent instruction times
  • Multiply, divide, etc.
  • Complex pipeline interactions
  • Instruction re-ordering
  • Register renaming

8
Variability and Deadlines
  • Variability in execution time complicates
    designing systems to meet hard deadlines
  • Hard deadlines require that tasks meet specified
    timing constraints or a system failure results
  • Example: advanced variable-cycle jet engines can
    explode if correct control inputs are not applied
    every 20-50 ms (see the control-loop sketch below)
  • Example: missing deadlines in real-time video
    leads to unacceptable picture quality, resulting
    in a failed product

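A minimal sketch of the kind of periodic hard-deadline control loop the jet-engine example implies, assuming a POSIX environment; the 20 ms period comes from the slide, while apply_control_inputs() and the failure handling are hypothetical placeholders.

/* Hypothetical periodic hard real-time loop: run the controller every
 * 20 ms and treat any overrun of the absolute release time as a system
 * failure.  POSIX clock_nanosleep() is assumed to be available. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

#define PERIOD_NS (20L * 1000 * 1000)   /* 20 ms control period */

static void apply_control_inputs(void)
{
    /* placeholder for the actual controller work */
}

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

int main(void)
{
    struct timespec release, now;
    clock_gettime(CLOCK_MONOTONIC, &release);

    for (;;) {
        apply_control_inputs();

        /* The next release time doubles as this job's hard deadline. */
        timespec_add_ns(&release, PERIOD_NS);
        clock_gettime(CLOCK_MONOTONIC, &now);
        if (now.tv_sec > release.tv_sec ||
            (now.tv_sec == release.tv_sec && now.tv_nsec > release.tv_nsec)) {
            fprintf(stderr, "deadline miss: treated as system failure\n");
            return 1;
        }

        /* Sleep until the absolute release time of the next job. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &release, NULL);
    }
}

The hard-deadline definition above shows up in the check: an overrun is a failure condition, not a quality degradation, which is what distinguishes this from the soft-deadline video example.
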
9
Reducing Variability
  • Preliminary analysis shows ways of reducing
    variability with minimal impact on performance
  • Processing modes
  • Normal mode: no performance penalty
  • Real-Time mode: performance impact depends on the
    features activated (see the sketch below)
  • Straightforward design enhancements for real-time
  • Low variability arithmetic units
  • Fast interrupt support
  • Zero overhead branching
  • But these address only minor sources of
    variability

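One way the processing-modes idea might be exposed to software is sketched below, assuming a control register with an enable bit per real-time feature; the register name, bit layout, and rt_mode_write() hook are invented, since the presentation names the features but not an encoding.

/* Hypothetical "real-time mode" control register: each bit enables one
 * of the low-variability features named on the slide.  Illustrative only. */
#include <stdint.h>

enum rt_mode_bits {
    RT_MODE_ENABLE      = 1u << 0,  /* 0 = normal mode, no performance penalty */
    RT_LOW_VAR_ARITH    = 1u << 1,  /* low-variability multiply/divide units   */
    RT_FAST_INTERRUPTS  = 1u << 2,  /* fast interrupt entry/exit support       */
    RT_ZERO_OVHD_BRANCH = 1u << 3   /* zero-overhead branching support         */
};

/* Stand-in for a write to a memory-mapped or special-purpose register. */
static volatile uint32_t rt_mode_reg;
static void rt_mode_write(uint32_t v) { rt_mode_reg = v; }

/* Enable only the features a task needs, so the performance impact
 * depends on what is activated. */
static void enter_real_time_mode(void)
{
    rt_mode_write(RT_MODE_ENABLE | RT_LOW_VAR_ARITH | RT_FAST_INTERRUPTS);
}

int main(void)
{
    enter_real_time_mode();
    return 0;
}
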
10
Most Variability
  • Comes from Memory Subsystem
  • Cache (especially instruction cache)
  • Translation Lookaside Buffer
  • Unpredictable
  • Miss rates, especially on context switch
  • Access times (depending on level of hierarchy)
  • Working set disturbances due to exceptions and
    context switching

11
Simple Approaches
  • Partitioning
  • Allocate cache partitions to separate tasks (see
    the sketch below)
  • Miss rate estimates can be more accurate
  • Miss rate goes up because of small partitions
  • Saving State
  • Store cache contents on context switch
  • Avoids unpredictable inter-task cache effects
  • Adds a large amount of overhead

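A small software model of the partitioning idea, assuming a direct-mapped cache whose sets are statically divided between two tasks; the sizes and the cache_access() helper are illustrative, not a description of any particular processor.

/* Hypothetical model of a direct-mapped cache statically partitioned
 * between tasks, so one task's misses never evict another task's lines.
 * Sizes and the two-task split are illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES    32
#define TOTAL_SETS    256                     /* whole cache, direct-mapped */
#define TASKS         2
#define SETS_PER_TASK (TOTAL_SETS / TASKS)    /* each task owns 128 sets    */

static uint32_t tags[TOTAL_SETS];
static bool     valid[TOTAL_SETS];

/* Map an address into the partition owned by `task`. */
static unsigned set_index(unsigned task, uint32_t addr)
{
    unsigned local = (addr / LINE_BYTES) % SETS_PER_TASK;
    return task * SETS_PER_TASK + local;
}

/* Returns true on a hit; on a miss, fills a line inside the task's partition. */
static bool cache_access(unsigned task, uint32_t addr)
{
    unsigned idx = set_index(task, addr);
    uint32_t tag = addr / (LINE_BYTES * SETS_PER_TASK);

    if (valid[idx] && tags[idx] == tag)
        return true;
    valid[idx] = true;
    tags[idx]  = tag;
    return false;
}

int main(void)
{
    unsigned misses[TASKS] = { 0, 0 };

    /* Two tasks interleave over the same addresses: with partitioning,
     * task 1 cannot evict task 0's working set. */
    for (int i = 0; i < 1000; i++)
        for (unsigned t = 0; t < TASKS; t++)
            if (!cache_access(t, (uint32_t)(i % 64) * LINE_BYTES))
                misses[t]++;

    printf("task 0 misses: %u, task 1 misses: %u\n", misses[0], misses[1]);
    return 0;
}

Because a task can only fill its own sets, its miss rate depends only on its own working set, which is what makes the estimate more accurate; the cost is that each task effectively sees a smaller cache, which is why the miss rate can go up.
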
12
New Approach
  • Use higher-level code structure to better manage
    the cache
  • Aggressive prefetch, less demand-driven
  • More effective replacement policy
  • Integrate branch prediction and prefetch
  • Small partitions act like larger cache
  • Unknown hardware cost and overhead
  • Control Flow Graph Cache

13
Control Flow Graph Cache
  • Embed CFG in I-cache
  • Guides instruction sequencing and cache
    management
  • Real-Time impact
  • Timing predictability
  • Reduced WCET

[Block diagram: the CFG and program counter feed cache/TLB management, which drives cache and TLB load and replacement]
14
CFG Cache
  • Combines into one coherent structure
  • Branch Target Buffer
  • Branch Prediction Hardware
  • Cache and TLB Management
  • Instruction lookahead via compiled CFG
  • CFG guides I-cache prefetch and replacement (see
    the sketch below)
  • Replaces branch prediction hardware

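A sketch of what CFG-guided prefetch could look like in software terms, assuming each compiled CFG node records its block's address range and successors; cfg_node and prefetch_line() are invented names, since the presentation does not define the hardware interface.

/* Hypothetical CFG-guided instruction prefetch: fetching a basic block
 * also pulls in the first line of each compile-time successor, so both
 * the taken and not-taken paths are warm before the branch resolves. */
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 32
#define MAX_SUCCS  2

typedef struct cfg_node {
    uint32_t start_addr;                /* first instruction of the block */
    uint32_t length;                    /* block length in bytes          */
    struct cfg_node *succ[MAX_SUCCS];   /* compile-time successors        */
} cfg_node;

/* Stand-in for an I-cache line fill issued ahead of demand. */
static void prefetch_line(uint32_t addr)
{
    printf("prefetch line containing 0x%08x\n", (unsigned)addr);
}

/* When execution enters `blk`, prefetch the rest of the block and the
 * entry line of every successor recorded in the compiled CFG. */
static void cfg_guided_prefetch(const cfg_node *blk)
{
    for (uint32_t a = blk->start_addr;
         a < blk->start_addr + blk->length; a += LINE_BYTES)
        prefetch_line(a);
    for (int i = 0; i < MAX_SUCCS; i++)
        if (blk->succ[i])
            prefetch_line(blk->succ[i]->start_addr);
}

int main(void)
{
    cfg_node join  = { 0x1200, 32, { NULL, NULL } };
    cfg_node taken = { 0x1100, 64, { &join, NULL } };
    cfg_node fall  = { 0x1040, 32, { &join, NULL } };
    cfg_node entry = { 0x1000, 64, { &fall, &taken } };

    cfg_guided_prefetch(&entry);
    return 0;
}

Because the lookahead comes from the compiled CFG rather than a branch target buffer, the same structure can also drive replacement decisions and avoid aliasing of branch targets, as the following slides note.
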
15
Instruction Sequences
[Chart: distribution of valid lookahead sequences from one test program (MPEG)]
16
CFG Cache Benefits
  • Extended intelligent prefetching
  • Hide address translation and cache miss times
  • Allows emulation of full associativity in a
    direct-mapped cache
  • Facilitates dynamic partitioning of cache
  • No aliasing of branch targets or predictions
  • Increased predictability for Hard Real-Time

17
Outstanding Questions
  • Research in progress
  • Predictability of wider range of applications
  • Hardware cost and CFG download overhead
  • CFG evaluation time
  • Compaction of CFG form
  • Effectiveness with knotty CFGs