Transcript and Presenter's Notes

Title: Real-Time Spring


1
Real-Time Spring
  • Advanced Systems (Seminar)
  • by Marcel Modes

2
Overview
  • 0. Introduction
  • 1. System Overview
  • 1.1 Target Hardware
  • 1.2 Spring Overview
  • 2. Two problems in detail
  • 2.1 Predictable Memory Management
  • 2.2 Predictable IPC
  • 3. Conclusions

3
0. Introduction
4
Real-Time Spring
Introduction
  • University of Massachusetts Amherst
  • Project Directors
  • Prof. K. Ramamritham
  • Prof. J.A. Stankovic
  • http://www-ccs.cs.umass.edu/rts/spring.html
  • Papers from 1992-1995

5
Examples of Hard Real-Time Systems
Introduction
  • Nuclear power plants
  • Space stations
  • Process control applications

6
Conventional vs. RT-Systems
Introduction
  • Conventional Operating Systems
  • Optimized for average-case performance
  • Many unpredictable situations
  • Ignore problems caused by the underlying hardware
  • Hard Real-Time Systems
  • The system as a whole has to be predictable

7
1.1 Target Hardware
8
Hardware Overview
Target Hardware
  • 68020 CPU, 68851 MMU, 4 MB local memory
  • single-board microcomputers
  • global access to the memory on each processor board
  • NUMA
  • I/O front end emulated by a Unix processor

9
SCRAM Net
Target Hardware
10
Hardware Overview
Target Hardware
11
Hardware Spring Net
Target Hardware
  • System and Application Processors, I/O subsystem, and globally replicated memory on each node

12
Spring Scheduling Co-processor
Target Hardware
  • custom VLSI chip
  • executes the Spring scheduling algorithm
  • performance improvement of at least three orders of magnitude

13
What you should keep in mind!
Target Hardware
  • a cluster-like architecture
  • NUMA
  • special HW design

14
1.2 Spring Overview
15
Resources
Spring Overview
  • abstraction that can be used to represent CPUs, I/O devices, data structures, files, etc.
  • R, W, or RW access protection
  • attached to user or system processes, or both

16
Resources
Spring Overview
  • shared by multiple tasks (concurrently)
  • might be assigned to some task
  • assigned exclusively to one task at a time

→ Resources play an important role in Spring's approach to scheduling
17
Processes
Spring Overview
  • single thread of control in an independent
    address space
  • communicate with synchronous and asynchronous
    messages

18
Memory Management
Spring Overview
  • preallocation
  • → no page fault during execution of a user application
  • memory pinning is an essential underlying design choice (a minimal sketch follows below)
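
A minimal sketch of what this preallocation amounts to, using the POSIX mlockall() call as a modern analogue; Spring itself pins pages at process creation, so the call and the sizes below are only illustrative, not the Spring interface.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    #define APP_MEMORY (192 * 1024)   /* all memory the task will ever use */

    int main(void)
    {
        /* Allocate everything the task needs before real-time work starts. */
        char *heap = malloc(APP_MEMORY);
        if (heap == NULL)
            return 1;

        /* Touch every byte and lock all pages: after this point no page
         * fault can occur, so memory access times stay predictable. */
        memset(heap, 0, APP_MEMORY);
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }

        /* ... predictable real-time work happens here ... */
        free(heap);
        return 0;
    }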

19
Memory Segments
Spring Overview
  • (shared) memory objects
  • exist independently of processes
  • defined at compile time (in SDL)
  • created at system initialization time
  • shared by processes if required

20
Scheduling
Spring Overview
  • guaranteed schedule
  • based on EDF (Earliest Deadline First)
  • non-preemptive
  • processes are broken into tasks (episodes) with pre-scheduled start times and deadlines (a sketch of the idea follows below)
  • also implemented in HW
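
A minimal sketch of non-preemptive, deadline-ordered guarantee checking; the task fields and the admission test are assumptions for illustration only, since the real Spring algorithm also plans resource usage and builds a complete schedule.

    #include <stdlib.h>

    typedef struct {
        long start;     /* pre-scheduled earliest start time    */
        long wcet;      /* worst-case execution time (episode)  */
        long deadline;  /* absolute deadline                    */
    } task_t;

    /* Order tasks by deadline: the core of EDF. */
    static int by_deadline(const void *a, const void *b)
    {
        const task_t *x = a, *y = b;
        return (x->deadline > y->deadline) - (x->deadline < y->deadline);
    }

    /* A task set is guaranteed only if every episode, run to completion
     * (non-preemptively) in deadline order, still meets its deadline. */
    int guarantee(task_t *set, int n, long now)
    {
        long t = now;
        qsort(set, n, sizeof *set, by_deadline);
        for (int i = 0; i < n; i++) {
            if (t < set[i].start)
                t = set[i].start;   /* wait for the planned start time      */
            t += set[i].wcet;       /* episode runs to completion           */
            if (t > set[i].deadline)
                return 0;           /* reject: the set cannot be guaranteed */
        }
        return 1;                   /* accept: schedule is guaranteed       */
    }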

21
System Description Language (SDL)
Spring Overview
  • component of the overall Spring system
  • describes the target architecture
  • combines abstraction with detailed timing analysis
  • aids in linking and loading the system

22
System Description Language (SDL) 2
Spring Overview
  • Describes all information required to
  • Write
  • Compile
  • Load
  • Execute
  • Real-time application and system code.

23
Process Creation and Management
Spring Overview
  • well described (in SDL)
  • → static allocation
  • analysis of timing properties and resource use
  • → predictability
  • activation at process initialization time
  • no equivalent to traditional forking

24
What you should keep in mind!
Spring Overview
  • pages are preallocated
  • non-preemptive scheduling
  • SDL describes the target HW
  • predictability

25
2.1 Predictable Memory Management
26
TLBs
Predictable Memory Management
  • for WCET analysis we would have to assume a TLB miss (and page-fault delay) for every memory reference (very pessimistic)
  • TLB hits must be predictable

27
TLBs
Predictable Memory Management
  • 1.) a physical page is preallocated, at process creation time, for every page used in a program's address space
  • 2.) TLB content is managed explicitly
  • → ensures TLB hits (a sketch follows below)
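
A sketch of what explicit TLB management at process creation could look like; tlb_load_entry() and the mapping table are hypothetical helpers, standing in for the MC68851's entry load/flush operations.

    #define PAGE_SIZE   (8 * 1024)
    #define TLB_ENTRIES 64

    struct mapping {
        unsigned long vaddr;   /* virtual page address        */
        unsigned long paddr;   /* preallocated physical frame */
    };

    /* Hypothetical helper: writes one translation into the MMU's TLB. */
    extern void tlb_load_entry(unsigned long vaddr, unsigned long paddr);

    /* At process creation: load a TLB entry for every page the program
     * will ever touch, so no TLB miss (or page-fault delay) can occur
     * during execution. */
    int pin_address_space(const struct mapping *map, int npages)
    {
        if (npages > TLB_ENTRIES)
            return -1;                       /* too large to guarantee */

        for (int i = 0; i < npages; i++)
            tlb_load_entry(map[i].vaddr, map[i].paddr);

        return 0;
    }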

28
TLBs
Predictable Memory Management
  • Problems
  • A program's size is limited by the number of pages the system can spare for the process (no working-set strategy)
  • The number of pages a process can use is limited
    by the number of TLB entries available.

29
TLBs Short Example
Predictable Memory Management
  • Let's assume 8 KB pages
  • We have 64 TLB entries in the M68851 MMU
  • The system may need about 40 of them!
  • → 24 pages = 192 KB for the application (spelled out below)
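
The same back-of-the-envelope calculation spelled out; the 40 system entries are just the slide's estimate.

    #include <stdio.h>

    int main(void)
    {
        const int page_size_kb = 8;    /* 8 KB pages                        */
        const int tlb_entries  = 64;   /* M68851 address translation cache  */
        const int system_use   = 40;   /* estimate from the slide           */

        int app_pages = tlb_entries - system_use;      /* 24 pages */
        printf("%d pages = %d KB for the application\n",
               app_pages, app_pages * page_size_kb);   /* 192 KB   */
        return 0;
    }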

30
TLBs
Predictable Memory Management
31
Caches
Predictable Memory Management
  • how do caches affect the WCET?
  • the hit rate must be predicted
  • the instruction cache is comparatively easy to predict

32
Desirable HW Features
Predictable Memory Management
  • Segmentation instead of paging
  • The TLB should support flushing all of one process's entries, as well as loading and flushing individual entries

33
What you should keep in mind!
Predictable Memory Management
  • WCET
  • Preallocation, explicit TLB Management
  • TLB limits program size

34
2.2 Predictable IPC
35
Problems with Traditional IPC
Predictable IPC
  • Network HW and SW are designed around improving average-case performance
  • pathological scenarios can occur
  • many transport protocols have no predictable transmission time

36
The Spring IPC System
Predictable IPC
  • The IPC system provides predictable and bounded communication in a distributed hard real-time environment
  • ports are used
  • unidirectional
  • no direct task-to-task communication
  • only one receiver per port

37
Messages
Predictable IPC
  • fixed size
  • strict copy-by-value semantics
  • have deadlines (a sketch of such a port follows below)
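
A minimal sketch of a port carrying fixed-size, copy-by-value messages with deadlines; the struct layout and function names are assumptions for illustration, not the actual Spring kernel interface.

    #define MSG_SIZE   128   /* every message has the same, fixed size      */
    #define PORT_SLOTS 8     /* bounded buffer: communication stays bounded */

    struct message {
        long deadline;              /* each message carries a deadline */
        char data[MSG_SIZE];
    };

    /* A port: unidirectional, with exactly one receiving task. */
    struct port {
        struct message slot[PORT_SLOTS];
        int head, tail, count;
    };

    /* Copy-by-value send: bounded work, no pointers shared between tasks. */
    int port_send(struct port *p, const struct message *m)
    {
        if (p->count == PORT_SLOTS)
            return -1;                       /* full: fail instead of blocking */
        p->slot[p->tail] = *m;               /* strict copy-by-value */
        p->tail = (p->tail + 1) % PORT_SLOTS;
        p->count++;
        return 0;
    }

    int port_receive(struct port *p, struct message *m)
    {
        if (p->count == 0)
            return -1;                       /* empty */
        *m = p->slot[p->head];
        p->head = (p->head + 1) % PORT_SLOTS;
        p->count--;
        return 0;
    }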

38
Asynchronous Communication
Predictable IPC
  • the time needed is added to the sender's and receiver's worst-case execution times (see the formula below)
  • there is no blocking involved
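
In other words (the notation here is assumed, not from the slides): if C_send is the bounded cost of depositing a message at the port, the analysis simply charges it to the sender, and symmetrically for the receiver, so no blocking term appears:

    WCET'_sender   = WCET_sender   + C_send
    WCET'_receiver = WCET_receiver + C_receive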

39
Synchronous Communication
Predictable IPC
  • conventionally implies that a task can be blocked for an indefinite period. But:
  • this is not allowed here!
  • the scheduling of the sending task is coordinated with that of the receiving task
  • → takes advantage of the a priori mapping of processes to tasks

40
Synchronous Communication (2)
Predictable IPC
41
What you should keep in mind!
Predictable IPC
  • we have to guarantee the transmission time
  • no blocking for an indefinite period

42
3. Conclusion
43
Latest research on
Conclusion
  • other multiprocessors
  • RISC architectures
  • distributed system architectures
  • other networks
  • caching issues
  • hardware-software co-design

44
Spring Product List
Conclusion
45
Conclusion
  • very specialized
  • many good concepts cannot be applied
  • SDL is difficult to use

46
End of Presentation
  • Thanks for listening!
  • Any Questions?