1
Real-Time Memory Management
  • Dynamic memory management of any kind in
    real-time systems, though usually necessary, is
    detrimental to real-time performance and
    schedulability analysis.
  • Stacks are typically used in foreground/background
    systems, while the task-control block model is
    used in commercial, generic executives.
  • Techniques for managing stacks and task-control
    blocks are discussed.

2
Process Stack Management
  • In a multitasking system, the context for each
    task needs to be saved and restored in order to
    switch processes. This can be done by using one
    or more run-time stacks or the task-control
    block model.
  • Run-time stacks work best for interrupt-only
    systems and foreground/background systems.
  • The task-control block model works best with
    full-featured real-time operating systems.

3
Task-control Block Model
  • In the task-control block model, a list of
    task-control blocks is kept. This list can be
    either fixed or dynamic.
  • In the fixed case:
  • N task-control blocks are allocated at system
    generation time, all in the dormant state.
  • As tasks are created, their task-control blocks
    enter the ready state.
  • Prioritization or time slicing will then move the
    task to the execute state.
  • If a task is to be deleted, its task-control
    block is simply placed in the dormant state.
  • No real-time memory management is necessary.
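For concreteness, the fixed case might look like the following C sketch; the state names, fields, and the value of N_TASKS are illustrative assumptions, not taken from the slides.

    /* Fixed task-control block list: a sketch with hypothetical names. */
    #include <stddef.h>

    #define N_TASKS 8                     /* allocated at system generation */

    typedef enum { DORMANT, READY, EXECUTING } task_state;

    typedef struct {
        task_state state;
        int        priority;
        void     (*entry)(void);          /* task entry point */
    } tcb;

    static tcb tcbs[N_TASKS];             /* all blocks start out dormant */

    /* "Creating" a task claims a dormant block; nothing is allocated. */
    tcb *task_create(void (*entry)(void), int priority)
    {
        for (size_t i = 0; i < N_TASKS; i++) {
            if (tcbs[i].state == DORMANT) {
                tcbs[i].state    = READY;
                tcbs[i].priority = priority;
                tcbs[i].entry    = entry;
                return &tcbs[i];
            }
        }
        return NULL;                      /* all N blocks are in use */
    }

    /* "Deleting" a task simply returns its block to the dormant state. */
    void task_delete(tcb *t) { t->state = DORMANT; }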

4
Task-control Block Model (contd.)
  • In the dynamic case:
  • Task-control blocks are added to a linked list
    or some other dynamic data structure as tasks are
    created.
  • The tasks are in the suspended state upon
    creation and enter the ready state via an
    operating system call or event.
  • The tasks enter the execute state owing to
    priority or time-slicing.
  • When a task is deleted, its task-control block is
    removed from the linked list, and its memory
    allocation is returned to the unoccupied or
    available status.
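The dynamic case, again under assumed names, keeps the task-control blocks on a singly linked list and returns their memory when a task is deleted:

    /* Dynamic task-control block list: a sketch with hypothetical names. */
    #include <stdlib.h>

    typedef enum { SUSPENDED, READY, EXECUTING } task_state;

    typedef struct tcb {
        task_state  state;
        struct tcb *next;
    } tcb;

    static tcb *tcb_list = NULL;

    tcb *task_create(void)
    {
        tcb *t = malloc(sizeof *t);
        if (t == NULL) return NULL;
        t->state = SUSPENDED;             /* suspended upon creation */
        t->next  = tcb_list;              /* link into the list */
        tcb_list = t;
        return t;
    }

    void task_delete(tcb *t)
    {
        tcb **p = &tcb_list;
        while (*p != NULL && *p != t)     /* find the block in the list */
            p = &(*p)->next;
        if (*p == NULL) return;           /* not on the list */
        *p = t->next;                     /* unlink it */
        free(t);                          /* memory returned to available status */
    }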

5
Managing the Stack
  • If a run-time stack is used, two simple
    routines, save and restore, are necessary for
    saving and restoring the context.
  • The save routine is called by an interrupt
    handler to save the current context of the
    machine into a stack area. This call should be
    made immediately after interrupts have been
    disabled to prevent disaster.
  • The restore routine should be called just before
    interrupts are enabled and before returning from
    the interrupt handler.
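In rough outline, the two routines might look like the C sketch below; the context structure and stack depth are assumptions, and a real implementation would copy the actual machine registers, typically in assembly.

    /* Save/restore sketch: the context layout is hypothetical. */
    typedef struct { unsigned long regs[16], pc, psw; } context;

    #define STACK_DEPTH 8
    static context stack[STACK_DEPTH];
    static int top = 0;                 /* next free slot */

    /* Called by the interrupt handler immediately after disabling
     * interrupts. */
    void save(const context *cur)
    {
        if (top < STACK_DEPTH)
            stack[top++] = *cur;        /* push the current context */
    }

    /* Called just before interrupts are re-enabled and the handler
     * returns. */
    void restore(context *cur)
    {
        if (top > 0)
            *cur = stack[--top];        /* pop the most recent context */
    }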

6
Managing the Stack (contd.)
  • A run-time stack cannot be used in a round-robin
    system because of the first-in/first-out nature
    of the scheduling. In this case a ring-buffer or
    circular queue can be used to save context. The
    context is saved to the tail of the list and
    restored from the head.
  • The maximum amount of space needed for the
    run-time stack needs to be known a priori.
    Ideally, provision for at least one more task
    than anticipated should be allocated to the stack
    to allow for spurious interrupts and time
    overloading.
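A minimal sketch of the ring-buffer variant, with hypothetical sizes (one slot beyond the anticipated task count, per the guideline above): context is saved at the tail and restored from the head.

    typedef struct { unsigned long regs[16], pc; } context;

    #define RING_SIZE 9                  /* anticipated tasks + 1 */
    static context ring[RING_SIZE];
    static int head = 0, tail = 0;

    void save(const context *cur)        /* enqueue at the tail */
    {
        ring[tail] = *cur;
        tail = (tail + 1) % RING_SIZE;
    }

    void restore(context *cur)           /* dequeue from the head */
    {
        *cur = ring[head];
        head = (head + 1) % RING_SIZE;
    }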

7
Multiple Stack Arrangements
  • Often a single run-time stack is inadequate to
    manage several processes. A multiple stack scheme
    uses a single run-time stack and several
    application stacks. Using multiple stacks in
    embedded real-time systems has several
    advantages.
  • It permits tasks to interrupt themselves, thus
    allowing for handling transient overload
    conditions or for detecting spurious interrupts.
  • It supports reentrancy and recursion.
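One possible arrangement, sketched below with hypothetical sizes: a single run-time stack for the kernel and interrupt handlers, plus a private stack (and saved stack pointer) for each task, so tasks can be interrupted, re-enter, or recurse independently.

    #define N_TASKS    4
    #define STACK_SIZE 1024

    static unsigned char run_time_stack[STACK_SIZE];   /* kernel/interrupts */
    static unsigned char task_stacks[N_TASKS][STACK_SIZE];
    static unsigned char *task_sp[N_TASKS];            /* saved stack pointers */

    void init_stacks(void)
    {
        for (int i = 0; i < N_TASKS; i++)
            task_sp[i] = &task_stacks[i][STACK_SIZE];  /* stacks grow downward */
    }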

8
Dynamic Allocation
  • Swapping
  • The simplest scheme that allows the operating
    system to allocate memory to two processes
    simultaneously is swapping.
  • The operating system is always memory resident,
    and one process can co-reside in the memory
    space not required by the OS, called the user
    space.
  • When a second process needs to run, the first
    process is suspended and then swapped, along with
    its context, to a secondary storage device,
    usually a disk. The second process is then loaded
    into the user space and initiated by the
    dispatcher.
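In outline, swapping might be implemented as below; the names, the user-space size, and the use of plain file I/O for the secondary storage device are illustrative assumptions.

    #include <stdio.h>

    #define USER_SPACE_SIZE (64 * 1024)
    static unsigned char user_space[USER_SPACE_SIZE];  /* single user space */

    /* Write the suspended process's image (its saved context is assumed
     * to be part of the image) out to secondary storage. */
    int swap_out(const char *image_file)
    {
        FILE *f = fopen(image_file, "wb");
        if (f == NULL) return -1;
        size_t n = fwrite(user_space, 1, USER_SPACE_SIZE, f);
        fclose(f);
        return n == USER_SPACE_SIZE ? 0 : -1;
    }

    /* Load the second process into the user space; the dispatcher then
     * initiates it. */
    int swap_in(const char *image_file)
    {
        FILE *f = fopen(image_file, "rb");
        if (f == NULL) return -1;
        size_t n = fread(user_space, 1, USER_SPACE_SIZE, f);
        fclose(f);
        return n == USER_SPACE_SIZE ? 0 : -1;
    }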

9
Dynamic Allocation (contd.)
  • Overlays
  • A technique that allows a single program to be
    larger than the allowable user space is called
    overlaying.
  • The program is broken up into dependent code and
    data sections called overlays, which can fit into
    available memory.
  • This technique has negative real-time
    implications because the overlays must be swapped
    from secondary storage devices.

10
Dynamic Allocation (contd.)
  • MFT (Multiprogramming with Fixed number of
    Tasks)
  • Allows more than one process to be
    memory-resident at any one time by dividing the
    user space into a number of fixed-size
    partitions.
  • This scheme is useful where the number of tasks
    to be executed is known and fixed.
  • Partition swapping can occur when a task is
    preempted.
  • Tasks must reside in contiguous partitions.
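A sketch of MFT-style allocation under assumed names and sizes: each fixed partition is either free or owned by exactly one task, so a small task still consumes a whole partition.

    #define N_PARTITIONS   4
    #define PARTITION_SIZE (2 * 1024 * 1024)     /* e.g. 2 MB partitions */

    static int partition_owner[N_PARTITIONS] = { -1, -1, -1, -1 };  /* -1 = free */

    /* A task smaller than PARTITION_SIZE wastes the remainder of its
     * partition: internal fragmentation. */
    int partition_alloc(int task_id)
    {
        for (int i = 0; i < N_PARTITIONS; i++) {
            if (partition_owner[i] == -1) {
                partition_owner[i] = task_id;
                return i;                        /* partition index */
            }
        }
        return -1;                               /* no free partition */
    }

    void partition_free(int i) { partition_owner[i] = -1; }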

11
Dynamic Allocation (contd.)
  • MFT
  • External fragmentation of memory can occur. This
    type of fragmentation causes problems when memory
    requests cannot be satisfied because a contiguous
    block of the requested size does not exist, even
    though enough memory is actually available.
  • Internal fragmentation occurs when, for example,
    a process requires 1 MB of memory when only 2 MB
    partitions are available.
  • Both internal and external fragmentation hamper
    efficient memory usage and ultimately degrade
    real-time performance because of the overhead
    associated with their correction.

12
Dynamic Allocation (contd.)
  • MVT (Multiprogramming with a Variable number of
    Tasks)
  • Memory is allocated in amounts that are not
    fixed, but rather are determined by the
    requirements of the process.
  • More appropriate when the number of real-time
    tasks is unknown or varies.
  • Memory utilization is better than MFT because
    little or no internal fragmentation can occur.

13
Dynamic Allocation (contd.)
  • MVT
  • External fragmentation occurs because of the
    dynamic nature of memory allocation and
    de-allocation, and because memory must be
    allocated to a process contiguously.
  • In MVT, however, external fragmentation can be
    mitigated by a process of compressing fragmented
    memory so that it is no longer fragmented. This
    technique is called compaction.
  • Compaction is a CPU intensive process and is not
    recommended in hard real-time systems.
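A compaction sketch, under assumed bookkeeping (a block table ordered by offset): every allocated block is slid down toward the base of the user space so that the free space becomes one contiguous region.

    #include <string.h>

    #define HEAP_SIZE 4096
    static unsigned char heap[HEAP_SIZE];

    typedef struct { size_t offset, size; int in_use; } block;

    #define N_BLOCKS 16
    static block blocks[N_BLOCKS];         /* assumed ordered by offset */

    void compact(void)
    {
        size_t next = 0;                   /* next packed position */
        for (int i = 0; i < N_BLOCKS; i++) {
            if (!blocks[i].in_use) continue;
            if (blocks[i].offset != next)  /* shuffle the block down */
                memmove(&heap[next], &heap[blocks[i].offset], blocks[i].size);
            blocks[i].offset = next;       /* owners must learn the new address */
            next += blocks[i].size;
        }
        /* heap[next..HEAP_SIZE) is now one contiguous free region */
    }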

14
Dynamic Allocation (contd.)
  • MVT
  • If compaction must be performed, it should be
    done in the background, and it is imperative that
    interrupts be disabled while memory is being
    shuffled.
  • MVT is useful when the number of real-time tasks
    is unknown or can vary.
  • Context-switching overhead is much higher than
    MFT, and thus it is not always appropriate for
    embedded real-time systems.

15
Dynamic Allocation (contd.)
  • Demand Paging
  • Program segments are permitted to be loaded in
    noncontiguous memory as they are requested in
    fixed-size chunks called pages or page frames.
  • This scheme helps to eliminate external
    fragmentation.
  • Program code that is not held in main memory is
    swapped to secondary storage.
  • When a memory reference is made to a location
    within a page not loaded in main memory, a page
    fault exception is raised. The interrupt handler
    for this exception checks for a free page slot in
    memory. If none is found, a page frame must be
    selected and swapped to disk, a process called
    page stealing.
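That page-fault path might be sketched as follows; choose_victim, write_to_disk, and load_page are assumed helper routines standing in for the replacement policy and the disk I/O.

    #define N_FRAMES 64

    static int frame_used[N_FRAMES];         /* 1 if the frame holds a page */

    extern int  choose_victim(void);         /* replacement policy, e.g. LRU */
    extern void write_to_disk(int frame);    /* swap the victim page out */
    extern void load_page(int page, int frame);

    void page_fault_handler(int faulting_page)
    {
        int frame = -1;
        for (int i = 0; i < N_FRAMES; i++)   /* look for a free page slot */
            if (!frame_used[i]) { frame = i; break; }

        if (frame == -1) {                   /* none free: steal a frame */
            frame = choose_victim();
            write_to_disk(frame);
        }
        frame_used[frame] = 1;
        load_page(faulting_page, frame);     /* bring in the requested page */
    }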

16
Dynamic Allocation (contd.)
  • Demand Paging
  • Paging is advantageous because it allows
    nonconsecutive references to pages via a page
    table.
  • Paging can lead to problems, including high
    paging activity (called thrashing) and internal
    fragmentation.
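For illustration, translation through a page table can be sketched as below (the page size and table length are assumptions); consecutive virtual pages may map to nonconsecutive physical frames.

    #define PAGE_SIZE 4096
    #define N_PAGES   256

    static unsigned long page_table[N_PAGES];   /* page number -> frame number */

    unsigned long translate(unsigned long vaddr)
    {
        unsigned long page   = vaddr / PAGE_SIZE;
        unsigned long offset = vaddr % PAGE_SIZE;
        return page_table[page] * PAGE_SIZE + offset;
    }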

17
Replacement Algorithms for page swapping
  • First-in/first-out (FIFO) is the easiest to
    implement, and its overhead is only the
    recording of the loading sequence of the pages.
  • The best non-predictive algorithm is the least
    recently used (LRU) algorithm. The LRU method
    simply states that the least recently used page
    will be swapped out if a page fault occurs.
  • The overhead for the LRU scheme rests in
    recording the access sequence to all pages, which
    can be quite substantial.
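The difference in bookkeeping overhead can be sketched as follows, with hypothetical structures: FIFO stamps a page once when it is loaded, while LRU must stamp every access.

    #define N_FRAMES 64

    static unsigned long load_order[N_FRAMES];   /* FIFO: set once per load */
    static unsigned long last_access[N_FRAMES];  /* LRU: set on EVERY access */
    static unsigned long now;                    /* logical clock */

    void on_load(int frame)   { load_order[frame]  = ++now; }
    void on_access(int frame) { last_access[frame] = ++now; }

    int fifo_victim(void)                        /* oldest loaded page */
    {
        int v = 0;
        for (int i = 1; i < N_FRAMES; i++)
            if (load_order[i] < load_order[v]) v = i;
        return v;
    }

    int lru_victim(void)                         /* least recently used page */
    {
        int v = 0;
        for (int i = 1; i < N_FRAMES; i++)
            if (last_access[i] < last_access[v]) v = i;
        return v;
    }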