Unix does typical memory management
1
Introduction
  • Unix does typical memory management
  • This lecture:
    • Brief revision
    • Relate the memory model to process structure
    • Look at the monitoring tool vmstat
  • Most of our description relates to what C programs see

2
Revision: Memory Hierarchy
[Diagram: the memory hierarchy, from smallest and fastest to largest and slowest]
  • Data and code reside on the disk, and must be staged into the CPU
  • One task of the operating system is to manage the staging process so as to keep the CPU as busy as possible

3
Revision: Virtual Memory
  • For each process:
    • Only the current working set is loaded
    • A full copy of the program exists on a swap disk
    • Pages are read in for, or stolen from, a process as required
  • The programmer sees a very large memory space, but only a fraction of the program is ever resident
  • The CPU issues virtual addresses, which get mapped to real addresses
    • Translation is usually fast, but may take several levels of table lookup
    • Sometimes it generates a page fault

4
Revision: Paging
  • The system keeps a free list of available pages
  • Pages are added to the free list:
    • When processes finish
    • If the free list gets too small, the system steals pages
  • Pages are allocated to processes:
    • At startup (described later)
    • When the process generates a page fault
  • Pages must usually be loaded from disk
    • Sometimes the requested item is still on the free list, so it is re-attached (called a reclaim; this saves time and resources)

5
Revision: Swapping
  • Idle processes are swapped out, and swapped in when usage resumes
    • The working set may be trimmed before swapping, so the swap image may not contain all the program
  • The amount of idle time before swapout is selectable
    • Too low, and there's too much swapping
    • Too high, and memory use is less efficient
  • Historically, swapping was the only form of memory management

6
Revision: Segmented Virtual Memory
  • Process space is divided into segments
    • May be read-write or read-only
  • Unix uses paged segmented virtual memory: a page table for each segment
  • Segmenting gives better control of execution
  • Benefits: read-only segments mean fewer pages written out, and it is much harder for a program to be corrupted
  • Costs: slower address translations, larger minimum size

7
What Unix Does
  • Unix allocates main memory four ways:
    • Kernel
    • Process images
    • I/O buffers
    • Free list
  • The allocations vary as the system load changes

8
A Complication: Sharing
  • If two or more users are running the same program (e.g., vi, csh), the code (or text) for the program is shared
    • But each user has her own data areas
  • Also, Linux and Solaris use dynamic linking
    • Library functions are only loaded when called
    • If two programs call the same library functions, the code is shared
  • These features reduce load on main memory and disk
    • But they complicate process structure, and the process runs slightly slower

9
Copy on Write
  • Pages that may be modified can be shared, until one of the processes writes to the page
    • A copy is then made for the modifying process
  • Copy on write reduces the total number of pages required at any time

10
Process Virtual Memory Structure
[Diagram: the segments of a process's virtual address space, in the order shown on the slide]
  • Stack (can grow)
  • Space for heap (if required)
  • Shared library data
  • Shared library text (there may be several shared-library segments)
  • Hole
  • Uninitialized globals and statics
  • Initialized globals and statics
  • Base text
  • User area (kernel data)
11
Fork
  • fork creates a new process running the same program
  • The new image is almost identical to the parent's
  • In fact, fork:
    • Creates a new set of kernel data structures for the new process
    • Creates new page and segment maps
    • Shares the pages of the program image, with modifiable pages marked copy on write
  • Since fork is normally followed by exec, this approach minimises new page creation

12
exec
  • exec starts a new program in an existing process
  • The kernel data structures are mostly unchanged
  • The new program is set up, and new page and segment tables are created
  • The new program starts running with the existing file descriptors, etc.
  • Only the initial page of code may actually be in memory
    • A series of page faults then builds the program's initial working set

13
Kernel Memory Management
  • Most kernel code is permanently resident in memory
    • (How could the page handler page itself back into memory?)
  • The kernel sets up and destroys very many small data structures
    • Pages and segments are too big and clumsy for these
  • A piece of real memory is set aside, and small pieces are allocated from it as required
    • The total amount allocated can be changed as necessary
    • The mechanism is fast and efficient

14
Memory and I/O
  • Some of main memory is used as a cache for file I/O
  • File blocks are read into buffers in memory; characters or lines are passed to a program as requested
  • The I/O system usually reads ahead to speed up processing
  • Writes are usually delayed
  • Buffer blocks are left unchanged as long as possible, since another process may request the data
  • Terminal I/O is usually handled by a different process, and uses smaller buffers

15
Monitoring Memory
  • Performance analysis tools show memory usage
  • vmstat shows virtual memory operations, plus disk and CPU usage
    • Useful for displaying computer status on-screen
  • sar records memory usage (and many other things) for later analysis
  • For details on the meanings of the fields, see the manual pages

16
Assessment
  • Memory management is a classical example of one trend in computing: if it saves some resources, it doesn't matter how complex it is
  • The basic ideas of process and I/O become much more complicated
    • This allows the computer to do more work
    • The complexity is hidden from the casual user
  • Main memory is treated as a scarce resource, even though it's not as scarce as it used to be