1
Operating Systems Lecture 6
  • Memory Management
  • Swapping, hole selection strategy
  • VM, Paging

2
Memory Management
  • Ideally programmers want memory that is
  • large
  • fast
  • non-volatile (does not get erased when the power goes off)
  • Memory hierarchy
  • a small amount of fast, expensive cache memory
  • some medium-speed, medium-priced main memory
  • gigabytes of slow, cheap disk storage
  • Memory manager handles the memory hierarchy

3
The Memory Hierarchy
  • Registers: access time ~1 nsec, capacity < 1 KB
  • On-chip cache: ~2 nsec, ~1 MB
  • Main memory: ~10 nsec, 64-512 MB
  • Magnetic (hard) disk: ~10 msec, 5-50 GB
  • Magnetic tape: ~100 sec, 20-100 GB
  • Other types of memory: ROM, EEPROM, Flash RAM
4
Basic Memory Management
  • An operating system with one user process
  • [Figure: three simple ways of organizing memory with an OS and one user process: OS in RAM, OS in ROM (Palm computers), or device drivers in ROM (BIOS) with the OS in RAM (MS-DOS)]

5
Multiprogramming with Fixed Partitions
  • Separate input queues for each partition
  • Single input queue

6
Problems with Fixed Partitions
  • Separate queues: memory is not used efficiently if there are many processes in one class and few in another
  • Single queue: small processes can use up a big partition, so again memory is not used efficiently

7
Relocation and Protection
  • Cannot be sure where program will be loaded in
    memory
  • address locations of variables and code routines cannot be absolute
  • Relocation: the mechanism for fixing up memory references when a program is loaded
  • Protection: one process should not be able to access another process's memory partition

8
Swapping
  • Fixed partitions are too inflexible and waste memory
  • Next step up in complexity: dynamic partitions
  • Allocate as much memory as needed by each process
  • Swap processes out to disk to allow more
    multi-programming

9
Swapping - example
  • Memory allocation changes as
  • processes come into memory
  • leave memory
  • Shaded regions are unused memory

10
How much memory to allocate?
  • (a) Allocating space for a growing data segment
  • (b) Allocating space for a growing stack and a growing data segment

11
Issues in Swapping
  • When a process terminates, should we compact memory?
  • Move all processes above the hole down in memory.
  • Can be very slow: with 256 MB of memory and 4 bytes copied every 40 ns, compacting memory takes about 2.7 sec (see the sketch below)
  • Almost never used
  • Result: the OS needs to keep track of holes.
  • Problem: avoiding memory fragmentation.
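A back-of-the-envelope check of that 2.7 sec figure, as a minimal C sketch using only the numbers from the slide:

```c
#include <stdio.h>

int main(void) {
    const double mem_bytes    = 256.0 * 1024 * 1024;     /* 256 MB of memory       */
    const double bytes_per_op = 4.0;                      /* copy 4 bytes at a time */
    const double ns_per_op    = 40.0;                     /* 40 ns per copy         */
    double seconds = (mem_bytes / bytes_per_op) * ns_per_op * 1e-9;
    printf("compaction time: about %.1f s\n", seconds);   /* prints about 2.7 s     */
    return 0;
}
```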

12
Swapping Data Structures: Bit Maps
  • Part of memory with 5 processes, 3 holes
  • tick marks show allocation units
  • shaded regions are free
  • Corresponding bit map

13
Properties of Bit-Map Swapping
  • Allocation unit of k bits: the bitmap uses M/k bits for a memory of M bits. Could be quite large.
  • E.g., with a 32-bit allocation unit, the bitmap uses about 1/33 of memory
  • Searching the bitmap for a hole (a run of free allocation units) is slow (see the sketch below)
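A minimal C sketch of that search, under the usual assumption that bit i of the map is 1 when allocation unit i is in use:

```c
#include <stddef.h>

static int unit_in_use(const unsigned char *map, size_t i) {
    return (map[i / 8] >> (i % 8)) & 1;
}

/* Linear scan for 'need' consecutive free units; returns the index of the
 * first unit of the run, or -1. This O(n) scan is why the search is slow. */
long find_free_run(const unsigned char *map, size_t units, size_t need) {
    size_t run = 0;
    for (size_t i = 0; i < units; i++) {
        run = unit_in_use(map, i) ? 0 : run + 1;
        if (run == need)
            return (long)(i - need + 1);
    }
    return -1;
}
```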

14
Swapping Data Structures: Linked Lists
  • Variation 1: keep a list of allocated and free blocks (P = process, H = hole), as sketched below
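A minimal C sketch of one node in such a list; the field names are illustrative assumptions, not the lecture's own code:

```c
#include <stddef.h>

/* One node per block of memory, kept in address order. */
typedef enum { PROCESS, HOLE } seg_kind;

struct segment {
    seg_kind kind;            /* P (process) or H (hole)       */
    size_t   start;           /* starting allocation unit      */
    size_t   length;          /* length in allocation units    */
    struct segment *next;     /* next block in address order   */
};
```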

15
What Happens When a Process Terminates?
Merge neighboring holes to create a bigger hole
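A hedged sketch of that merge, reusing the segment type above; only the right-hand neighbour is shown, and the left-hand one would be handled symmetrically (a doubly linked list makes that lookup cheap):

```c
#include <stdlib.h>

/* After the terminated process's segment has been marked as a HOLE,
 * absorb a free right-hand neighbour so one bigger hole remains.
 * Assumes list nodes were allocated with malloc. */
void coalesce_right(struct segment *seg) {
    struct segment *nxt = seg->next;
    if (seg->kind == HOLE && nxt != NULL && nxt->kind == HOLE) {
        seg->length += nxt->length;   /* grow this hole                */
        seg->next    = nxt->next;     /* unlink the absorbed neighbour */
        free(nxt);
    }
}
```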
16
Hole Selection Strategy
  • We have a list of holes of sizes 10, 20, 10, 50, 5. A process needs size 4. Which hole should it use?
  • First fit: pick the first hole that's big enough (use the hole of size 10); see the sketch below
  • Break up the hole into a used piece of size 4 and a hole of size 10 - 4 = 6
  • Simple and fast
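A minimal first-fit sketch over the segment list sketched earlier (a hypothetical helper, not the lecture's code):

```c
/* Return the first hole that is big enough; the caller then splits it
 * into a used piece and a smaller hole (10 - 4 = 6 in the example). */
struct segment *first_fit(struct segment *list, size_t need) {
    for (struct segment *s = list; s != NULL; s = s->next)
        if (s->kind == HOLE && s->length >= need)
            return s;
    return NULL;   /* no hole is big enough */
}
```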

17
Best Fit
  • For a process of size s, use the smallest hole with size(hole) ≥ s (see the sketch below).
  • In the example, use the last hole, of size 5.
  • Problems
  • Slower (needs to search the whole list)
  • Creates many tiny holes that fragment memory
  • Can be made as fast as first fit if the blocks are kept sorted by size (but then termination processing is slower)
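For comparison, a best-fit sketch over the same list; the full scan is what makes it slower than first fit on an unsorted list:

```c
/* Remember the smallest hole that still fits and return it. */
struct segment *best_fit(struct segment *list, size_t need) {
    struct segment *best = NULL;
    for (struct segment *s = list; s != NULL; s = s->next)
        if (s->kind == HOLE && s->length >= need &&
            (best == NULL || s->length < best->length))
            best = s;
    return best;   /* NULL if nothing fits */
}
```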

18
Other Options
  • Worst fit: use the biggest hole that fits.
  • Simulations show that this is not very good
  • Quick fit: maintain separate free lists for common block sizes (see the sketch below).
  • Improves performance of the find-hole operation
  • More complicated termination processing
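A tiny sketch of the quick-fit bookkeeping, reusing the segment type from earlier; the particular size classes are arbitrary examples, not values from the lecture:

```c
/* One free list per common hole size, plus one list for everything else.
 * Finding a hole of a common size is then a single list-head lookup. */
#define N_CLASSES 4
static const size_t quick_sizes[N_CLASSES] = { 4, 8, 16, 32 };
static struct segment *quick_lists[N_CLASSES]; /* holes of exactly quick_sizes[i] */
static struct segment *other_holes;            /* all remaining holes             */
```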

19
Virtual Memory Main Idea
  • Processes use a virtual address space (e.g., 0-0xFFFFFFFF for 32-bit addresses).
  • Every process has its own address space
  • The address space can be larger than physical
    memory.

20
Memory Mapping
  • Only part of the virtual address space is mapped
    to physical memory at any time.
  • Parts of a process's memory contents are on disk.
  • Hardware and OS collaborate to move memory contents to and from disk.

21
Advantages of Virtual Memory
  • No need for software relocation: process code uses virtual addresses.
  • Solves the protection requirement: it is impossible for a process to refer to another process's memory.
  • Per-process memory mapping (page table)
  • Only OS can modify the mapping

22
Hardware support: the MMU (Memory Management Unit)
23
Practice
  • 16-bit memory addresses
  • Virtual address space size ___ KB
  • Physical memory ___ KB (___ bit) (assuming half
    of the virtual memory size)
  • Virtual address space split into 4KB pages.
  • ____ pages
  • Physical memory is split into 4KB page frames.
  • ____ frames

24
Paging
  • The relation between virtual addresses and physical memory addresses is given by the page table
  • The OS maintains the table
  • There is one page table per process
  • The MMU uses the table

25
Practice (cont)
  • CPU executes the command mov rx, 5
  • MMU gets the address 5.
  • Virtual address 5 is in page ___ (addresses
    __-____)
  • Page __ is mapped to frame __ (physical addresses
    ____-_____).
  • MMU puts the address ____ (______) on the bus.

26
Page Faults
  • What if the CPU issues mov rx, 32780?
  • That page is unmapped (not in any frame)
  • MMU causes a page fault (interrupt to CPU)
  • OS handles the page fault
  • Evict some page from a frame
  • Copy the requested page from disk into the frame
  • Re-execute instruction

27
How the MMU Works
  • Splits a 32-bit virtual address into
  • a k-bit page number: the top k bits (the MSBs)
  • a (32-k)-bit offset
  • Uses the page number as an index into the page table and appends the offset (see the sketch below).
  • The page table has 2^k entries.
  • Each page is of size 2^(32-k) bytes.
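A minimal C sketch of that translation, assuming 4 KB pages and 32-bit addresses (so k = 20); page_table[] is a stand-in for the per-process table, and the present/absent bit and the page-fault path are omitted:

```c
#include <stdint.h>

#define K           20                   /* page-number bits (assumed)      */
#define OFFSET_BITS (32 - K)             /* 12-bit offset for 4 KB pages    */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1u)

extern uint32_t page_table[1u << K];     /* frame number per virtual page   */

uint32_t translate(uint32_t vaddr) {
    uint32_t page   = vaddr >> OFFSET_BITS;   /* top k bits: page number     */
    uint32_t offset = vaddr &  OFFSET_MASK;   /* low 32-k bits: page offset  */
    uint32_t frame  = page_table[page];       /* OS-maintained mapping       */
    return (frame << OFFSET_BITS) | offset;   /* physical address on the bus */
}
```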

28
[Figure: the internal operation of the MMU, shown with a 4-bit page number]
29
Issues with Virtual Memory
  • The page table can be very large
  • 32-bit addresses, 4 KB pages (____-bit offsets) → over ________ pages
  • Each process needs its own page table
  • Page lookup has to be very fast
  • An instruction completes in 4 ns → the page table lookup should take around 1 ns
  • The page fault rate has to be very low.