Title: STORAGE MANAGEMENT
1. STORAGE MANAGEMENT
2. Overview
- Background
- Demand Paging
- Process Creation
- Page Replacement
- Allocation of Frames
- Thrashing
- Operating System Examples
- Summary
3. Background
- Virtual memory is the separation of user logical memory from physical memory
- Only part of the program needs to be in memory for execution
- Logical address space can therefore be much larger than physical address space
- Allows address spaces to be shared by several processes
- Allows for more efficient process creation
- Virtual memory can be implemented via
- Demand paging
- Demand segmentation
4. Virtual Memory That is Larger Than Physical Memory
5. Demand Paging
- Similar to a paging system with swapping
- Processes residing on secondary storage are swapped into memory to execute
- Rather than swapping in an entire process, a page is swapped in only when it is needed
6. Transfer of a Paged Memory to Contiguous Disk Space
7. Basic Concepts
- Pager brings a page into memory only when it is needed
- Less I/O needed
- Less memory needed
- Faster response
- More users
- Need hardware support to determine which pages are in memory and which are on the disk
- The valid-invalid bit scheme is used for this purpose
- Valid indicates the page is both legal and in memory
- Invalid indicates the page is either not legal or not currently in memory
8. Valid-Invalid Bit
- With each page table entry a valid-invalid bit is associated (1 ⇒ in-memory, 0 ⇒ not-in-memory)
- Initially, the valid-invalid bit is set to 0 on all entries
- During address translation, if the valid-invalid bit in a page table entry is 0 ⇒ page fault
- Example: page table snapshot
9. Page Table When Some Pages Are Not in Main Memory
10. Page Fault
- Access to a page marked invalid causes a page-fault trap
- Hardware will trap to the OS
- Procedure for handling page faults:
- Check an internal table to determine whether the reference was valid or invalid
- If the reference was invalid, terminate the process; if it was valid, page it in
- Find a free frame and schedule a disk operation to read the desired page into the free frame
- Modify the internal table to indicate the page is now in memory
- Restart the instruction that was interrupted by the illegal address trap
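The steps above can be sketched as a small simulation. All structures here (the dictionary-based page table, the free-frame list, and the `disk` map) are hypothetical stand-ins for kernel data; a real handler runs inside the kernel with hardware support:

```python
# Sketch of the page-fault handling steps above (hypothetical structures).

def handle_page_fault(page, page_table, free_frames, disk):
    # 1. Check an internal table: is the reference legal?
    if page not in disk:
        raise MemoryError("invalid reference: terminate process")
    # 2. Find a free frame (this sketch assumes one is available;
    #    page replacement is covered later).
    frame = free_frames.pop()
    # 3. "Schedule a disk operation" to read the page into the frame.
    contents = disk[page]
    # 4. Mark the page as valid and in memory.
    page_table[page] = {"frame": frame, "valid": True}
    # 5. The faulting instruction would now be restarted.
    return contents

def access(page, page_table, free_frames, disk):
    entry = page_table.get(page)
    if entry is None or not entry["valid"]:
        return handle_page_fault(page, page_table, free_frames, disk)
    return disk[page]   # stands in for reading the resident frame
```

The first `access` to a page faults and consumes a free frame; later accesses to the same page hit the table and fault no more.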
11. Steps in Handling a Page Fault
12. Pure Demand Paging
- Never bring a page into memory until it is required
- Process starts executing with no pages in memory
- When the OS schedules the process's first instruction, the process immediately faults for the page containing it
- After this page is brought into memory, the process continues to execute, faulting as necessary until every page it needs is in memory
- Hardware to support demand paging:
- Page table: stores the valid-invalid bit
- Secondary memory: holds the pages that are not present in main memory
13. Performance of Demand Paging
- Demand paging can have a significant effect on performance (effective access time)
- With no page faults, the effective access time equals the memory access time; if a page fault occurs, the faulting page must first be read from disk
- Page fault rate p: 0 ≤ p ≤ 1.0
- if p = 0, there are no page faults
- if p = 1, every reference is a fault
- Effective Access Time (EAT):
- EAT = (1 - p) x memory access time + p x (page fault overhead + swap page out + swap page in + restart overhead)
14. Demand Paging Example
- Servicing the page fault and restarting the process could take 1 to 100 usec
- The page-switch time could be close to 24 msec:
- average latency of 8 msec, seek time of 15 msec, transfer time of 1 msec
- Take an average page-fault time of 25 msec and a memory access time of 100 nsec
- EAT = (1 - p) x 100 + p x (25 msec)
  = (1 - p) x 100 + p x 25,000,000
  = 100 + 24,999,900 x p (in nanoseconds)
- The EAT is directly proportional to the page-fault rate
- If one access out of 1,000 causes a page fault, the EAT is about 25 usec: the computer is slowed down by a factor of roughly 250 because of demand paging
- Want degradation < 10 percent
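The arithmetic above can be checked directly (a sketch using the example's numbers; all times in nanoseconds):

```python
# Effective access time for the demand-paging example (times in nanoseconds).
MEMORY_ACCESS = 100            # 100 nsec memory access time
PAGE_FAULT_TIME = 25_000_000   # 25 msec average page-fault service time

def eat(p):
    """Effective access time for page-fault rate p."""
    return (1 - p) * MEMORY_ACCESS + p * PAGE_FAULT_TIME

# One fault per 1,000 accesses: about 25 usec per access,
# i.e. a slowdown of roughly 250x over a plain memory access.
print(eat(0.001))   # ~25,100 ns
```

Because `eat` is linear in p, halving the fault rate halves the paging overhead, which is why keeping p tiny matters so much.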
15. Process Creation
- A process can start quickly by demand paging in just the page containing its first instruction
- Paging and virtual memory can provide other benefits during process creation:
- Copy-on-Write
- Memory-Mapped Files
16. Copy-on-Write
- Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory
- If either process modifies a shared page, only then is the page copied
- COW allows more efficient process creation, as only modified pages are copied
- Free pages are allocated from a pool of zeroed-out pages
- The OS typically allocates these pages using a technique known as zero-fill-on-demand
17. Memory-Mapped Files
- Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory
- A file is initially read using demand paging: a page-sized portion of the file is read from the file system into a physical page; subsequent reads and writes to the file are treated as ordinary memory accesses
- Simplifies file access by treating file I/O through memory rather than read() and write() system calls
- Also allows several processes to map the same file, allowing the pages in memory to be shared
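Memory-mapped file I/O can be demonstrated with Python's mmap module (a minimal sketch; the file name and contents are arbitrary):

```python
import mmap
import os
import tempfile

# Create a small file to map; path and contents are arbitrary.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello, paging")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # map the whole file
        first = mm[0:5]      # a read looks like an ordinary memory access
        mm[0:5] = b"HELLO"   # a write updates the file through the mapping

with open(path, "rb") as f:
    data = f.read()          # reflects the write made through memory
```

Note that no read() or write() call ever touches the mapped region: the OS demand-pages the file in on first access and writes the dirty page back.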
18. Memory Mapped Files
19. Page Replacement
- Memory over-allocation:
- While a user process is executing, a page fault occurs
- The OS finds the desired page on disk but discovers that there are no free frames on the free-frame list
- The operating system has several options:
- Terminate the user process
- Swap out a process, freeing all its frames
- Page replacement
- Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement
20. Need For Page Replacement
21. Basic Page Replacement
- If no frame is free, find one that is not currently being used and free it
- Write its contents to swap space and change the page table to indicate the page is no longer in memory
- Use the freed frame to hold the page of the process that faulted
- Modified page-fault service routine:
- Find the location of the desired page on disk
- Find a free frame: if there is a free frame, use it; if there is no free frame, use a page-replacement algorithm to select a victim frame
- Read the desired page into the (newly) free frame; update the page and frame tables
- Restart the process
22. Page Replacement
23. Page Replacement Algorithms
- Use a modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk
- Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on a smaller physical memory
- Want the lowest page-fault rate
- Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string
- Reference strings can be generated artificially (by a random number generator) or by tracing a given system and recording the address of each memory reference
24. Graph of Page Faults Versus the Number of Frames
25. First-In-First-Out (FIFO) Algorithm
- A FIFO replacement algorithm associates with each page the time the page was brought into memory; the oldest page is replaced
- The FIFO page-replacement algorithm is easy to understand and program
- FIFO page-replacement disadvantages:
- Performance is not always good
- A bad replacement choice increases the page-fault rate and slows process execution
- Belady's anomaly: for some reference strings, the page-fault rate may increase as the number of allocated frames increases
26. FIFO Page Replacement
- References (7, 0, 1) cause page faults as they fill the three empty frames
- Reference (2) replaces page 7
- Reference (0) is already in memory: no page fault
- Reference (3) replaces page 0
- The process continues on: 15 faults altogether
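The trace above can be reproduced with a short simulation (a Python sketch, not from the slides, using the reference string shown in the figure):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames = deque()          # oldest resident page at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue          # hit: FIFO order is unchanged
        faults += 1
        if len(frames) == nframes:
            frames.popleft()  # evict the oldest page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15 faults, matching the slide
```

The same function exhibits Belady's anomaly: on the string [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] it gives 9 faults with 3 frames but 10 faults with 4.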
27. Belady's Anomaly
28. Optimal Page Replacement
- Has the lowest page-fault rate of all algorithms and never suffers from Belady's anomaly
- The algorithm is simply:
- Replace the page that will not be used for the longest period of time
- Guarantees the lowest possible page-fault rate for a fixed number of frames
29. Optimal Page Replacement
- Reference (2) replaces page 7 because 7 will not be used until reference 18, whereas page 0 will be used at reference 5 and page 1 at reference 14
- Reference (3) replaces page 1, as page 1 will be the last of the three resident pages to be referenced again
- Only 9 faults altogether
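The optimal policy can be simulated by looking ahead in the reference string (a sketch; a real system cannot do this, since it requires knowing future references):

```python
def optimal_faults(refs, nframes):
    """Count page faults under optimal (farthest-future-use) replacement."""
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            # Evict the resident page whose next use is farthest away
            # (a page never used again counts as farthest of all).
            victim = max(frames,
                         key=lambda q: future.index(q) if q in future else len(future))
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))   # 9 faults, matching the slide
```

Although unimplementable in practice, this gives the lower bound against which FIFO's 15 faults and other algorithms are judged.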
30. Least Recently Used (LRU) Algorithm
- Associates with each page the time of that page's last use
- When a page must be replaced, LRU chooses the page that has not been used for the longest period of time
- This strategy is the optimal page-replacement algorithm looking backward in time, rather than forward
31. LRU Page Replacement
- The first 5 faults are the same as with optimal replacement
- Reference (4) replaces page 2
- Reference (2) replaces page 3, since of the resident pages (0, 3, 4), page 3 is the least recently used
- 12 faults altogether
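LRU on the same reference string can be simulated with an ordered dictionary (a Python sketch, not from the slides):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU replacement."""
    frames = OrderedDict()    # least recently used page at the front
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)    # a hit refreshes the page's recency
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)  # evict the LRU page
        frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 faults
```

At 12 faults, LRU lands between FIFO's 15 and the optimal algorithm's 9 on this string, which is typical of its behavior.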
32. LRU Algorithm (Cont.)
- How to determine an order for the frames defined by the time of last use:
- Counter implementation:
- Every page-table entry has a counter; every time the page is referenced through this entry, copy the clock into the counter
- When a page needs to be replaced, look at the counters to determine which page to replace
- Stack implementation: keep a stack of page numbers in doubly linked form
- When a page is referenced:
- move it to the top
- requires 6 pointers to be changed
- No search for a replacement is needed: the victim is at the bottom of the stack
33. Use of a Stack to Record the Most Recent Page References
34. LRU Approximation Algorithms
- Reference bit:
- With each page associate a bit, initially 0
- When the page is referenced, the bit is set to 1
- Replace a page whose bit is 0 (if one exists); we do not know the order of use, however
- Second chance:
- Needs a reference bit
- Clock replacement
- If the page to be replaced (in clock order) has reference bit = 1, then:
- set the reference bit to 0
- leave the page in memory
- move on to the next page (in clock order), subject to the same rules
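The second-chance policy can be sketched as a small simulation (a Python sketch, not from the slides; it assumes the common convention that a newly loaded page starts with its reference bit set):

```python
def clock_faults(refs, nframes):
    """Count page faults under the second-chance (clock) algorithm."""
    frames = [None] * nframes   # circular buffer of resident pages
    refbit = [0] * nframes
    hand = 0
    faults = 0
    for page in refs:
        if page in frames:
            refbit[frames.index(page)] = 1   # give the page a second chance
            continue
        faults += 1
        # Sweep the clock: a page with reference bit 1 has its bit cleared
        # and is skipped; the first page found with bit 0 is the victim.
        while refbit[hand] == 1:
            refbit[hand] = 0
            hand = (hand + 1) % nframes
        frames[hand] = page
        refbit[hand] = 1   # assumed convention: loaded pages start with bit 1
        hand = (hand + 1) % nframes
    return faults
```

On [1, 2, 3, 4, 1] with 3 frames, every reference faults: when page 4 arrives, all reference bits are 1, so the hand clears each bit in turn, circles back, and evicts page 1.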
35. Second-Chance (Clock) Page-Replacement Algorithm
36. Counting-Based Page Replacement
- Keep a counter of the number of references that have been made to each page
- Least Frequently Used (LFU) algorithm: replaces the page with the smallest count
- Most Frequently Used (MFU) algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used
37. Allocation of Frames
- How do we allocate the fixed amount of free memory among the various processes?
- Each process needs a minimum number of pages
- Example: the IBM 370 needs 6 pages to handle the SS MOVE instruction:
- the instruction is 6 bytes and might span 2 pages
- 2 pages to handle the from address
- 2 pages to handle the to address
- Two major allocation schemes:
- Equal allocation
- Proportional allocation
38. Equal Allocation
- Split m frames among n processes to give everyone an equal share, m/n frames
- If there are 93 frames and 5 processes, each process will get 18 frames; the 3 leftover frames can be kept as a free-frame pool
39. Proportional Allocation
- Allocate available memory to each process according to its size:
- with s_i = size of process p_i, S = Σ s_i, and m = total number of frames, allocate a_i = (s_i / S) x m frames to process p_i
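A sketch of the computation, using assumed example values (62 free frames; two processes of 10 and 127 pages); frames lost to rounding down would stay in the free pool:

```python
def proportional_allocation(sizes, m):
    """a_i = (s_i / S) * m, rounded down; leftovers stay in the free pool."""
    S = sum(sizes)
    return [int(s / S * m) for s in sizes]

# Assumed example: m = 62 frames, processes of 10 and 127 pages.
print(proportional_allocation([10, 127], 62))   # [4, 57]
```

The small process gets 4 frames and the large one 57, rather than the 31 each that equal allocation would give.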
40. Global vs. Local Allocation
- Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another
- Local replacement: each process selects from only its own set of allocated frames
41. Thrashing
- If a process does not have enough pages, the page-fault rate is very high. This leads to:
- low CPU utilization
- the operating system thinking that it needs to increase the degree of multiprogramming
- another process being added to the system
- A process is thrashing if it is spending more time paging than executing
42. Causes of Thrashing
- If CPU utilization is too low, a new process may be introduced into the system
- Suppose an executing process needs more frames: it starts faulting and taking frames away from other processes; those processes need those pages, however, so they start faulting too
- As processes wait for the paging device, CPU utilization decreases
- The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming as a result
43. Thrashing (Cont.)
- CPU utilization is plotted against the degree of multiprogramming
- As multiprogramming increases, CPU utilization also increases, until a maximum is reached
- If multiprogramming is increased even further, thrashing sets in and CPU utilization drops sharply
44. Locality in a Memory-Reference Pattern
45. Working-Set Model
- Δ ≡ working-set window ≡ a fixed number of page references. Example: 10,000 instructions
- WSS_i (working set of process P_i) = total number of pages referenced in the most recent Δ (varies in time)
- if Δ is too small, it will not encompass the entire locality
- if Δ is too large, it will encompass several localities
- if Δ = ∞, it will encompass the entire program
- D = Σ WSS_i ≡ total demand for frames
- if D > m ⇒ thrashing
- Policy: if D > m, then suspend one of the processes
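WSS_i can be computed directly from a reference string (a sketch; the string and the small Δ below are illustrative only):

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent delta references, ending at time t."""
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 4, 4, 4, 3, 4, 4]
print(working_set(refs, 4, 5))   # {1, 2, 3, 4}
print(working_set(refs, 9, 5))   # {3, 4}: the locality has shrunk
```

Summing len(working_set(...)) over all processes gives D; under the policy above, if D exceeds the available frames m, one process is suspended.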
46. Working-Set Model
47. Keeping Track of the Working Set
- Approximate with an interval timer plus a reference bit
- Example: Δ = 10,000
- Timer interrupts after every 5,000 time units
- Keep in memory 2 history bits for each page
- Whenever the timer interrupts, copy each reference bit into the history bits and set the values of all reference bits to 0
- If one of the bits in memory = 1 ⇒ the page is in the working set
- Why is this not completely accurate? (We cannot tell exactly when within the 5,000-unit interval a reference occurred)
- Improvement: 10 history bits and an interrupt every 1,000 time units
48. Page-Fault Frequency Scheme
- Establish an acceptable page-fault rate
- If the actual rate is too low, the process loses a frame
- If the actual rate is too high, the process gains a frame
49. Other Considerations
- Selection of a paging system:
- Prepaging:
- Attempt to prevent the high level of initial paging
- Bring into memory at one time all the pages that will be needed
- Page size selection, which involves trade-offs among:
- fragmentation
- table size
- I/O overhead
- locality
50. Other Considerations (Cont.)
- TLB Reach:
- The amount of memory accessible from the TLB; simply the number of entries multiplied by the page size
- TLB Reach = (TLB Size) x (Page Size)
- Ideally, the working set of each process is stored in the TLB; otherwise there is a high degree of page faults
- Increasing the TLB reach:
- Increase the page size. This may lead to an increase in fragmentation, as not all applications require a large page size
- Provide multiple page sizes. This allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation
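The formula is simple arithmetic; the entry count and page sizes below are assumed example values, not from the slides:

```python
def tlb_reach(entries, page_size):
    """TLB Reach = (TLB Size) x (Page Size), in bytes."""
    return entries * page_size

# A hypothetical 64-entry TLB:
print(tlb_reach(64, 4 * 1024))         # 262144 bytes = 256 KB with 4 KB pages
print(tlb_reach(64, 2 * 1024 * 1024))  # 128 MB with 2 MB large pages
```

The jump from 256 KB to 128 MB of reach with the same 64 entries is why larger or multiple page sizes are the usual lever, rather than growing the TLB itself.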
51. Other Considerations (Cont.)
- Inverted Page Table:
- Reduces the amount of physical memory that is needed to track virtual-to-physical address translations
- Creates a table that has one entry per physical page, indexed by <process-id, page-number>
- Program structure:
- int[][] A = new int[128][128];
- Each row is stored in one page
- Program 1:
  for (j = 0; j < A.length; j++)
      for (i = 0; i < A.length; i++)
          A[i][j] = 0;
  128 x 128 = 16,384 page faults
- Program 2:
  for (i = 0; i < A.length; i++)
      for (j = 0; j < A.length; j++)
          A[i][j] = 0;
  128 page faults
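The two loop orders can be compared with a small simulation (a sketch that models one row per page and a process holding a single frame, i.e. fewer frames than rows, as the worst case above assumes):

```python
def count_faults(accesses):
    """Count page faults, modeling one 128-int row per page and a single
    resident frame: a fault occurs whenever the accessed row changes."""
    resident_row = None
    faults = 0
    for i, j in accesses:        # i is the row index = page number
        if i != resident_row:
            faults += 1
            resident_row = i
    return faults

N = 128
program1 = [(i, j) for j in range(N) for i in range(N)]  # column by column
program2 = [(i, j) for i in range(N) for j in range(N)]  # row by row
print(count_faults(program1))   # 16384 = 128 x 128 faults
print(count_faults(program2))   # 128 faults
```

Program 1 changes rows, and therefore pages, on every access; Program 2 stays on one page for 128 consecutive accesses, matching the counts on the slide.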
52. Other Considerations (Cont.)
- I/O Interlock: pages must sometimes be locked into memory
- Consider I/O: pages that are being used to copy a file from a device must be locked to prevent them from being selected for eviction by a page-replacement algorithm