Title: Operating Systems
1Operating Systems
- hchgao_at_xidian.edu.cn
2Contents
- 1. Introduction
- 2. Processes and Threads
- 3. Deadlocks
- 4. Memory Management
- 5. Input/Output
- 6. File Systems
- 8. Multiple Processor Systems
- 9. Security
3Chapter 4 Memory Management
- 4.1 Basic memory management
- 4.2 Swapping
- 4.3 Virtual memory
- 4.4 Page replacement algorithms
- 4.5 Design issues for paging systems
- 4.6 Implementation issues
- 4.7 Segmentation
4Memory Management
- Ideally programmers want memory that is
- large
- fast
- non-volatile
- Memory hierarchy
- small amount of fast, expensive memory cache
- some medium-speed, medium price main memory
- gigabytes of slow, cheap disk storage
- Memory manager handles the memory hierarchy
5Basic Memory Management
- Memory management systems can be divided into two classes:
- those that move processes back and forth between main memory and disk during execution (swapping and paging)
- those that do not
- Programs expand to fill the memory available to hold them
6Monoprogramming without Swapping or Paging
- Three simple ways of organizing memory
- an operating system with one user process
- (a) Formerly used on mainframes and minicomputers, but is rarely used any more.
- (b) Used on some palmtop computers and embedded systems.
- (c) Used by early PCs (MS-DOS), where the portion of the system in the ROM is called the BIOS.
7Multiprogramming with Fixed Partitions
- Fixed memory partitions
- separate input queues for each partition
- single input queue
8Modeling Multiprogramming
- CPU utilization as a function of the number of processes in memory (the degree of multiprogramming)
9Modeling Multiprogramming
- Suppose a computer has 32 MB of memory, with the OS taking up 16 MB and each user program taking up 4 MB.
- These sizes allow 4 programs to be in memory at once. With an 80% average I/O wait, we have a CPU utilization of 1 - 0.8^4 ≈ 60%.
- Adding another 16 MB of memory allows 8 programs, raising the CPU utilization to 1 - 0.8^8 ≈ 83%.
- Adding yet another 16 MB of memory allows 12 programs, but only increases CPU utilization to 1 - 0.8^12 ≈ 93% (a further 16 MB would raise it only to about 97%).
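The figures above come from the simple probabilistic model behind the previous slide: if each process spends a fraction p of its time waiting for I/O, the CPU is idle only when all n resident processes wait at once, so utilization = 1 - p^n. A minimal sketch that reproduces the slide's numbers (the function name is only for illustration):

    #include <math.h>
    #include <stdio.h>

    /* CPU utilization = 1 - p^n: the CPU is idle only when all n
     * processes are simultaneously waiting for I/O (probability p^n). */
    static double cpu_utilization(double io_wait, int nprocs)
    {
        return 1.0 - pow(io_wait, nprocs);
    }

    int main(void)
    {
        int degrees[] = { 4, 8, 12, 16 };
        for (int i = 0; i < 4; i++)
            printf("n = %2d -> utilization = %.0f%%\n",
                   degrees[i], 100.0 * cpu_utilization(0.8, degrees[i]));
        return 0;   /* prints 59%, 83%, 93%, 97% (the slide rounds 59% up to 60%) */
    }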
10Analysis of Multiprogramming System Performance
- (a) Arrival and work requirements of 4 jobs
- (b) CPU utilization for 1-4 jobs with 80% I/O wait
- (c) Sequence of events as jobs arrive and finish
- Note: the numbers show the amount of CPU time each job gets in each interval
11Relocation and Protection
- Cannot be sure where a program will be loaded in memory
- address locations of variables and code routines cannot be absolute
- must keep a program out of other processes' partitions
- Use base and limit values (see the sketch below)
- address locations are added to the base value to map to a physical address
- address locations larger than the limit value are an error
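A minimal sketch of the check the hardware performs on every memory reference under base-and-limit relocation (the function and variable names are illustrative, not from the slides):

    #include <stdint.h>
    #include <stdbool.h>

    /* Relocate a virtual address with the base register and check it
     * against the limit register; an out-of-range address traps. */
    static bool translate(uint32_t vaddr, uint32_t base, uint32_t limit,
                          uint32_t *paddr)
    {
        if (vaddr >= limit)
            return false;          /* protection fault: beyond the partition */
        *paddr = base + vaddr;     /* relocation: add the base register */
        return true;
    }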
12Base and Limit register
13Chapter 4 Memory Management
- 4.1 Basic memory management
- 4.2 Swapping
- 4.3 Virtual memory
- 4.4 Page replacement algorithms
- 4.5 Design issues for paging systems
- 4.6 Implementation issues
- 4.7 Segmentation
14Swapping
- Sometimes there is not enough main memory to hold all the currently active processes.
- Two general approaches to memory management:
- Swapping: bring in each process in its entirety, run it for a while, then put it back on the disk.
- Virtual memory: allow programs to run even when they are only partially in main memory.
15Swapping (1)
- Memory allocation changes as
- processes come into memory
- leave memory
- Shaded regions are unused memory. The addresses of A must be relocated, since it is now at a different location.
- Holes; memory compaction.
- Difference between fixed and variable partitions.
16Swapping (2)
- Allocating space for a growing data segment
- Allocating space for a growing stack and a growing data segment
17Memory Management with Bit Maps
- (a) Part of memory with 5 processes, 3 holes
- tick marks show allocation units
- shaded regions are free
- (b) Corresponding bit map. (Shortcoming: searching a bitmap for a run of a given length is a slow operation; see the sketch below.)
- (c) Same information as a list.
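A sketch of why that search is slow: to allocate k units the memory manager must scan the bitmap for a run of k consecutive 0 bits. The code below stores one allocation unit per byte purely to keep the illustration short; a real bitmap packs 8 units per byte, which makes the scan even more awkward.

    /* Find the first run of 'count' consecutive free allocation units
     * (0 entries) in a bitmap of 'nbits' units; returns -1 if none exists. */
    static int find_free_run(const unsigned char *bitmap, int nbits, int count)
    {
        int run = 0;
        for (int i = 0; i < nbits; i++) {
            run = bitmap[i] ? 0 : run + 1;   /* 1 = allocated, 0 = free */
            if (run == count)
                return i - count + 1;        /* start index of the free run */
        }
        return -1;                           /* no hole large enough */
    }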
18Memory Management with Linked Lists
- Four neighbor combinations for the terminating
process X
19Memory Management with Linked Lists
- Several algorithms can be used to allocate memory for a newly created process (see the first-fit sketch after this list):
- First fit.
- Next fit.
- Best fit.
- Worst fit.
- Quick fit.
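A minimal first-fit sketch over a singly linked list of holes (the node layout and names are assumptions for illustration, not the slides' data structure):

    #include <stddef.h>

    struct hole {                 /* one free region in the linked list */
        size_t start, length;
        struct hole *next;
    };

    /* First fit: take the first hole big enough, carving the request off
     * its front. Returns the start address, or (size_t)-1 on failure. */
    static size_t first_fit(struct hole *list, size_t request)
    {
        for (struct hole *h = list; h != NULL; h = h->next) {
            if (h->length >= request) {
                size_t addr = h->start;
                h->start  += request;   /* shrink the hole from the front */
                h->length -= request;   /* a hole shrunk to 0 would normally
                                           be unlinked; omitted for brevity */
                return addr;
            }
        }
        return (size_t)-1;
    }

Next fit differs only in remembering where the previous search stopped; best fit scans the whole list for the smallest hole that is adequate.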
20Operating Systems
21Chapter 4 Memory Management
- 4.1 Basic memory management
- 4.2 Swapping
- 4.3 Virtual memory
- 4.4 Page replacement algorithms
- 4.5 Design issues for paging systems
- 4.6 Implementation issues
- 4.7 Segmentation
22Virtual Memory
- The basic idea behind virtual memory is that the
combined size of the program, data, and stack may
exceed the amount of physical memory available
for it. - The operating system keeps those parts of the
program currently in use in main memory, and the
rest on the disk.
23Paging (1)
MMU maps the virtual addresses onto the physical
memory addresses
- The position and function of the MMU
24Paging (2)
- The virtual address space is divided up into units called pages.
- The corresponding units in the physical memory are called page frames.
- MOVE REG, 0 → MOVE REG, 8192
- MOVE REG, 8192 → MOVE REG, 24576
- MOVE REG, 20500 (20480 + 20) → MOVE REG, 12308
- MOVE REG, 32780 → page fault
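A sketch that reproduces the example translations above with 4-KB pages. Only the page-to-frame mappings exercised by these examples are filled in; every other page is marked not present for brevity.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096   /* 4-KB pages, as in the examples above */

    /* Page-to-frame mappings needed for the slide's examples; -1 means the
     * page is not present, so referencing it causes a page fault. */
    static const int frame_of_page[16] = {
        2, -1, 6, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1
    };

    static void translate(uint32_t vaddr)
    {
        uint32_t page = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;
        if (frame_of_page[page] < 0)
            printf("MOVE REG, %u -> page fault\n", vaddr);
        else
            printf("MOVE REG, %u -> MOVE REG, %u\n", vaddr,
                   (uint32_t)frame_of_page[page] * PAGE_SIZE + offset);
    }

    int main(void)
    {
        translate(0);      /* -> 8192  (page 0 maps to frame 2) */
        translate(8192);   /* -> 24576 (page 2 maps to frame 6) */
        translate(20500);  /* -> 12308 (page 5 maps to frame 3, offset 20) */
        translate(32780);  /* -> page fault (page 8 is not present) */
        return 0;
    }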
25Address Translation Architecture
26Page Tables (1)
- Internal operation of the MMU with 16 4-KB pages
- MOVE REG, 8196 → MOVE REG, 24580
27Page Tables (2)
- The purpose of the page table is to map virtual pages onto page frames.
- Two major issues must be faced:
- The page table can be extremely large (with 32-bit addresses and 4-KB pages, 2^32 / 2^12 = 2^20 ≈ 1M entries).
- The mapping must be fast.
- The simplest design is a single page table consisting of an array of fast hardware registers, with one entry for each virtual page, indexed by virtual page number.
- Advantage: it is straightforward and requires no memory references during mapping.
- Disadvantage: it is potentially expensive and hurts performance.
- At the other extreme, the page table can be entirely in main memory. All the hardware needs then is a single register that points to the start of the page table.
- Disadvantage: it requires one or more memory references to read page table entries during the execution of each instruction.
28Multilevel Page Tables
Second-level page tables
- 32-bit address with 2 page table fields
- Two-level page tables (2^32 = 4-GB address space, 4 MB covered by each second-level table, 4-KB pages)
- 0x00403004 → PT1 = 1, PT2 = 3, Offset = 4
- PT1 = 1 → the 4M-8M range of the address space
- PT2 = 3 → the 12K-16K range within that 4-MB chunk → addresses 4206592-4210687
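A sketch of how the three fields are extracted from a 32-bit virtual address with a 10/10/12 bit split, reproducing the 0x00403004 example above (the variable names are illustrative):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t vaddr  = 0x00403004;
        uint32_t pt1    = vaddr >> 22;            /* top 10 bits: top-level index */
        uint32_t pt2    = (vaddr >> 12) & 0x3FF;  /* next 10 bits: second-level index */
        uint32_t offset = vaddr & 0xFFF;          /* low 12 bits: offset in the page */

        /* Prints PT1 = 1, PT2 = 3, Offset = 4 for this address. */
        printf("PT1 = %u, PT2 = %u, Offset = %u\n", pt1, pt2, offset);
        return 0;
    }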
29Page Tables (4)
- Typical page table entry
- Page frame number: the most important field, the goal of the page mapping
- Present/absent: if this bit is 1, the entry is valid and can be used
- Protection: tells what kinds of access are permitted (R/W/X)
- Modified: usable in reclaiming a page frame; if dirty, the page must be written back to disk
- Referenced: set whenever the page is referenced; helps the OS choose a page to evict when a page fault occurs
- Caching disabled: useful for pages that map onto device registers rather than memory
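A sketch of such an entry as a C bit-field, roughly matching the fields listed above. The field widths and layout are assumptions for a 32-bit entry; real MMUs fix their own bit positions.

    #include <stdint.h>

    /* Illustrative page table entry; not any specific hardware format. */
    struct pte {
        uint32_t frame      : 20;  /* page frame number */
        uint32_t present    : 1;   /* 1 = valid; 0 = page fault on use */
        uint32_t protection : 3;   /* read/write/execute permission bits */
        uint32_t modified   : 1;   /* dirty: must be written back before reuse */
        uint32_t referenced : 1;   /* set on every reference; used by eviction */
        uint32_t nocache    : 1;   /* caching disabled, e.g. device registers */
        uint32_t unused     : 5;
    };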
30Page Tables (exercise)
- In a paged memory management system, the address is composed of a page number and the offset within the page. In the address structure shown in the following figure:
- A. page size is 1K, 64 pages at most
- B.page size is 2K, 32 pages at most
- C.page size is 4K, 16 pages at most
- D.page size is 8K, 8 pages at most
31Translation Lookaside Buffers
- TLB (Translation Lookaside Buffer)
- Most programs tend to make a large number of references to a small number of pages.
- Solution: equip computers with a small hardware device for mapping virtual addresses to physical addresses without going through the page table. This device is called a TLB.
32TLBs
33Inverted Page Tables
For 64-bit computers, things change. There
is one entry per page frame in real memory,
rather than one entry per page of virtual address
space.
- Comparison of a traditional page table with an inverted page table
- Downside: virtual-to-physical translation becomes much harder.
- Solution: a TLB plus a hash table.
34Chapter 4 Memory Management
- 4.1 Basic memory management
- 4.2 Swapping
- 4.3 Virtual memory
- 4.4 Page replacement algorithms
- 4.5 Design issues for paging systems
- 4.6 Implementation issues
- 4.7 Segmentation
35Page Replacement Algorithms
- Page fault forces choice
- which page must be removed
- make room for incoming page
- Modified page must first be saved
- unmodified just overwritten
- Better not to choose an often used page
- will probably need to be brought back in soon
- page replacement occurs in other areas of
computer design as well - Memory cache
- Web server
36Optimal Page Replacement Algorithm
- Replace the page needed at the farthest point in the future
- Optimal but unrealizable
- Estimate by
- logging page use on previous runs of process
- although this is impractical
37Not Recently Used (NRU)
- Each page has Reference bit, Modified bit
- bits are set when page is referenced, modified
- Pages are classified
- 0. not referenced, not modified
- 1. not referenced, modified
- 2. referenced, not modified
- 3. referenced, modified
- NRU removes a page at random from the lowest-numbered non-empty class
38FIFO Page Replacement Algorithm
- Maintain a linked list of all pages
- in order they came into memory
- Page at beginning of list replaced
- Disadvantage
- Page being replaced may be often used
39Second Chance Page Replacement Algorithm
- Operation of second chance
- pages sorted in FIFO order
- looks for an old page that has not been referenced in the previous clock interval
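A sketch of second chance as a FIFO queue whose head gets a reprieve if its R bit is set (the data layout and names are assumptions for illustration):

    #include <stdbool.h>

    struct frame {
        int  page;          /* virtual page currently held */
        bool referenced;    /* R bit, set by hardware on each access */
    };

    /* Second chance: inspect the oldest frame; if R is set, clear it and
     * move the frame to the back of the FIFO (treat it as newly loaded);
     * otherwise evict it. 'fifo' holds frame numbers, oldest first. */
    static int choose_victim(struct frame *frames, int *fifo, int nframes)
    {
        for (;;) {
            int f = fifo[0];
            if (!frames[f].referenced)
                return f;                        /* old and unreferenced: evict */
            frames[f].referenced = false;        /* give it a second chance */
            for (int i = 1; i < nframes; i++)    /* rotate the FIFO: move f to back */
                fifo[i - 1] = fifo[i];
            fifo[nframes - 1] = f;
        }
    }

If every frame has its R bit set, one pass around the queue clears them all and the algorithm degenerates into plain FIFO, which is exactly the intended behavior.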
40The Clock Page Replacement Algorithm
41Operating Systems
42Least Recently Used (LRU)
- Assume pages used recently will be used again soon
- throw out the page that has been unused for the longest time
- Must keep a linked list of pages
- most recently used at the front, least recently used at the rear
- update this list on every memory reference!
- Alternatively, keep a counter in each page table entry
- choose the page with the lowest counter value
- periodically zero the counter
43Simulating LRU in Hardware
- LRU using a matrix; pages referenced in the order 0, 1, 2, 3, 2, 1, 0, 3, 2, 3
- LRU page after each reference: 3, 3, 3, 0, 0, 0, 3, 2, 1, 1
44Simulating LRU in Software
- The aging algorithm simulates LRU in software
- Note: 6 pages for 5 clock ticks, (a)-(e)
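A sketch of the aging update performed at every clock tick: each page's counter is shifted right one bit and the R bit is added at the left; on a page fault the page with the smallest counter is evicted. The names and the 8-bit counter width are illustrative.

    #include <stdint.h>
    #include <stdbool.h>

    #define NPAGES 6   /* matches the 6 pages in the figure */

    /* At each clock tick: age every counter, folding in the R bit at the
     * left, then clear the R bits for the next tick. */
    static void age_tick(uint8_t counter[NPAGES], bool referenced[NPAGES])
    {
        for (int p = 0; p < NPAGES; p++) {
            counter[p] = (uint8_t)(counter[p] >> 1);
            if (referenced[p])
                counter[p] |= 0x80;          /* set the leftmost bit */
            referenced[p] = false;
        }
    }

    /* On a page fault, evict the page with the lowest counter value. */
    static int aging_victim(const uint8_t counter[NPAGES])
    {
        int victim = 0;
        for (int p = 1; p < NPAGES; p++)
            if (counter[p] < counter[victim])
                victim = p;
        return victim;
    }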
45The Working Set
- Demand paging: pages are loaded only on demand, not in advance.
- Locality of reference: during any phase of execution, the process references only a relatively small fraction of its pages.
- Working set: the set of pages that a process is currently using.
- What to do when a process is brought back in again?
- Working set model: many paging systems keep track of each process's working set and make sure that it is in memory before letting the process run.
46The Working Set Page Replacement Algorithm (1)
- The working set is the set of pages used by the k most recent memory references
- w(k, t) is the size of the working set at time t
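A brute-force sketch of w(k, t): count the distinct pages among the k most recent references at time t. This is purely illustrative; real systems approximate the working set with reference bits and timestamps rather than scanning the reference string.

    /* w(k, t): number of distinct pages among the k references ending at
     * time t, i.e. refs[t-k+1] .. refs[t]. */
    static int working_set_size(const int *refs, int t, int k)
    {
        int start = (t - k + 1 < 0) ? 0 : t - k + 1;
        int distinct = 0;
        for (int i = start; i <= t; i++) {
            int seen_before = 0;
            for (int j = start; j < i; j++)
                if (refs[j] == refs[i])
                    seen_before = 1;
            if (!seen_before)
                distinct++;           /* first occurrence in the window */
        }
        return distinct;
    }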
47The Working Set Page Replacement Algorithm (2)
- The working set algorithm
48The WSClock Page Replacement Algorithm
- Operation of the WSClock algorithm
49Review of Page Replacement Algorithms
50Chapter 4 Memory Management
- 4.1 Basic memory management
- 4.2 Swapping
- 4.3 Virtual memory
- 4.4 Page replacement algorithms
- 4.5 Design issues for paging systems
- 4.6 Implementation issues
- 4.7 Segmentation
51Local vs Global Allocation Policies
How should memory be allocated among the competing runnable processes?
- (a) Original configuration
- (b) Local page replacement
- (c) Global page replacement
52Local vs Global Allocation Policies (2)
- Another approach
- allocate page frames to processes.
- a. periodically determine the number of running processes and allocate each process an equal share
- b. pages can be allocated in proportion to each process's total size
53Local vs Global Allocation Policies (3)
- Page fault rate as a function of the number of
page frames assigned
54Operating Systems
55Load Control
- Despite good designs, the system may still thrash (a program causing page faults every few instructions)
- When the PFF (page fault frequency) algorithm indicates that
- some processes need more memory
- but no processes need less
- Solution: reduce the number of processes competing for memory
- swap one or more to disk, divide up the pages they held
- reconsider the degree of multiprogramming (CPU bound or I/O bound)
56Page Size
- Arguments for a small page size (typically 4 KB or 8 KB); see the sketch after this list.
- Advantages
- less internal fragmentation
- better fit for various data structures, code sections
- less unused program in memory
- Disadvantages
- programs need many pages, hence larger page tables
- transferring a small page takes almost as much time as transferring a large page
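These two pressures can be quantified with a standard overhead estimate (not shown on the slide): with average process size s, page size p, and e bytes per page table entry, the per-process waste is roughly s·e/p for the page table plus p/2 of internal fragmentation, which is minimized near p = sqrt(2se). A small numeric sketch, assuming s = 1 MB and e = 8 bytes:

    #include <stdio.h>

    int main(void)
    {
        double s = 1 << 20;     /* assumed average process size: 1 MB */
        double e = 8;           /* assumed bytes per page table entry */
        int sizes[] = { 512, 1024, 4096, 8192, 16384 };

        /* overhead = page table space (s*e/p) + internal fragmentation (p/2) */
        for (int i = 0; i < 5; i++) {
            double p = sizes[i];
            printf("page size %5d B -> overhead %.0f B\n",
                   sizes[i], s * e / p + p / 2.0);
        }
        return 0;   /* the minimum falls near sqrt(2*s*e) = 4096 B here */
    }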
57Separate Instruction and Data Spaces
- One address space
- Separate I and D spaces
58Shared Pages
- Two processes sharing the same program share its page table (not all pages are sharable)
59Cleaning Policy
- Need for a background process, the paging daemon
- periodically inspects the state of memory to ensure a plentiful supply of free page frames
- When too few frames are free
- it selects pages to evict using a replacement algorithm
- It can use the same circular list (clock)
60Chapter 4 Memory Management
- 4.1 Basic memory management
- 4.2 Swapping
- 4.3 Virtual memory
- 4.4 Page replacement algorithms
- 4.5 Design issues for paging systems
- 4.6 Implementation issues
- 4.7 Segmentation
61Operating System Involvement with Paging
- Four times when OS involved with paging
- Process creation time
- determine program size
- create page table
- space has to be allocated in memory for the page
table and it has to be initialized - Process execution time
- MMU reset for new process
- TLB flushed
- Page fault time
- determine virtual address causing fault
- swap target page out, needed page in
- Process termination time
- release page table, pages, and the disk space
that the pages occupy when they are on disk.
62Page Fault Handling (in detail)
- Hardware traps to kernel, saving the program
counter on the stack. - General registers saved
- OS determines which virtual page needed
- OS checks validity of address, seeks free page
frame - If selected frame is dirty, write it to disk
- OS schedules new page in from disk
- Page tables updated
- Faulting instruction backed up to when it began
- Faulting process scheduled
- Registers restored, Program continues
63Backing Store
- (a) Paging to static swap area (always a shadow
copy on disk) - (b) Backing up pages dynamically (no copy on disk)
64Chapter 4 Memory Management
- 4.1 Basic memory management
- 4.2 Swapping
- 4.3 Virtual memory
- 4.4 Page replacement algorithms
- 4.5 Design issues for paging systems
- 4.6 Implementation issues
- 4.7 Segmentation
65Segmentation
A compiler has many tables that are built up as
compilation proceeds
- One-dimensional address space with growing tables
- One table may bump into another
66Segmentation (2)
Segmentation provides many completely independent address spaces.
- Allows each table to grow or shrink, independently
67Segmentation (3)
- Advantages
- Simplify the handling of data structures that are
growing or shrinking - The linking up of procedures compiled separately
is greatly simplified - Facilitates sharing procedures or data between
several processes - Different segments can have different kinds of
protection
68Segmentation (4)
- Comparison of paging and segmentation
69Implementation of Pure Segmentation
- (a)-(d) Development of checkerboarding
- (e) Removal of the checkerboarding by compaction
70Segmentation with Paging
- If the segments are large, it may be inconvenient, or even impossible, to keep them in main memory in their entirety.
- MULTICS combined the advantages of paging (uniform page size and not having to keep a whole segment in memory if only part of it is being used) with the advantages of segmentation (ease of programming, modularity, protection, and sharing).
71Segmentation with Paging MULTICS (1)
- Descriptor segment points to page tables
- Segment descriptor (the numbers are the field lengths)
72Segmentation with Paging MULTICS (2)
- A 34-bit MULTICS virtual address consists of two parts: the segment number and the address within the segment. The address within the segment is further divided into a page number and a word within the page.
- The segment number is used to find the segment descriptor.
- Check whether the segment's page table is in memory: locate it if YES; a fault occurs if NO.
- Examine the page table entry for the requested page: extract the page frame if the page is in memory; a page fault occurs if not.
- The offset is added to the page origin to give the main memory address.
- The read or store finally takes place.
73Segmentation with Paging MULTICS (3)
- Conversion of a 2-part MULTICS address into a
main memory address
74Segmentation with Paging MULTICS (4)
- Simplified version of the MULTICS TLB
- Existence of 2 page sizes makes actual TLB more
complicated
76Segmentation with Paging
- MULTICS has 256K independent segments, each up to 64K 36-bit words.
- The Intel Pentium has 16K independent segments, each up to 1 billion 32-bit words.
- Few programs need more than 1000 segments, but many programs need large segments.
- The heart of the Pentium virtual memory consists of two tables, the LDT (Local Descriptor Table) and the GDT (Global Descriptor Table).
- Each program has its own LDT, but there is a single GDT, shared by all the programs.
- The LDT describes segments local to each program, including its code, data, stack, and so on. The GDT describes system segments, including the OS itself.
77Segmentation with Paging Pentium (1)
78Segmentation with Paging Pentium (2)
- Pentium code segment descriptor
- Data segments differ slightly
79Segmentation with Paging Pentium (3)
- Conversion of a (selector, offset) pair to a
linear address
80Segmentation with Paging Pentium (4)
- Mapping of a linear address onto a physical
address
81Segmentation with Paging Pentium (5)
- Protection on the Pentium (protection levels)
82Homework
- P250, No.14
- P251, No.24
- P251, No.28