Title: Chapter 8: (RAM) Main Memory: Outline
1. Chapter 8: (RAM) Main Memory: Outline
- RAM: one of the resources an O/S manages
- Memory addressing
- Types of addresses; address binding
- Physical vs. logical address generation
- Hardware support for memory management
- MMU (Memory Management Unit); memory protection
- Memory allocation
- How and when do we allocate RAM to processes? Includes consideration of both code (text) and data
- Dynamic loading, dynamic linking, swapping, contiguous memory allocation
- Paging
- Segmentation
2. Types of RAM Memory Addresses
- Symbolic addresses
- Can appear in programming language code and in object modules
- Translated to relocatable or absolute addresses through linking
- Relocatable addresses
- Can be bound to specific addresses after compile time
- By a dynamic linking process
- Addresses are generated relative to part of the program, not to the start of physical RAM (e.g., 0 is the first byte of the program)
- Absolute (physical) addresses
- Specific addresses in RAM, relative to 0 at the start of RAM
- Eventually all program addresses must resolve to absolute addresses
3. Using nm on Unix systems
- Shows the symbols in object modules
- Compile a program with the -c option to get an object module (example below)
- Or build a kernel module on a Linux system
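For instance, a minimal sketch (the file name and symbols are illustrative, not from the slides): compiling this file with gcc -c example.c and then running nm example.o lists its symbols and their types.

    /* example.c -- compile with: gcc -c example.c
       then inspect symbols with: nm example.o                                */

    int initialized_global = 42;   /* data symbol (nm type 'D')               */
    int uninitialized_global;      /* BSS/common symbol (nm type 'B' or 'C')  */

    extern int printf(const char *fmt, ...); /* undefined here (nm type 'U')  */

    static int helper(int x)       /* file-local text symbol (nm type 't')    */
    {
        return x * 2;
    }

    int main(void)                 /* global text symbol (nm type 'T')        */
    {
        printf("%d\n", helper(initialized_global));
        return 0;
    }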
4. Address Binding
- Definition
- Converting the (relative or symbolic) address used in a program to an actual physical (absolute) RAM address
- Address binding time
- During compilation (or assembly)
- But we often don't know where the program will be loaded when it is compiled (or assembled)
- During load time
- For a program to be initially loaded, decisions must be made about where it will execute in memory, so at least initial specific addresses must be bound
- This depends on the kind of memory allocation strategy used
- During execution (typical for a general-purpose OS)
- We may want to move a program, during execution, from one region of memory to another
5. Runtime Address Generation
- When are RAM addresses generated as a machine language program runs, i.e., when does the running program, on the CPU, generate addresses in order to access RAM?
- A process comprises a series of bytes of RAM
- Some of those bytes are data, some are machine instructions (text)
- Addresses are generated via the PC (program counter), indexing, indirection, literal addresses, etc.
6. Consider the following fragment of an assembly language program
When are addresses generated when this program runs?
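The assembly fragment itself did not survive the transcript; as a stand-in, here is a small C sketch (all names are illustrative) annotated with where addresses are generated when it runs.

    #include <stdio.h>

    int counter = 0;                 /* global: its address is used by loads/stores  */
    int table[4] = { 10, 20, 30, 40 };

    int main(void)
    {
        int *p = &counter;           /* address of a global taken as a literal value */
        for (int i = 0; i < 4; i++)  /* each instruction fetch uses the PC           */
            *p += table[i];          /* indexing: address of table plus i;
                                        indirection: the address stored in p         */
        printf("%d\n", counter);
        return 0;
    }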
7. CPU instruction execution cycle
- Repeat forever
- Check for hardware interrupts
- Fetch instruction from memory (RAM or cache)
- Decode the instruction; this may include fetching operands from memory
- Store results back to memory, if needed
Address Generation
Address space: the set of addresses generated by a running program
8. Memory Management Hardware
Address generation
- CPU generates a logical address (a kind of relocatable address)
- MMU translates the logical address to a physical address (an absolute address), and puts it in the MAR
- The physical address is used to access RAM (i.e., to load or store the value at that address)
9. MMU: Memory Management Unit
- Translates logical addresses to physical addresses
- I.e., performs address binding
- Also enables memory protection
- General memory protection problem
- Check each address generated by the program running on the CPU to see whether it is within the address space of the process
- If not, generate an error (e.g., a page fault, or terminate the process)
- MMU is implemented in hardware
- Memory protection generally requires support from hardware
- Why not software?
10. MMU: Some Hardware Possibilities
- Relocation register
- Base and limit registers
- Page table support
- Segmentation support
11. Relocation Register
Can this provide memory protection?
Figure 8.4: Dynamic relocation using a relocation register (the relocation register defines the start of an address space)
12. Base and Limit Registers (Modified Fig. 8.2, p. 317)
MMU
Provides memory protection, but no relocation
13. A base and a limit register can define a logical address space (Fig. 8.1)
- Notice that this kind of MMU support requires that a process be allocated a contiguous series of physical memory addresses (a small sketch of the check follows)
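A minimal sketch of the kind of translation and protection these registers provide, assuming the limit register holds the size of the logical address space and the relocation (base) register holds the start of the partition; the register values and function names are illustrative.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint32_t limit_reg      = 120900;  /* size of logical address space */
    static uint32_t relocation_reg = 300040;  /* start of the process in RAM   */

    /* Translate a logical address; trap to the OS if it is out of range. */
    static uint32_t translate(uint32_t logical)
    {
        if (logical >= limit_reg) {
            fprintf(stderr, "addressing error: trap to operating system\n");
            exit(EXIT_FAILURE);
        }
        return relocation_reg + logical;      /* relocation: add the base */
    }

    int main(void)
    {
        printf("logical 0    -> physical %u\n", translate(0));
        printf("logical 1000 -> physical %u\n", translate(1000));
        return 0;
    }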
14. Main Issues in Memory Management
- How do we represent the address space of a process?
- Logically, the address space is a contiguous series of addresses (i.e., the text and data of the process)
- In physical RAM, we can either (a) map the entire address space to physically contiguous addresses, or (b) break it into smaller pieces in some manner
- When and where do we load or unload an address space from disk into RAM, and back?
- If the address space is physically contiguous, load all or none
- If the program is loaded in smaller pieces, we have more options
15. Contiguous Memory Allocation
- O/S kernel divides user memory into relatively large partitions
- Keeps a table recording these partitions
- One process per partition
- Partition methods
- Fixed-size partitions
- With a single partition per process, there is a static (fixed) limit on the degree of multiprogramming
- Variable-size partitions
16. Contiguous Memory Allocation: Variable-Size Partitions
- This is a kind of dynamic memory allocation
- O/S kernel keeps a table describing partitions: the start and end addresses of allocated partitions, and of unallocated partitions
- Unallocated partitions are "holes"
- When memory is returned, merge it with any neighboring hole
- Allocation of a new partition of size n (see the sketch after this list)
- First fit, best fit, worst fit
- First fit may be faster, and gives results similar to best fit
- Fragmentation
- External
- Free blocks of memory too small to be used
- Compaction can be used; it requires relocatable code
- Internal
- Allocating partitions larger than requested leaves unused space in a process's partition (more typical with fixed-size partitions)
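A minimal sketch (not from the text; names are illustrative) of first-fit and best-fit selection over a table of holes.

    #include <stddef.h>

    struct hole { size_t start; size_t size; };

    /* First fit: index of the first hole large enough for n bytes, or -1. */
    static int first_fit(const struct hole *holes, int nholes, size_t n)
    {
        for (int i = 0; i < nholes; i++)
            if (holes[i].size >= n)
                return i;
        return -1;
    }

    /* Best fit: index of the smallest hole that is still large enough, or -1. */
    static int best_fit(const struct hole *holes, int nholes, size_t n)
    {
        int best = -1;
        for (int i = 0; i < nholes; i++)
            if (holes[i].size >= n &&
                (best < 0 || holes[i].size < holes[best].size))
                best = i;
        return best;
    }

First fit can stop at the first adequate hole, while best fit must scan the whole table, which is one reason first fit is often faster in practice.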
17. What do we do when we run out of memory?
- E.g., all of the memory partitions the operating system makes available for user processes are full
- Options
- System.exit
- Something else
18. Swapping is one solution (Fig. 8.5)
19. Loading a Program in Parts: Some Possibilities
- 1) Don't load libraries along with the program; instead, load them as needed
- 2) Dynamic loading
- A program component (e.g., method, subroutine) is not loaded until it is called
- The calling program checks whether a component has been loaded yet
- If not, it calls the loader
- 3) Dynamic linking
- Libraries are loaded and linked on an as-needed basis
- Not all executable programs need to have statically linked copies of the standard I/O libraries
- 4) Other possibilities include
- Paging
- Segmentation
20. Paging
- Non-contiguous memory allocation
- Breaks the program up into fixed-size, relatively small units (partitions) of RAM
- Within paging, we are still thinking about loading all of a process into RAM at once
- Physical RAM is typically divided into relatively small physical frames
- Fixed-size blocks, e.g., 1024 bytes
- The address space of a process is divided into a sequence of logical pages
- Frame size is the same as page size
21. Example
- We might have a process with a 3-page logical address space
- And physical RAM consisting of 6 frames
1024-byte frame size
1024-byte page size
22. Paged Memory Allocation
- The logical address space of a process is contiguous
- However, in real memory, its frames can be non-contiguous
- Each logical page of a process can be stored in any (different) physical frame of RAM
23. Example, Continued
Physical Memory
Program (process)
- So, page 0 might be stored in frame 5
- And, page 1 might be stored in frame 1
- And, page 2 might be stored in frame 0
24. MMU with Paging
The MMU needs information about the correspondence between logical pages and physical frames
How might the OS represent the information needed by the MMU?
25. Page Table
- Each process with an address space needs to have a page table
- Maintained by the operating system (in kernel space)
- Part of the PCB for a process
- Memory management information
- The page table maps from logical page numbers of a process to frame numbers of physical RAM (see the sketch below)
- PageTable[LogicalPageNumber] = PhysicalFrameNumber
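A minimal sketch of a page table as an array and the resulting logical-to-physical translation, using the earlier example's 1024-byte pages and page-to-frame assignments (0 -> 5, 1 -> 1, 2 -> 0); the names are illustrative.

    #include <stdint.h>

    #define PAGE_SIZE 1024u                  /* bytes per page and per frame      */
    #define NUM_PAGES 3u                     /* logical pages in this process     */

    /* page_table[logical page number] = physical frame number */
    static uint32_t page_table[NUM_PAGES] = { 5, 1, 0 };

    /* Translate a logical address to a physical address. */
    static uint32_t translate(uint32_t logical)
    {
        uint32_t page   = logical / PAGE_SIZE;   /* top bits: page number    */
        uint32_t offset = logical % PAGE_SIZE;   /* bottom bits: page offset */
        return page_table[page] * PAGE_SIZE + offset;
    }
    /* e.g., translate(0)    == 5 * 1024 + 0   == 5120
             translate(1024) == 1 * 1024 + 0   == 1024
             translate(3000) == 0 * 1024 + 952 ==  952                        */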
26. Example
27. Solution: Page Table
- Page tables have one entry (row) per logical page of the process
- Each entry comprises (at least) a frame number
28. (Figure only; no transcript)
29. Example, Still Continued
Physical Memory
Program (process)
With a 1024-byte page/frame size, what physical addresses correspond to logical addresses 0, 1024, and 3000? (base-10 addresses)
30. RAM Memory Element Alignment
- What is the structure of the addresses of RAM elements when
- the elements (and their addresses) are power-of-2 sizes, and
- the elements start at address 0 of physical RAM?
- Consider elements of size 2, 4, 8, 256, 1024
- From the address of an element, how do you tell whether it is N-byte aligned? (See the sketch below.)
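One common way to check, assuming N is a power of 2: an N-byte-aligned address has its low log2(N) bits equal to zero. A minimal sketch:

    #include <stdbool.h>
    #include <stdint.h>

    /* True if addr is N-byte aligned (n must be a power of 2):
       an aligned address has its low log2(n) bits all zero.   */
    static bool is_aligned(uint64_t addr, uint64_t n)
    {
        return (addr & (n - 1)) == 0;
    }
    /* e.g., is_aligned(2048, 1024) is true; is_aligned(3000, 1024) is false. */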
31. Address Structure
- Usually the page/frame size is a power of two
- E.g., 1024, 2048, 4096
- And pages and frames start at address 0
- This separates the structure of memory addresses into two parts
- a) Top bits: page/frame number
- b) Bottom bits: page/frame offset
- bits to index within a page (and frame)
- Example
- 16-bit logical address
- 1024-byte page size
- Offset is the lower 10 bits: 2^10 = 1024, i.e., log2(1024) = 10
- need 10 bits to index within a 1024-byte page
- Top 6 bits for the page number (see the sketch below)
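A minimal sketch (names are illustrative) of extracting the 6-bit page number and 10-bit offset from a 16-bit logical address with 1024-byte pages.

    #include <stdint.h>

    #define OFFSET_BITS 10u   /* 1024-byte pages => 10 offset bits */

    static uint16_t page_number(uint16_t logical)
    {
        return logical >> OFFSET_BITS;               /* top 6 bits     */
    }

    static uint16_t page_offset(uint16_t logical)
    {
        return logical & ((1u << OFFSET_BITS) - 1);  /* bottom 10 bits */
    }
    /* e.g., for logical address 2058: page_number(2058) == 2,
             page_offset(2058) == 10.                                  */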
32. Example
Physical Memory
With a 1024-byte page/frame size and 16-bit addresses: a) Give the page table (with entries in base 10). b) Give the binary logical and physical addresses for base-10 logical addresses 0, 1024, and 2058.
Program (process)
(Arrows represent the page table)
33. What Does the MMU with Paging Look Like?
34. Example
- What about 32-bit logical addresses with a 4096-byte page size?
- What are the
- Number of bits in a page number?
- Number of bits in an offset?
- Maximum number of logical pages per process?
- Maximum number of bytes in the logical address space of a process?
- Maximum number of physical frames per process? (One way to work these out follows.)
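One way to work these out, under the stated assumptions: 4096 = 2^12, so the offset is 12 bits and the page number is 32 - 12 = 20 bits; the maximum number of logical pages per process is 2^20 = 1,048,576; and the maximum logical address space is 2^32 bytes = 4 GiB. The number of physical frames is bounded by the size of physical memory and the width of physical addresses, not by any single process.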
35. Memory Protection
- Memory protection in a paged environment is accomplished in two ways
- A) Only frames that are mapped through the page table can be accessed
- B) Page table entries can be extended to include protection information (a page-table length register can accomplish part of this goal); a small sketch follows
- Illegal page accesses are trapped by the operating system (software interrupts)
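A minimal sketch (field names are illustrative) of a page table entry extended with protection bits, and the kind of check made before translation.

    #include <stdbool.h>
    #include <stdint.h>

    struct pte {
        uint32_t frame    : 20;  /* physical frame number               */
        uint32_t valid    : 1;   /* entry maps a page of this process   */
        uint32_t writable : 1;   /* write access allowed                */
    };

    /* True if the access is legal; otherwise the hardware traps to the OS. */
    static bool access_ok(const struct pte *entry, bool is_write)
    {
        return entry->valid && (!is_write || entry->writable);
    }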
36. Memory Protection (Figure 8.12)
37. (Figure only; no transcript)
38. MMU
Figure 8.7: Paging Hardware
- Typically, not all of the page table is stored in fast memory in the MMU
- Instead, a cache (associative memory) in the MMU is used to store only some of the entries
- Called the translation look-aside buffer (TLB)
39. Translation Look-aside Buffer (TLB)
- Keep some page table entries in special, fast cache memory (part of the MMU)
- Fast associative memory (parallel lookup)
- Translation Look-aside Buffer (TLB)
- TLB entries contain (slightly modified) page table entries
- Add a key to each entry to make it a TLB entry
- Each entry
- key (logical page #) and value (physical frame #)
- Operation of the TLB: given a key (logical page #), it translates to a value (physical frame #)
- Does this page -> frame mapping quickly!
- TLB is relatively small
- E.g., between 64 and 1024 TLB cache entries (see the sketch after this list)
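A minimal sketch of TLB lookup during translation; real hardware searches all entries in parallel (associative memory), so the linear scan here only stands in for that behavior, and the sizes and names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 64

    struct tlb_entry {
        bool     valid;
        uint32_t page;    /* key:   logical page number   */
        uint32_t frame;   /* value: physical frame number */
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Returns true on a TLB hit and stores the frame number in *frame. */
    static bool tlb_lookup(uint32_t page, uint32_t *frame)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].page == page) {
                *frame = tlb[i].frame;
                return true;          /* hit: no page-table access needed   */
            }
        }
        return false;                 /* miss: walk the page table in RAM   */
    }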
40. TLB Operation
- On a TLB hit, rapidly obtain the frame # for the page #
- Then access RAM to reach the element
- On a TLB miss, we have to make two RAM accesses
- First put the <key, value> pair into the TLB, from the page table (an access to RAM for the page table)
- Then access RAM to reach the element of data
- Generally then, with good locality of reference, we have only one access to RAM (to reach the element)
41. Figure 8.11: Paging Hardware with TLB (modified)
Diagram labels: CPU, MMU, logical address, TLB miss, physical memory (RAM); example instruction: load R1, address
42. Effective Access Time (EAT)
- Memory access time (effective access time) will include (a) time for accessing RAM and (b) time for accessing the TLB
- Need to take into account TLB hits and TLB misses
- Calculate the effective (actual) time with which memory is accessed
- Need information about TLB hits and misses
- EAT = P(hit) * hit-time + P(miss) * miss-time
- Given
- Memory access: 100 nanoseconds
- TLB access: 20 nanoseconds
- 85% of the time we get a TLB hit
- What is hit-time? Miss-time? P(miss)? P(hit)?
- What is EAT? (A worked version follows.)
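One way to work it out, assuming the TLB is consulted before every memory reference: hit-time = 20 + 100 = 120 ns; miss-time = 20 + 100 + 100 = 220 ns (TLB lookup, page-table access in RAM, then the data access); P(hit) = 0.85 and P(miss) = 0.15; so EAT = 0.85 * 120 + 0.15 * 220 = 102 + 33 = 135 ns.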
43. Assumption Underlying the TLB
- In the last example, we assumed that 85% of accesses would be TLB hits
- Why would we expect a relatively large percentage of TLB hits?
- Locality of reference!
- The program code and/or data that a process is using at any given time comprise a relatively small collection of pages
44. Context Switching
- When a process with a different address space is started, we may not be able to use the same TLB entries. Why?
- Because the new address space has a different page table, which provides different mappings from logical page numbers to frame numbers.
- What can be done?
- Some TLBs have entries that consist of a page number plus an ASID (address space identifier). The ASID allows the TLB to hold entries from multiple address spaces at the same time.
- Otherwise, flush the TLB entries: all entries for the new address space then have to be re-cached, which can be expensive in terms of time
45. Threads vs. Heavyweight Processes
- Using paged memory management, what are the advantages of threads as compared to heavyweight processes?
- Two issues here
- (1) the amount of memory taken up, per process, for the page table
- (2) context switch time is increased by having to initially cache the page table entries into the TLB from RAM
- Remember: RAM (DRAM) is much slower than the high-speed memory of the CPU and MMU (SRAM)
46. Fragmentation in Paging
- External
- With paging, the external fragmentation problem is solved
- Every frame of memory can be used
- Internal
- An average of ½ frame of memory per process is lost
- Because, in general, a process will not have a size in bytes that is evenly divisible by the page size
- Page size is a factor
- Smaller pages: less space lost to fragmentation
- BUT larger pages have less overhead (e.g., fewer entries in the page table), and this may be the more important issue
47. Page Table Representation
- One strategy
- Each page table contains the full set of possible entries, and this full page table is kept in RAM
- Number of entries in the page table = 2^(number of page-number bits)
- E.g., a 1024-byte page/frame size and 16-bit addresses give a 6-bit page number
- 2^6 = 64 entries in each page table
- Question: What is the minimum size (in bytes) of a per-process page table for (a) a 2048-byte and (b) a 4096-byte page size, with 32-bit logical and physical addresses, where the page table contains the full set of possible entries? (One way to set this up follows.)
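One way to set this up, under the stated assumptions: the table has 2^(32 - offset bits) entries, and each entry must hold at least a (32 - offset bits)-bit frame number. For (a), 2048 = 2^11, so there are 2^21 entries of at least 21 bits each; for (b), 4096 = 2^12, so there are 2^20 entries of at least 20 bits each. If each entry is rounded up to whole bytes (3 bytes), that gives roughly 2^21 * 3 = 6 MiB and 2^20 * 3 = 3 MiB, respectively.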
48. Alternative Page Table Structures
- Within what we've talked about, we'll need to keep page tables contiguously allocated
- However, in general, keeping a single contiguously-allocated page table may not be feasible
- Why?
- Because much of our other RAM is allocated non-contiguously in frames, we may not be able to obtain such large contiguous regions of memory
49. Two-Level Paging: Page the Page Table (Fig. 8.14)
50. Address Structure for Two-Level Paging (pp. 338-339)
Logical address fields, from high bits to low bits: outer page number | inner page number | offset (a small sketch of the split follows)
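A minimal sketch of splitting a 32-bit logical address for two-level paging, assuming a 10-bit outer page number, 10-bit inner page number, and 12-bit offset (4096-byte pages); the field widths and names are illustrative.

    #include <stdint.h>

    #define OFFSET_BITS 12u
    #define INNER_BITS  10u

    struct split { uint32_t outer, inner, offset; };

    static struct split split_address(uint32_t logical)
    {
        struct split s;
        s.offset = logical & ((1u << OFFSET_BITS) - 1);
        s.inner  = (logical >> OFFSET_BITS) & ((1u << INNER_BITS) - 1);
        s.outer  =  logical >> (OFFSET_BITS + INNER_BITS);
        return s;
    }
    /* Translation then takes two table lookups:
           inner_table = outer_page_table[s.outer];
           frame       = inner_table[s.inner];
           physical    = frame * PAGE_SIZE + s.offset;                  */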
51. Segmentation
- Paging divides memory into equal-sized units (pages, frames)
- Segmentation divides memory into different-sized units, depending on program parts
- E.g., stack, data area, code area
- See Figure 8.18 of the text (p. 343)
- Logical addresses consist of pairs
- <segment number, offset> (a small translation sketch follows this list)
- May have paging within segments
- e.g., Intel Pentium
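A minimal sketch of segmented address translation using a segment table of <base, limit> pairs; the table values and names are illustrative, not taken from the text's figure.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct segment { uint32_t base; uint32_t limit; };

    /* Hypothetical segment table: e.g., code, data, and stack segments. */
    static struct segment segment_table[] = {
        { 1400, 1000 },   /* segment 0 */
        { 6300,  400 },   /* segment 1 */
        { 4300, 1100 },   /* segment 2 */
    };

    static uint32_t translate(uint32_t seg, uint32_t offset)
    {
        if (offset >= segment_table[seg].limit) {
            fprintf(stderr, "segmentation error: trap to operating system\n");
            exit(EXIT_FAILURE);
        }
        return segment_table[seg].base + offset;   /* physical address */
    }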
52. MMU
53.
- Given that the segment number is the upper 6 bits of the address and the offset is the lower 10 bits, convert the logical address to a physical address for the base-10 addresses 2,101 and 2,900 (one way to start is worked below)
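One way to start, assuming 16-bit logical addresses so that the 10-bit offset spans 0-1023: 2,101 = 2 * 1024 + 53, so segment number 2 with offset 53; 2,900 = 2 * 1024 + 852, so segment number 2 with offset 852. Each physical address is then the base of segment 2 (from the segment table on the slide) plus the offset, after checking the offset against that segment's limit.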