Chapter 12 Memory Management (Transcript)

1
Chapter 12 Memory Management
2
Objectives
  • Discuss the following topics:
  • Memory Management
  • The Sequential-Fit Methods
  • The Nonsequential-Fit Methods
  • Garbage Collection
  • Case Study: An In-Place Garbage Collector

3
Memory Management
  • The heap is the region of main memory from which
    portions of memory are dynamically allocated upon
    request of a program
  • The memory manager is responsible for:
  • Maintaining the list of free memory blocks
  • Assigning specific memory blocks to user
    programs
  • Reclaiming blocks that are no longer needed and
    returning them to the memory pool

4
Memory Management (continued)
  • The memory manager is responsible for:
  • Scheduling access to shared data
  • Moving code and data between main and secondary
    memory
  • Keeping one process's memory protected from
    another
  • External fragmentation amounts to the presence of
    wasted space between allocated segments of memory

5
Memory Management (continued)
  • Internal fragmentation amounts to the presence of
    unused memory inside the segments
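  • For example, if a request for 100 bytes is
    satisfied with a 128-byte block, the 28 unused
    bytes left inside that block constitute internal
    fragmentation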

6
The Sequential-Fit Methods
  • In the sequential-fit methods, all available
    memory blocks are linked, and the list is
    searched to find a block whose size is larger
    than or the same as the requested size
  • The first-fit algorithm allocates the first block
    of memory large enough to meet the request
  • The best-fit algorithm allocates a block that is
    closest in size to the request

7
The Sequential-Fit Methods (continued)
  • The worst-fit method finds the largest block on
    the list so that the remaining part is large
    enough to be used in later requests
  • The next-fit method allocates the next available
    block that is sufficiently large
  • The way the blocks are organized on the list
    determines how fast the search for an available
    block succeeds or fails
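
To make such a search concrete, here is a minimal first-fit sketch in
Java over a singly linked list of free blocks; the FreeBlock class, the
field names, and the block-splitting policy are illustrative assumptions,
not code from the book.

    // Minimal first-fit sketch (illustrative; names and splitting policy are assumptions).
    class FreeBlock {
        int address, size;
        FreeBlock next;
        FreeBlock(int address, int size, FreeBlock next) {
            this.address = address; this.size = size; this.next = next;
        }
    }

    class FirstFitAllocator {
        FreeBlock freeList;                          // head of the linked list of free blocks

        // Returns the address of an allocated block, or -1 if no block is large enough.
        int allocate(int reqSize) {
            FreeBlock prev = null, cur = freeList;
            while (cur != null) {
                if (cur.size >= reqSize) {           // first block that fits is taken
                    int addr = cur.address;
                    if (cur.size == reqSize) {       // exact fit: unlink the whole block
                        if (prev == null) freeList = cur.next;
                        else prev.next = cur.next;
                    } else {                         // otherwise keep the remainder on the list
                        cur.address += reqSize;
                        cur.size -= reqSize;
                    }
                    return addr;
                }
                prev = cur;
                cur = cur.next;
            }
            return -1;                               // the search failed
        }
    }

Best fit would scan the whole list and remember the smallest block that
still fits; next fit would resume the scan from where the previous search
stopped instead of from the head of the list.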

8
The Sequential-Fit Methods (continued)
Figure 12-1 Memory allocation using
sequential-fit methods
9
The Nonsequential-Fit Methods
  • An adaptive exact-fit technique dynamically
    creates and adjusts storage block lists that fit
    the requests exactly
  • In adaptive exact-fit, a size-list is maintained
    that holds block lists for the sizes returned to
    the memory pool during the last T allocations
  • The exact-fit method disposes of an entire block
    list if no request for a block of its size arrives
    during the last T allocations

10
The Nonsequential-Fit Methods (continued)
    t = 0
    allocate(reqSize)
        t++
        if a block list bl with reqSize blocks is on sizeList
            lastref(bl) = t
            b = head of blocks(bl)
            if b was the only block accessible from bl
                detach bl from sizeList
        else b = search-memory-for-a-block-of(reqSize)
        dispose of all block lists on sizeList for which t - lastref(bl) >= T
        return b
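
A rough Java sketch of the size-list bookkeeping in the pseudocode above;
the choice of a HashMap keyed by block size, the constant T, and the
heap-search stub are assumptions made only for illustration.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    // Adaptive exact-fit bookkeeping sketch (illustrative, not the book's implementation).
    class AdaptiveExactFit {
        static final int T = 100;                 // disposal window (assumed value)
        long t = 0;                               // allocation counter
        Map<Integer, Deque<Integer>> sizeList = new HashMap<>();  // block size -> free block addresses
        Map<Integer, Long> lastRef = new HashMap<>();             // block size -> time of last request

        int allocate(int reqSize) {
            t++;
            Integer addr;
            Deque<Integer> blocks = sizeList.get(reqSize);
            if (blocks != null) {                 // a block list of exactly this size exists
                lastRef.put(reqSize, t);
                addr = blocks.pollFirst();
                if (blocks.isEmpty()) sizeList.remove(reqSize);   // detach the emptied list
            } else {
                addr = searchHeapForBlock(reqSize);               // fall back to a heap search (stub)
            }
            // Dispose of block lists whose size was not requested during the last T allocations.
            // (A real implementation would also return those blocks to the general heap.)
            sizeList.keySet().removeIf(size -> t - lastRef.getOrDefault(size, 0L) >= T);
            return addr == null ? -1 : addr;
        }

        // Returning a block adds it to the list for its exact size, creating the list if needed.
        void free(int addr, int size) {
            sizeList.computeIfAbsent(size, s -> new ArrayDeque<>()).addFirst(addr);
        }

        Integer searchHeapForBlock(int reqSize) { return null; }  // placeholder for the heap search
    }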

11
The Nonsequential-Fit Methods (continued)
Figure 12-2 An example configuration of a
size-list and heap created
by the adaptive exact-fit method
12
Buddy Systems
  • Nonsequential memory management methods, or buddy
    systems, do not assign memory in sequential slices;
    instead, memory is divided into two buddies that
    are merged whenever possible
  • In the buddy system, two buddies are never both
    free at the same time
  • A free block either has a buddy that is used by
    the program, or it has no buddy at all

13
Buddy Systems (continued)
  • In the binary buddy system each block of memory
    (except the entire memory) is coupled with a
    buddy of the same size that participates with the
    block in reserving and returning chunks of memory
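
Because blocks in the binary buddy system have power-of-two sizes and are
aligned to their size, a block's buddy can be found by flipping a single
address bit. The Java fragment below is a standard illustration of that
computation; measuring offsets from the start of the heap is an assumption.

    // Buddy address computation for the binary buddy system (illustrative sketch).
    // Assumes blockSize is a power of two and offset is the block's distance from
    // the start of the heap, aligned to blockSize.
    final class BinaryBuddy {
        static long buddyOf(long offset, long blockSize) {
            return offset ^ blockSize;           // flip the bit that separates the two buddies
        }

        public static void main(String[] args) {
            System.out.println(buddyOf(32, 16)); // a 16-byte block at offset 32 has its buddy at 48
            System.out.println(buddyOf(48, 16)); // the relation is symmetric: prints 32
        }
    }

When a block is returned and its buddy is also free, the two can be
coalesced into a single block of twice the size at the smaller of the two
offsets.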

14
Buddy Systems (continued)
Figure 12-3 Block structure in the binary buddy
system
15
Buddy Systems (continued)
Figure 12-4 Reserving three blocks of memory
using the binary buddy
system
16
Buddy Systems (continued)
Figure 12-4 Reserving three blocks of memory
using the binary buddy
system (continued)
17
Buddy Systems (continued)
Figure 12-4 Reserving three blocks of memory
using the binary buddy
system (continued)
18
Buddy Systems (continued)
Figure 12-5 (a) Returning a block to the pool of
blocks, (b) resulting in coalescing one
block with its buddy
19
Buddy Systems (continued)
Figure 12-5 (c) Returning another block leads to
two coalescings (continued)
20
Buddy Systems (continued)
    avail[i] = -1 for i = 0, . . . , m-1
    avail[m] = first address in memory
    reserveFib(reqSize)
        availSize = the position of the first Fibonacci
            number greater than reqSize for which
            avail[availSize] > -1

21
Buddy Systems (continued)
Figure 12-6 (a) Splitting a block of size Fib(k)
into two buddies using the buddy-bit
and the memory-bit
22
Buddy Systems (continued)
Figure 12-6 (b) Coalescing two buddies utilizing
information stored in buddy- and
memory-bits
23
Buddy Systems (continued)
  • A weighted buddy system decreases the amount
    of internal fragmentation by allowing more block
    sizes than the binary system
  • A buddy system that takes a middle course between
    the binary system and the weighted system is the
    dual buddy system

24
Garbage Collection
  • A garbage collector is automatically invoked to
    collect unused memory cells when the program is
    idle or when memory resources are exhausted
  • References to all linked structures currently
    utilized by the program are stored in a root set,
    which contains all root pointers

25
Garbage Collection (continued)
  • There are two phases of garbage collection:
  • The marking phase, which identifies all currently
    used cells
  • The reclamation phase, when all unmarked cells
    are returned to the memory pool; this phase can
    also include heap compaction

26
Mark-and-Sweep
  • Memory cells currently in use are marked by
    traversing each linked structure
  • Then the memory is swept to glean unused
    (garbage) cells and put them together in a
    memory pool

    marking(node)
        if node is not marked
            mark node
            if node is not an atom
                marking(head(node))
                marking(tail(node))
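
A direct Java rendering of the recursive marking routine above; the Cell
class and its field names are assumptions used only for illustration.

    // Recursive marking sketch corresponding to the pseudocode above (illustrative).
    class Cell {
        boolean marked;
        boolean atom;            // true if the cell holds data rather than references
        Cell head, tail;
    }

    class Marker {
        void marking(Cell node) {
            if (node == null || node.marked) return;
            node.marked = true;
            if (!node.atom) {    // only non-atoms carry head and tail references
                marking(node.head);
                marking(node.tail);
            }
        }
    }

Note that this recursive form consumes run-time stack space proportional
to the depth of the structures; the Schorr and Waite algorithm illustrated
in Figure 12-7 avoids that cost by temporarily reversing links during the
traversal.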

27
Mark-and-Sweep (continued)
Figure 12-7 An example of execution of the Schorr
and Waite algorithm for marking used memory
cells
28-32
Mark-and-Sweep (continued)
Figure 12-7 An example of execution of the Schorr
and Waite algorithm for marking used memory
cells (continued on slides 28 through 32)
33
Space Reclamation
    sweep()
        for each location from the last to the first
            if mark(location) is 0
                insert location in front of availList
            else set mark(location) to 0
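
The sweep can be sketched in Java over an array-based heap in which
availList is threaded through the free locations themselves; this
representation is an assumption made for illustration.

    // Sweep-phase sketch over an array-based heap (illustrative representation).
    class Sweeper {
        boolean[] mark;          // one mark bit per heap location
        int[] next;              // per-location link used to thread the availability list
        int availList = -1;      // head of the list of free locations (-1 means empty)

        Sweeper(int heapSize) {
            mark = new boolean[heapSize];
            next = new int[heapSize];
        }

        void sweep() {
            // Scan from the last location down to the first, as in the pseudocode above.
            for (int location = mark.length - 1; location >= 0; location--) {
                if (!mark[location]) {           // unmarked: return the location to the pool
                    next[location] = availList;
                    availList = location;
                } else {
                    mark[location] = false;      // marked: clear the bit for the next collection
                }
            }
        }
    }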

34
Compaction
Figure 12-8 An example of heap compaction
35
Copying Methods
  • The stop-and-copy algorithm divides the heap into
    two semispaces, only one of which is used for
    allocating memory at any given time
  • Lists can be copied using a breadth-first
    traversal, which allows the algorithm to combine
    two tasks: copying lists and updating references
  • This algorithm requires no marking phase and no
    stack
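
The breadth-first copying can be sketched with forwarding references in
the style of Cheney's algorithm; the classes below, the explicit queue,
and the array of root references are assumptions made to keep the example
self-contained.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Stop-and-copy sketch using a breadth-first traversal (illustrative only).
    class CopyCell {
        boolean atom;
        CopyCell head, tail;
        CopyCell forward;        // where this cell was copied to, once it has been copied
    }

    class StopAndCopy {
        final Deque<CopyCell> scanQueue = new ArrayDeque<>();  // stands in for the scan pointer

        // Copies every cell reachable from the roots and updates the root references in place.
        void collect(CopyCell[] roots) {
            for (int i = 0; i < roots.length; i++) roots[i] = copy(roots[i]);
            while (!scanQueue.isEmpty()) {       // breadth-first scan of the copied cells
                CopyCell c = scanQueue.poll();
                if (!c.atom) {
                    c.head = copy(c.head);       // copying and reference updating are combined
                    c.tail = copy(c.tail);
                }
            }
        }

        private CopyCell copy(CopyCell from) {
            if (from == null) return null;
            if (from.forward != null) return from.forward;     // already copied: reuse the new address
            CopyCell to = new CopyCell();        // stands in for allocation in the other semispace
            to.atom = from.atom;
            to.head = from.head;                 // still refer to old cells; fixed during the scan
            to.tail = from.tail;
            from.forward = to;
            scanQueue.add(to);
            return to;
        }
    }

In the actual algorithm the queue is implicit: copied cells are laid out
contiguously in the second semispace and a scan pointer simply walks over
them, which is why no marking phase and no explicit stack are needed.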

36
Copying Methods (continued)
Figure 12-9 (a) A situation in the heap before
copying the contents of
cells in use from semispace1 to semispace2
37
Copying Methods (continued)
Figure 12-9 (b) The situation right after
copying; all used cells are
packed contiguously (continued)
38
Incremental Garbage Collection
  • Incremental garbage collectors, whose execution
    is interleaved with the execution of the program,
    are desirable when a fast response from the
    program is required
  • After the collector partially processes some
    lists, the program can change, or mutate, those
    lists; for this reason, the program is called the
    mutator
  • The Baker algorithm uses two semispaces, called
    fromspace and tospace, which are both active to
    ensure proper cooperation between the mutator and
    the collector

39
Incremental Garbage Collection (continued)
Figure 12-10 A situation in memory (a) before and
(b) after allocating a cell
with head and tail references referring to cells
P and Q in tospace
according to the Baker algorithm
40
Incremental Garbage Collection (continued)
Figure 12-10 A situation in memory (a) before and
(b) after allocating a cell
with head and tail references referring to cells
P and Q in tospace
according to the Baker algorithm (continued)
41
Incremental Garbage Collection (continued)
  • Each access by the mutator is preceded by a read
    barrier, which prevents the mutator from using
    references to cells in fromspace (a sketch follows
    this list)
  • The generational garbage collection technique
    divides all allocated cells into at least two
    generations and focuses its attention on the
    youngest generation, which generates most of the
    garbage
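
A schematic read barrier in the spirit of the Baker algorithm: every
reference the mutator reads is checked, and a cell still in fromspace is
copied (forwarded) before the mutator may use it. The cell layout and the
forwarding field below are assumptions, not the book's code.

    // Read-barrier sketch in the spirit of the Baker algorithm (illustrative).
    class BakerCell {
        boolean inFromspace;     // true while the cell still lives in fromspace
        BakerCell forward;       // forwarding reference set once the cell has been copied
        BakerCell head, tail;
    }

    class ReadBarrier {
        // Every reference handed to the mutator passes through this barrier, so the
        // mutator never operates on a reference into fromspace.
        BakerCell read(BakerCell ref) {
            if (ref == null || !ref.inFromspace) return ref;        // tospace references pass through
            if (ref.forward == null) ref.forward = evacuate(ref);   // copy into tospace on first use
            return ref.forward;
        }

        private BakerCell evacuate(BakerCell from) {
            BakerCell to = new BakerCell();      // the new object plays the role of the tospace copy
            to.head = from.head;                 // these references are forwarded when read later
            to.tail = from.tail;
            return to;
        }
    }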

42
Incremental Garbage Collection (continued)
Figure 12-11 Changes performed by the Baker
algorithm when addresses
P and Q refer to cells in fromspace, P to an
already copied cell, Q to
a cell still in fromspace
43
Incremental Garbage Collection (continued)
Figure 12-11 Changes performed by the Baker
algorithm when addresses
P and Q refer to cells in fromspace, P to an
already copied cell, Q to
a cell still in fromspace (continued)
44
Incremental Garbage Collection (continued)
Figure 12-12 A situation in three regions (a)
before and (b) after copying
reachable cells from region ri to region ri in
the Lieberman-Hewitt
technique of generational garbage collection
45
Incremental Garbage Collection (continued)
Figure 12-12 A situation in three regions (a)
before and (b) after copying
reachable cells from region ri to region ri in
the Lieberman-Hewitt
technique of generational garbage collection
(continued)
46
Noncopying Methods
    createRootPtr(p, q, r)    // Lisp's cons
        if collector is in the marking phase
            mark up to k1 cells
        else if collector is in the sweeping phase
            sweep up to k2 cells
        else if the number of cells on availList is low
            push all root pointers onto the collector's stack st
        p = first cell on availList
        head(p) = q
        tail(p) = r
        mark p if it is in the unswept portion of the heap

47
Noncopying Methods (continued)
Figure 12-13 An inconsistency that results if, in
Yuasa's noncopying
incremental garbage collector, a stack is not
used to record cells
possibly unprocessed during the marking phase
48
Noncopying Methods (continued)
Figure 12-13 An inconsistency that results if, in
Yuasa's noncopying
incremental garbage collector, a stack is not
used to record cells
possibly unprocessed during the marking phase
(continued)
49
Noncopying Methods (continued)
Figure 12-14 Memory changes during the sweeping
phase using Yuasa's method
50
Noncopying Methods (continued)
Figure 12-14 Memory changes during the sweeping
phase using Yuasa's
method (continued)
51
Case Study: An In-Place Garbage Collector
    roots = 1 5 3
    (0 -1 2 false false 0 0)  (1 5 4 false false 1 4)
    (2 0 -1 false false 2 2)  (3 4 -1 true false 130)
    (4 1 3 true false 129)    (5 -1 1 false false 5 1)
    freeCells = (0 0 0) (2 2 2)
    nonFreeCells = (5 5 1) (1 1 4) (4 129) (3 130)

52
Case Study: An In-Place Garbage Collector (continued)
Figure 12-15 An example of a situation on the heap
53
Case Study: An In-Place Garbage Collector (continued)
Figure 12-15 An example of a situation on the
heap (continued)
54
Case Study: An In-Place Garbage Collector (continued)
Figure 12-16 Implementation of an in-place
garbage collector
55-69
Case Study: An In-Place Garbage Collector (continued)
Figure 12-16 Implementation of an in-place
garbage collector (continued on slides 55 through 69)
70
Summary
  • The heap is the region of main memory from which
    portions of memory are dynamically allocated upon
    request of a program
  • External fragmentation amounts to the presence of
    wasted space between allocated segments of memory
  • Internal fragmentation amounts to the presence of
    unused memory inside the segments

71
Summary (continued)
  • In the sequential-fit methods, all available
    memory blocks are linked, and the list is
    searched to find a block whose size is larger
    than or the same as the requested size
  • An adaptive exact-fit technique dynamically
    creates and adjusts storage block lists that fit
    the requests exactly
  • Nonsequential memory management methods, or buddy
    systems, do not assign memory in sequential slices;
    instead, memory is divided into two buddies that
    are merged whenever possible

72
Summary (continued)
  • A garbage collector is automatically invoked to
    collect unused memory cells when the program is
    idle or when memory resources are exhausted
  • The stop-and-copy algorithm divides the heap into
    two semispaces, only one of which is used for
    allocating memory at any given time
  • Incremental garbage collectors, whose execution
    is interleaved with the execution of the program,
    are desirable when a fast response from the
    program is required