Midterm 3 Revision 1
1
Midterm 3 Revision 1
Lecture 19
  • Prof. Sin-Min Lee
  • Department of Computer Science

2
Example of combinational and sequential logic
  • Combinational
  • input A, B
  • wait for clock edge
  • observe C
  • wait for another clock edge
  • observe C again: it will stay the same
  • Sequential
  • input A, B
  • wait for clock edge
  • observe C
  • wait for another clock edge
  • observe C again: it may be different

[Circuit diagram: a logic block with inputs A and B, output C, and a Clock input.]
3
Basically
  • Combinational
  • No internal state (or memory or history or
    whatever you want to call it)
  • Output depends only on input
  • Sequential
  • Output depends on internal state
  • Probably not going to be on this midterm since
    formal lecture on it started last Thursday.
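To make the distinction concrete, here is a tiny Python analogy (my own sketch, not from the lecture): a combinational circuit behaves like a pure function of its current inputs, while a sequential circuit behaves like an object whose output also depends on stored state.

def combinational(a, b):
    """Output depends only on the current inputs (here C = A AND B)."""
    return a & b

class Sequential:
    """Output also depends on internal state updated at each clock edge."""
    def __init__(self):
        self.state = 0
    def clock(self, a, b):
        self.state ^= (a & b)       # state toggles whenever A AND B is 1
        return self.state           # same inputs can yield different outputs

print(combinational(1, 1), combinational(1, 1))   # 1 1: always the same
s = Sequential()
print(s.clock(1, 1), s.clock(1, 1))               # 1 0: may differ each cycle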

10
Midterm Gate Problem
[Circuit diagram: input I, a D flip-flop (output Q¹), a T flip-flop (output Q), a Clock line, and output Y.]
11
Start
[Table: initial values of I, Q¹, Q, and Y at the start.]
12
Clock Cycle 1
[Table: values of I, Q¹, Q, and Y in this cycle. Note: the Q outputs depend on the state of the inputs present on the previous cycle.]
13
Clock Cycle 2
[Table: values of I, Q¹, Q, and Y in this cycle. Note: the Q outputs depend on the state of the inputs present on the previous cycle.]
14
Clock Cycle 3
[Table: values of I, Q¹, Q, and Y in this cycle. Note: the Q outputs depend on the state of the inputs present on the previous cycle.]
15
Clock Cycle 4
[Table: values of I, Q¹, Q, and Y in this cycle. Note: the Q outputs depend on the state of the inputs present on the previous cycle.]
16
Clock Cycle 5
[Table: values of I, Q¹, Q, and Y in this cycle. Note: the Q outputs depend on the state of the inputs present on the previous cycle.]
17
Clock Cycle 6
[Table: values of I, Q¹, Q, and Y in this cycle. Note: the Q outputs depend on the state of the inputs present on the previous cycle.]
18
Clock Cycle 7
[Table: values of I, Q¹, Q, and Y in this cycle. Note: the Q outputs depend on the state of the inputs present on the previous cycle.]
19
Clock Cycle 8
[Table: values of I, Q¹, Q, and Y in this cycle. Note: the Q outputs depend on the state of the inputs present on the previous cycle.]
20
Some commonly used components
  • Decoders: n inputs, 2^n outputs.
  • The inputs select which output is turned on; at
    any time exactly one output is on.
  • Multiplexors: 2^n inputs, n selection bits, 1
    output.
  • The selection bits determine which input becomes
    the output.
  • Adders: 2n inputs, 2n outputs.
  • Used for computer arithmetic.

21
Multiplexer
  • Selects binary information from one of many
    input lines and directs it to a single output
    line.
  • Also known as a selector circuit.
  • Selection is controlled by a particular set of
    input lines whose number depends on the number of
    data input lines.
  • For a 2^n-to-1 multiplexer, there are 2^n data
    input lines and n selection lines whose bit
    combination determines which input is selected.

22
MUX
[Block diagram: a MUX with 2^n data inputs, an n-bit input select, an enable line, and one data output.]
23
Remember the 2-to-4 Decoder?
[Diagram: a 2-to-4 decoder with select inputs S1, S0 and outputs Sel(3)..Sel(0). The outputs are mutually exclusive: only one output is asserted at any time.]
24
4-to-1 MUX
[Diagram: a 4-to-1 MUX built from a 2-to-4 decoder. The dataflow section takes data inputs D3..D0 and produces Dout; the control section decodes S1, S0 into Sel(3..0).]
25
4-to-1 MUX (Gate level)
[Gate-level diagram. Control section: three of these internal signals will always be 0; the other depends on the data value selected.]
26
Multiplexer (cont.)
  • Until now, we have examined single-bit data
    selected by a MUX. What if we want to select
    m-bit data/words? Combine MUX blocks in
    parallel with common select and enable signals.
  • Example: Construct a logic circuit that selects
    between two sets of 4-bit inputs (see the next
    slide for the solution).

27
Example: Quad 2-to-1 MUX
  • Uses four 2-to-1 MUXs with a common select (S) and
    enable (E).
  • The select line chooses between the Ai and Bi
    inputs; the selected four-wire digital signal is
    sent to the Yi outputs.
  • The enable line turns the MUX on and off (E = 1 is
    on).

28
Implementing Boolean functions with Multiplexers
  • Any Boolean function of n variables can be
    implemented using a 2^(n-1)-to-1 multiplexer. A MUX
    is basically a decoder with its outputs ORed
    together, so this isn't surprising.
  • The SELECT signals generate the minterms of the
    function.
  • The data inputs identify which minterms are to be
    combined with an OR.

29
Example
  • F(X,Y,Z) = X'Y'Z + X'YZ' + XYZ' + XYZ = Σm(1,2,6,7)
  • There are n = 3 inputs, thus we need a 2^2-to-1
    (4-to-1) MUX
  • The first n-1 (= 2) variables serve as the selection
    lines

30
Efficient Method for implementing Boolean
functions
  • For an n-variable function (e.g., f(A,B,C,D))
  • Need a 2n-1 line MUX with n-1 select lines.
  • Enumerate function as a truth table with
    consistent ordering of variables (e.g., A,B,C,D)
  • Attach the most significant n-1 variables to the
    n-1 select lines (e.g., A,B,C)
  • Examine pairs of adjacent rows (only the least
    significant variable differs, e.g., D0 and D1).
  • Determine whether the function output for the
    (A,B,C,0) and (A,B,C,1) combination is (0,0),
    (0,1), (1,0), or (1,1).
  • Attach 0, D, D, or 1 to the data input
    corresponding to (A,B,C) respectively.

31
Another Example
  • Consider F(A,B,C) = Σm(1,3,5,6). We can implement
    this function using a 4-to-1 MUX as follows.
  • The index is ABC. Apply A and B to the S1 and S0
    selection inputs of the MUX (A is most significant,
    S1 is most significant).
  • Enumerate the function in a truth table.

32
MUX Example (cont.)
When A = 0, B = 0: F = C
When A = 0, B = 1: F = C
When A = 1, B = 0: F = C
When A = B = 1: F = C'
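As a sanity check (my own sketch, not from the slides), the following Python snippet evaluates a 4-to-1 MUX with A and B on the select lines and data inputs C, C, C, C', and confirms it reproduces F = Σm(1,3,5,6) for all eight input combinations.

def mux4(d, s1, s0):
    """4-to-1 multiplexer: d is (D0, D1, D2, D3), s1/s0 are the select bits."""
    return d[(s1 << 1) | s0]

def f_minterms(a, b, c):
    """Reference definition of F as the sum of minterms 1, 3, 5, 6."""
    index = (a << 2) | (b << 1) | c
    return 1 if index in (1, 3, 5, 6) else 0

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            data = (c, c, c, 1 - c)       # D0=C, D1=C, D2=C, D3=C'
            assert mux4(data, a, b) == f_minterms(a, b, c)
print("MUX implementation matches Σm(1,3,5,6) for all 8 input combinations")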
33
MUX implementation of F(A,B,C) = Σm(1,3,5,6)
[Diagram: a 4-to-1 MUX with A and B on the select lines and data inputs D0 = C, D1 = C, D2 = C, D3 = C'; the output is F.]
34
1-Input Decoder
[Block diagram: input I, outputs O0 and O1.]
Treat I as a 1-bit integer i. The ith output will
be turned on (Oi = 1), the other one off.
35
1-Input Decoder
[Gate-level diagram: O0 = !I, O1 = I.]
36
2-Input Decoder
[Block diagram: inputs I0 and I1, outputs O0 through O3.]
Treat I0 I1 as a 2-bit integer i. The ith output
will be turned on (Oi = 1), all the others off.
37
2-Input Decoder
O0 = !I0 · !I1
O1 = !I0 · I1
O2 = I0 · !I1
O3 = I0 · I1
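To generalize the four equations above, here is a minimal Python sketch (my own illustration) of an n-input decoder: it interprets the input bits as an integer and asserts exactly one of the 2^n outputs.

def decode(inputs):
    """n-input decoder: 'inputs' is a tuple of bits, most significant first.
    Returns the 2**n outputs; exactly one of them is 1."""
    n = len(inputs)
    index = 0
    for bit in inputs:            # interpret the input bits as an integer
        index = (index << 1) | bit
    return tuple(1 if i == index else 0 for i in range(2 ** n))

# 2-input decoder: matches O0..O3 above (I0 is the most significant bit)
assert decode((0, 0)) == (1, 0, 0, 0)   # O0 = !I0 · !I1
assert decode((0, 1)) == (0, 1, 0, 0)   # O1 = !I0 · I1
assert decode((1, 0)) == (0, 0, 1, 0)   # O2 = I0 · !I1
assert decode((1, 1)) == (0, 0, 0, 1)   # O3 = I0 · I1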
38
3-Input Decoder
[Block diagram: inputs I0, I1, I2; outputs O0 through O7.]
39
3-Decoder Partial Implementation
[Partial gate-level diagram: inputs I2, I1, I0 driving outputs O0, O1, and so on.]
40
2-Input Multiplexor
Inputs: I0 and I1. Selector: S. Output: O.
If S is 0, O = I0. If S is 1, O = I1.
[Block diagram: MUX with inputs I0 and I1, select S, output O.]
41
2-Mux Logic Design
[Gate-level diagram: O = (I0 · !S) + (I1 · S).]
42
4-Input Multiplexor
Inputs: I0, I1, I2, I3. Selectors: S0, S1. Output: O.
[Block diagram: MUX with inputs I0 through I3, selects S1 S0, output O.]
43
One Possible 4-Mux
[Diagram: a 4-input MUX built from a 2-input decoder. S1 and S0 feed the decoder; each decoder output gates one of I0 through I3, and the gated signals are ORed to form O.]
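The same structure in a minimal Python sketch (my own illustration): decode the two select bits, AND each decoder output with its data input, and OR the results together.

def mux4_from_decoder(data, s1, s0):
    """4-to-1 MUX built as the diagram suggests: decode the select bits,
    AND each decoder output with its data input, then OR the results."""
    index = (s1 << 1) | s0
    sel = [1 if i == index else 0 for i in range(4)]    # 2-to-4 decoder
    return max(d & s for d, s in zip(data, sel))        # OR of the gated inputs

# Behaves like a plain 4-to-1 MUX:
assert mux4_from_decoder((0, 1, 0, 1), s1=0, s0=1) == 1   # selects I1
assert mux4_from_decoder((0, 1, 0, 1), s1=1, s0=0) == 0   # selects I2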
44
Adder
  • We want to build a box that can add two 32-bit
    numbers.
  • Assume 2's complement representation
  • We can start by building a 1-bit adder.

45
Addition
  • We need to build a 1-bit adder
  • It computes the binary addition of 2 bits.
  • We already know that the result is 2 bits.

This is addition!
[Truth table with columns A, B, O0, O1.]
46
One Implementation
[Gate-level diagram: O0 = A · B; O1 = (!A · B) + (A · !B).]
47
Binary addition and our adder
[Worked example of binary addition with carries: 01001 + 01101 = 10110.]
  • What we really want is something that can be used
    to implement the binary addition algorithm.
  • O0 is the carry
  • O1 is the sum

48
What about the second column?
[Worked example: 01001 + 01101 = 10110, now looking at the second column.]
  • We are now adding 3 bits
  • the new bit is the carry from the first column.
  • The output is still 2 bits: a sum and a carry.

49
Truth Table for Addition
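A small Python sketch (my own, not from the slide) that prints the truth table for adding three bits: A, B, and the carry from the previous column.

# Full-adder truth table: A + B + Cin -> (carry out, sum)
print(" A B Cin | Cout Sum")
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            total = a + b + cin
            cout, s = divmod(total, 2)   # total = 2*Cout + Sum
            print(f" {a} {b}  {cin}  |  {cout}    {s}")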
50
Memory Management
  • Memory Partitioning
  • Fixed
  • Variable
  • Best fit, first-fit, largest-fit algorithms
  • Memory fragmentation
  • Overlays
  • Programs are divided into small logical pieces
    for execution
  • Pieces are loaded into memory as needed
  • Memory Relocation
  • Addresses have to be adjusted unless relative
    addressing is used

51
Memory Overlays
52
Virtual Memory
  • Virtual memory increases the apparent amount of
    memory by using far less expensive hard disk
    space
  • Provides for process separation
  • Demand paging
  • Pages brought into memory as needed
  • Page table
  • Keeps track of what is in memory and what is
    still out on hard disk

53
Frames and Pages
54
Frames and Pages
Binary Paging
55
Dynamic Address Translation
56
Page Table
[Diagram: a page table maps virtual memory pages to page frames; pages not in main memory cause a page fault when accessed and are kept in swap space on disk.]
57
Steps in Handling a Page Fault
58
Memory Allocation
  • Compile for overlays
  • Compile for fixed Partitions
  • Separate queue per partition
  • Single queue
  • Relocation and variable partitions
  • Dynamic contiguous allocation (bit maps versus
    linked lists)
  • Fragmentation issues
  • Swapping
  • Paging

59
Overlays
[Diagram: main memory (0K to 12K) holds the overlay manager and the main program, with an overlay area from 7K to 12K; Overlays 1, 2, and 3 reside in secondary storage and are loaded into the overlay area one at a time.]
60
Multiprogramming with Fixed Partitions
  • Divide memory into n (possibly unequal)
    partitions.
  • Problem: fragmentation
[Diagram: memory divided into fixed partitions at 0k, 4k, 16k, 64k, and 128k, with free space.]
61
Fixed Partitions
[Diagram: the same fixed partitions (0k, 4k, 16k, 64k, 128k). Legend: free space, and internal fragmentation (space within a partition that cannot be reallocated).]
62
Fixed Partition Allocation Implementation Issues
  • Separate input queue for each partition
  • Requires sorting the incoming jobs and putting
    them into separate queues
  • Inefficient utilization of memory
  • when the queue for a large partition is empty but
    the queue for a small partition is full. Small
    jobs have to wait to get into memory even though
    plenty of memory is free.
  • One single input queue for all partitions.
  • Allocate a partition where the job fits in.
  • Best Fit
  • Worst Fit
  • First Fit

63
Relocation
  • Correct starting address when a program starts in
    memory
  • Different jobs will run at different addresses
  • When a program is linked, the linker must know at
    what address the program will begin in memory.
  • Logical addresses, virtual addresses
  • Logical address space: range (0 to max)
  • Physical addresses, physical address space
  • range (R+0 to R+max) for base value R.
  • User program never sees the real physical
    addresses
  • Memory-management unit (MMU)
  • map virtual to physical addresses.
  • Relocation register
  • Mapping requires hardware (MMU) with the base
    register

64
Relocation Register
[Diagram: the CPU issues a logical address LA for an instruction; the base register BA is added to it, giving the physical (memory) address MA = BA + LA.]
65
Question 1 - Protection
  • Problem
  • How to prevent a malicious process from writing or
    jumping into other users' or OS partitions
  • Solution
  • Base and bounds registers

66
Base and Bounds Registers
[Diagram: the CPU issues a logical address LA. It is compared against the bounds register; if LA is within bounds, the base address BA is added to give the memory (physical) address MA = BA + LA, otherwise a fault is raised.]
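A minimal Python sketch of this check-then-relocate translation (my own illustration; the register values in the example are made up):

def translate(logical_addr, base, bounds):
    """Base-and-bounds translation: fault if the logical address is out of
    range, otherwise relocate it by adding the base register."""
    if not (0 <= logical_addr < bounds):
        raise MemoryError(f"protection fault: address {logical_addr} out of bounds")
    return base + logical_addr          # MA = BA + LA

print(translate(0x0100, base=0x4000, bounds=0x1000))   # 16640, i.e. 0x4100
try:
    translate(0x2000, base=0x4000, bounds=0x1000)
except MemoryError as e:
    print(e)                                            # protection fault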
67
Contiguous Allocation and Variable Partitions
Bit Maps versus Linked Lists
  • Part of memory with 5 processes, 3 holes
  • tick marks show allocation units
  • shaded regions are free
  • Corresponding bit map

68
More on Memory Management with Linked Lists
  • Four neighbor combinations for the terminating
    process X

69
Contiguous Variable Partition Allocation schemes
  • Bitmap and linked list
  • Which one occupies more space?
  • It depends on the individual memory allocation
    scenario; in most cases the bitmap occupies
    more space.
  • Which one is faster at reclaiming freed space?
  • On average, the bitmap is faster because it just
    needs to set the corresponding bits.
  • Which one is faster at finding a free hole?
  • On average, a linked list is faster because we can
    link all free holes together.

70
Storage Placement Strategies
  • Best fit
  • Use the hole whose size is equal to the need or,
    if none is equal, the hole that is larger but
    closest in size.
  • Rationale?
  • First fit
  • Use the first available hole whose size is
    sufficient to meet the need
  • Rationale?
  • Worst fit
  • Use the largest available hole
  • Rationale?
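A minimal Python sketch (my own illustration) of the three hole-selection strategies over a list of free-hole sizes:

def best_fit(holes, need):
    """Smallest hole that is still large enough."""
    candidates = [h for h in holes if h >= need]
    return min(candidates) if candidates else None

def first_fit(holes, need):
    """First hole (in address order) that is large enough."""
    return next((h for h in holes if h >= need), None)

def worst_fit(holes, need):
    """Largest hole, provided it is large enough."""
    biggest = max(holes, default=0)
    return biggest if biggest >= need else None

holes = [12, 5, 30, 8]          # free hole sizes, in address order
print(best_fit(holes, 7))       # 8
print(first_fit(holes, 7))      # 12
print(worst_fit(holes, 7))      # 30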

71
Storage Placement Strategies
  • Every placement strategy has its own problem
  • Best fit
  • Creates small holes that can't be used
  • Worst fit
  • Gets rid of large holes, making it difficult to
    run large programs
  • First fit
  • Creates average-size holes

72
Locality of Reference
  • Most memory references are confined to a small
    region
  • A well-written program runs in a small loop,
    procedure, or function
  • Data is likely in an array, and variables are
    stored together
  • Working set
  • The number of pages sufficient to run the program
    normally, i.e., to satisfy the locality of a
    particular program

73
Page Replacement Algorithms
  • Page fault: the page is not in memory and must be
    loaded from disk
  • Algorithms to manage swapping
  • First-In, First-Out (FIFO): Belady's Anomaly
  • Least Recently Used (LRU)
  • Least Frequently Used (LFU)
  • Not Used Recently (NUR)
  • Referenced bit, Modified (dirty) bit
  • Second-Chance replacement algorithms
  • Thrashing
  • too many page faults degrade system performance

74
Virtual Memory Tradeoffs
  • Disadvantages
  • SWAP file takes up space on disk
  • Paging takes up resources of the CPU
  • Advantages
  • Programs share memory space
  • More programs run at the same time
  • Programs run even if they cannot fit into memory
    all at once
  • Process separation

75
Virtual Memory vs. Caching
  • Cache speeds up memory access
  • Virtual memory increases amount of perceived
    storage
  • Independence from the configuration and capacity
    of the memory system
  • Low cost per bit compared to main memory

76
How Bad Is Fragmentation?
  • Statistical arguments - random sizes
  • First-fit
  • Given N allocated blocks
  • 0.5 · N blocks will be lost because of
    fragmentation
  • Known as the 50% RULE

77
Solve Fragmentation w. Compaction
[Animation: successive snapshots of memory holding the monitor and Jobs 3, 5, 6, 7, and 8; compaction moves the jobs together so the free space is collected into a single contiguous block.]
78
Storage Management Problems
  • Fixed partitions suffer from
  • internal fragmentation
  • Variable partitions suffer from
  • external fragmentation
  • Compaction suffers from
  • overhead

79
Question
  • What if there are more processes than can fit
    into memory?

80
Swapping
[Animation across slides 80-86: a single user partition next to the resident monitor; User 1 and User 2 are alternately swapped between the user partition in memory and the disk.]
87
Paging Request
88
Paging
89
Paging
90
Paging
91
Page Mapping Hardware
[Diagram: the virtual address (P, D) is split into page number P and displacement D; the page table maps P → F (page frame), and the physical address is (F, D).]
92
Page Mapping Hardware
[Worked example: page size = 1000, number of possible virtual pages = 1000, number of page frames = 8. Virtual address 004006 splits into page P = 004 and displacement D = 006; the page table maps page 4 → frame 5, so the physical address is 005006.]
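A minimal Python sketch of that translation (my own illustration; the page table below holds only the single mapping from the example, and the decimal page size follows the slide):

PAGE_SIZE = 1000                 # decimal page size, as in the example
page_table = {4: 5}              # page 4 maps to frame 5

def translate(virtual_addr):
    """Split the virtual address into (page, displacement) and replace
    the page number with its frame number."""
    page, disp = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + disp

print(translate(4006))           # -> 5006, i.e. frame 005, displacement 006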
93
Page Fault
  • Access a virtual page that is not mapped into any
    physical page
  • A fault is triggered by the hardware
  • Page fault handler (in the OS's VM subsystem), whose
    steps are sketched in code below:
  • Find whether any free physical page is available
  • If not, evict some resident page to disk (swap
    space)
  • Allocate a free physical page
  • Load the faulted virtual page into the prepared
    physical page
  • Modify the page table
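A minimal Python sketch of those handler steps (my own illustration; the victim-selection policy and the disk I/O are stubbed out):

def write_to_swap(page):            # stub: write an evicted page to swap space
    pass

def load_from_disk(page, frame):    # stub: read the faulted page into a frame
    pass

def handle_page_fault(page, page_table, free_frames, resident_pages):
    """Follow the slide's steps: find (or free up) a physical page, load the
    faulted virtual page into it, and modify the page table."""
    if not free_frames:                          # no free physical page
        victim = resident_pages.pop(0)           # pick some resident page to evict
        write_to_swap(victim)                    # write it to the swapping space
        free_frames.append(page_table.pop(victim))
    frame = free_frames.pop()                    # allocate a free physical page
    load_from_disk(page, frame)                  # load the faulted virtual page
    page_table[page] = frame                     # modify the page table
    resident_pages.append(page)
    return frame

pt, free, resident = {}, [0, 1], []
print(handle_page_fault(7, pt, free, resident))  # -> 1; pt becomes {7: 1}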

94
Paging Issues
  • Page size is 2^n
  • usually 512 bytes, 1 KB, 2 KB, 4 KB, or 8 KB
  • E.g., a 32-bit VM address may have 2^20 (1 M) pages
    with 4 KB (2^12 bytes) per page
  • Page table
  • 2^20 page entries take 2^22 bytes (4 MB)
  • page frames must map into real memory
  • Page table base register must be changed for a
    context switch
  • No external fragmentation; internal fragmentation
    on the last page only
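The arithmetic in that example, checked in a few lines of Python (my own sketch; the 4-byte size per page table entry is an assumption consistent with the 4 MB figure):

ADDRESS_BITS = 32
PAGE_SIZE = 4 * 1024             # 4 KB = 2**12 bytes per page
PTE_SIZE = 4                     # assumed bytes per page table entry

num_pages = 2 ** ADDRESS_BITS // PAGE_SIZE       # 2**20 = 1,048,576 pages
page_table_bytes = num_pages * PTE_SIZE          # 2**22 bytes = 4 MB

print(num_pages)                 # 1048576
print(page_table_bytes // 2**20) # 4  (MB)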

95
Virtual-to-Physical Lookups
  • Programs only know virtual addresses
  • The page table can be extremely large
  • Each virtual address must be translated
  • May involve walking hierarchical page table
  • Page table stored in memory
  • So, each program memory access requires several
    actual memory accesses
  • Solution: cache the active part of the page table
  • TLB is an associative memory

96
Translation Lookaside Buffer (TLB)
[Diagram: the virtual address is looked up in the TLB; a hit yields the physical address directly, a miss falls through to the page table.]
97
TLB Function
  • If a virtual address is presented to the MMU, the
    hardware checks the TLB by comparing all entries
    simultaneously (in parallel).
  • If the match is valid, the mapping is taken from
    the TLB without going through the page table.
  • If the match is not valid:
  • The MMU detects the miss and does a page table
    lookup.
  • It then evicts one entry from the TLB and replaces
    it with the new one, so that next time that page is
    found in the TLB (a software sketch follows below).
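A minimal dictionary-based Python sketch of that flow (my own illustration; a real TLB compares its entries in parallel in hardware, and the resident pages below are made up):

tlb = {}                                  # small cache: page -> frame
page_table = {4: 5, 7: 2}                 # made-up resident pages
TLB_CAPACITY = 16

def lookup(page):
    """TLB hit: return the cached frame. TLB miss: walk the page table,
    then install the translation in the TLB (evicting one entry if full)."""
    if page in tlb:                       # hit
        return tlb[page]
    frame = page_table[page]              # miss: page table lookup (may fault)
    if len(tlb) >= TLB_CAPACITY:
        tlb.pop(next(iter(tlb)))          # evict some entry
    tlb[page] = frame                     # next time this page hits in the TLB
    return frame

print(lookup(4))   # miss, then 5
print(lookup(4))   # hit, 5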

98
Page Mapping Hardware
[Diagram: the virtual address (P, D) goes first to an associative lookup (TLB); on a miss the page table supplies the P → F mapping; the physical address is (F, D).]
99
Effective Access Time
  • TLB lookup time: s time units
  • Memory cycle: m µs
  • TLB hit ratio: h
  • Effective access time:
  • EAT = (m + s) · h + (2m + s) · (1 - h)
  • EAT = 2m + s - m · h
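For concreteness, the formula evaluated in Python with made-up numbers (the values s = 20, m = 100, and h = 0.9 are assumptions, not from the slide):

def effective_access_time(m, s, h):
    """EAT = (m + s)*h + (2m + s)*(1 - h), which simplifies to 2m + s - m*h."""
    return (m + s) * h + (2 * m + s) * (1 - h)

m, s, h = 100.0, 20.0, 0.9        # assumed memory cycle, TLB lookup time, hit ratio
print(effective_access_time(m, s, h))        # 130.0
print(2 * m + s - m * h)                     # same value via the simplified form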

100
Paging
  • Provide user with virtual memory that is as big
    as user needs
  • Store virtual memory on disk
  • Cache parts of virtual memory being used in real
    memory
  • Load and store cached virtual memory without user
    program intervention

101
Principle of Locality
  • Program and data references within a process tend
    to cluster
  • Only a few pieces of a process will be needed
    over a short period of time
  • Possible to make intelligent guesses about which
    pieces will be needed in the future
  • This suggests that virtual memory may work
    efficiently

102
Support Needed for Virtual Memory
  • Hardware must support paging and segmentation
  • Operating system must be able to manage the
    movement of pages and/or segments between
    secondary memory and main memory

103
Paging
  • Each process has its own page table
  • Each page table entry contains the frame number
    of the corresponding page in main memory
  • A bit is needed to indicate whether the page is
    in main memory or not

104
Fetch Policy
  • Fetch Policy
  • Determines when a page should be brought into
    memory
  • Demand paging only brings pages into main memory
    when a reference is made to a location on the
    page
  • Many page faults when process first started
  • Prepaging brings in more pages than needed
  • More efficient to bring in pages that reside
    contiguously on the disk

105
Placement Policy
  • Determines where in real memory a process piece
    is to reside
  • Important in a segmentation system
  • Paging or combined paging with segmentation
    hardware performs address translation

106
Replacement Policy
  • Replacement policy
  • Which page is replaced?
  • Page removed should be the page least likely to
    be referenced in the near future
  • Most policies predict the future behavior on the
    basis of past behavior

107
Replacement Policy
  • Frame Locking
  • If frame is locked, it may not be replaced
  • Kernel of the operating system
  • Control structures
  • I/O buffers
  • Associate a lock bit with each frame

108
Basic Replacement Algorithms
  • Optimal policy
  • Selects for replacement that page for which the
    time to the next reference is the longest
  • Impossible to have perfect knowledge of future
    events

109
Basic Replacement Algorithms
  • Least Recently Used (LRU)
  • Replaces the page that has not been referenced
    for the longest time
  • By the principle of locality, this should be the
    page least likely to be referenced in the near
    future
  • Each page could be tagged with the time of last
    reference. This would require a great deal of
    overhead.

110
Basic Replacement Algorithms
  • First-in, first-out (FIFO)
  • Treats page frames allocated to a process as a
    circular buffer
  • Pages are removed in round-robin style
  • Simplest replacement policy to implement
  • Page that has been in memory the longest is
    replaced
  • These pages may be needed again very soon

111
Basic Replacement Algorithms
  • Clock Policy
  • Additional bit called a use bit
  • When a page is first loaded in memory, the use
    bit is set to 1
  • When the page is referenced, the use bit is set
    to 1
  • When it is time to replace a page, the first
    frame encountered with the use bit set to 0 is
    replaced.
  • During the search for replacement, each use bit
    set to 1 is changed to 0
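A minimal Python sketch of the clock policy as described above (my own illustration; the frames form a circular buffer, each with a use bit):

class ClockReplacer:
    """Clock page replacement: frames form a circular buffer with use bits."""
    def __init__(self, num_frames):
        self.frames = [None] * num_frames     # page stored in each frame
        self.use = [0] * num_frames           # use bit per frame
        self.hand = 0                         # clock hand position

    def reference(self, page):
        """Set the use bit on a reference; load the page on a fault."""
        if page in self.frames:
            self.use[self.frames.index(page)] = 1   # referenced: use bit = 1
            return
        while True:                                  # page fault: find a victim
            if self.use[self.hand] == 0:             # first frame with use bit 0
                self.frames[self.hand] = page        # replace it
                self.use[self.hand] = 1              # newly loaded: use bit = 1
                self.hand = (self.hand + 1) % len(self.frames)
                return
            self.use[self.hand] = 0                  # give a second chance
            self.hand = (self.hand + 1) % len(self.frames)

r = ClockReplacer(3)
for p in [1, 2, 3, 2, 4, 1]:
    r.reference(p)
print(r.frames)       # final contents of the three frames: [4, 1, 3]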
