Title: Midterm 3 Revision 1
1. Midterm 3 Revision 1
Lecture 19
- Prof. Sin-Min Lee
- Department of Computer Science
2. Example of combinational and sequential logic
- Combinational
  - Input A, B
  - Wait for clock edge
  - Observe C
  - Wait for another clock edge
  - Observe C again: it will stay the same
- Sequential
  - Input A, B
  - Wait for clock edge
  - Observe C
  - Wait for another clock edge
  - Observe C again: it may be different
[Figure: a logic block with inputs A and B, output C, and a Clock input]
3. Basically
- Combinational
  - No internal state (or memory, or history, or whatever you want to call it)
  - Output depends only on input
- Sequential
  - Output depends on internal state
  - Probably not going to be on this midterm, since the formal lecture on it started last Thursday.
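The distinction can be sketched in a few lines of Python; this is an illustrative sketch, not from the slides, and the toggle flip-flop is just one example of a stateful element.

```python
def combinational_and(a, b):
    # Combinational: output depends only on the current inputs, no memory.
    return a & b

class ToggleFlipFlop:
    """Sequential: output depends on internal state, updated per clock edge."""
    def __init__(self):
        self.q = 0  # internal state

    def clock_edge(self, t):
        # On each clock edge, toggle the state if T = 1.
        if t:
            self.q ^= 1
        return self.q

# Combinational logic: same inputs, same output, every time.
assert combinational_and(1, 1) == combinational_and(1, 1) == 1

# Sequential logic: the same input can produce different outputs.
ff = ToggleFlipFlop()
first = ff.clock_edge(1)   # state toggles 0 -> 1
second = ff.clock_edge(1)  # state toggles 1 -> 0
assert (first, second) == (1, 0)
```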
4-9. (No Transcript)
10. Midterm Gate Problem
[Figure: a circuit with input I, a D flip-flop (output Q¹), a T flip-flop (output Q), output Y, and a common Clock]
11. Start
[Figure: the circuit annotated with the initial values of I, Q¹, Q, and Y]
12. Clock Cycle 1
[Figure: the circuit annotated with the values of I, Q¹, Q, and Y after clock cycle 1]
Note: the Q outputs are dependent on the state of the inputs present on the previous cycle.
13. Clock Cycle 2
[Figure: signal values after clock cycle 2]
14. Clock Cycle 3
[Figure: signal values after clock cycle 3]
15. Clock Cycle 4
[Figure: signal values after clock cycle 4]
16. Clock Cycle 5
[Figure: signal values after clock cycle 5]
17. Clock Cycle 6
[Figure: signal values after clock cycle 6]
18. Clock Cycle 7
[Figure: signal values after clock cycle 7]
19. Clock Cycle 8
[Figure: signal values after clock cycle 8]
20. Some commonly used components
- Decoders: n inputs, 2^n outputs.
  - The inputs select which output is turned on. At any time exactly one output is on.
- Multiplexors: 2^n inputs, n selection bits, 1 output.
  - The selection bits determine which input will become the output.
- Adders: 2n inputs (two n-bit numbers), n+1 outputs (an n-bit sum plus a carry).
  - Computer arithmetic.
21. Multiplexer
- Selects binary information from one of many input lines and directs it to a single output line.
- Also known as a selector circuit.
- Selection is controlled by a particular set of input lines whose bit combination determines which of the data input lines is selected.
- For a 2^n-to-1 multiplexer, there are 2^n data input lines and n selection lines whose bit combination determines which input is selected.
22. MUX
[Figure: a MUX block with 2^n data inputs, an Enable input, n input-select lines, and one data output]
23. Remember the 2-to-4 Decoder?
[Figure: a 2-to-4 decoder with select inputs S1, S0 and outputs Sel(3) through Sel(0)]
Mutually exclusive (only one output asserted at any time).
24. 4-to-1 MUX
[Figure: dataflow view; data inputs D3-D0 feed Dout, gated by a 2-to-4 decoder in the control section driven by select lines S1, S0 (Sel(3:0))]
25. 4-to-1 MUX (Gate level)
[Figure: gate-level control section]
Three of these signal inputs will always be 0; the other will depend on the data value selected.
26. Multiplexer (cont.)
- Until now, we have examined single-bit data selected by a MUX. What if we want to select m-bit data/words? Combine MUX blocks in parallel with common select and enable signals.
- Example: construct a logic circuit that selects between 2 sets of 4-bit inputs (see next slide for solution).
27. Example: Quad 2-to-1 MUX
- Uses four 2-to-1 MUXs with common select (S) and enable (E).
- The select line chooses between the Ai's and the Bi's. The selected four-wire digital signal is sent to the Yi's.
- The enable line turns the MUX on and off (E = 1 is on).
28. Implementing Boolean functions with Multiplexers
- Any Boolean function of n variables can be implemented using a 2^(n-1)-to-1 multiplexer. A MUX is basically a decoder with its outputs ORed together, hence this isn't surprising.
- The SELECT signals generate the minterms of the function.
- The data inputs identify which minterms are to be combined with an OR.
29. Example
- F(X,Y,Z) = X'Y'Z + X'YZ' + XYZ' + XYZ = Σm(1,2,6,7)
- There are n = 3 inputs, thus we need a 2^2-to-1 (4-to-1) MUX.
- The first n-1 = 2 inputs serve as the selection lines.
30. Efficient method for implementing Boolean functions
- For an n-variable function (e.g., f(A,B,C,D)):
  - Need a 2^(n-1)-to-1 MUX with n-1 select lines.
  - Enumerate the function as a truth table with a consistent ordering of variables (e.g., A,B,C,D).
  - Attach the most significant n-1 variables to the n-1 select lines (e.g., A,B,C).
  - Examine pairs of adjacent rows (only the least significant variable differs, e.g., D=0 and D=1).
  - Determine whether the function output for the (A,B,C,0) and (A,B,C,1) combinations is (0,0), (0,1), (1,0), or (1,1).
  - Attach 0, D, D', or 1 to the data input corresponding to (A,B,C), respectively.
31. Another Example
- Consider F(A,B,C) = Σm(1,3,5,6). We can implement this function using a 4-to-1 MUX as follows.
- The index is ABC. Apply A and B to the S1 and S0 selection inputs of the MUX (A is most significant, S1 is most significant).
- Enumerate the function in a truth table.
32MUX Example (cont.)
When AB0, FC
When A0, B1, FC
When A1, B0, FC
When AB1, FC
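The MUX implementation of F(A,B,C) = Σm(1,3,5,6) can be checked exhaustively with a short script; this is a sketch, not from the slides, and the function names are illustrative.

```python
MINTERMS = {1, 3, 5, 6}  # F(A,B,C) = sum of minterms (1,3,5,6)

def F(a, b, c):
    # Index ABC, with A most significant.
    return int(((a << 2) | (b << 1) | c) in MINTERMS)

def mux4(d, s1, s0):
    """4-to-1 MUX: select data input d[2*s1 + s0]."""
    return d[(s1 << 1) | s0]

# Per the method: the data inputs are C, C, C, C' for AB = 00, 01, 10, 11.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            data = [c, c, c, 1 - c]  # D0..D3
            assert mux4(data, a, b) == F(a, b, c)
```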
33. MUX implementation of F(A,B,C) = Σm(1,3,5,6)
[Figure: a 4-to-1 MUX with select lines A, B and data inputs C, C, C, C' producing F]
34. 1-input Decoder
[Figure: a decoder block with input I and outputs O0, O1]
Treat I as a 1-bit integer i. The i-th output will be turned on (Oi = 1), the other one off.
35. 1-input Decoder
[Figure: gate-level implementation with input I and outputs O0, O1]
36. 2-input Decoder
[Figure: a decoder block with inputs I0, I1 and outputs O0 through O3]
Treat I0 I1 as a 2-bit integer i. The i-th output will be turned on (Oi = 1), all the others off.
37. 2-input Decoder
[Figure: gate-level implementation with inputs I1, I0]
O0 = !I0 · !I1
O1 = !I0 · I1
O2 = I0 · !I1
O3 = I0 · I1
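The four equations above can be written out and checked directly; a minimal sketch, not from the slides.

```python
def decoder2(i0, i1):
    """2-input decoder: return (O0, O1, O2, O3); exactly one output is 1."""
    o0 = (1 - i0) & (1 - i1)   # O0 = !I0 . !I1
    o1 = (1 - i0) & i1         # O1 = !I0 . I1
    o2 = i0 & (1 - i1)         # O2 = I0 . !I1
    o3 = i0 & i1               # O3 = I0 . I1
    return (o0, o1, o2, o3)

# Treating I0 I1 as a 2-bit integer i (I0 most significant, matching the
# equations), output i is asserted and the outputs are mutually exclusive.
for i0 in (0, 1):
    for i1 in (0, 1):
        outs = decoder2(i0, i1)
        assert sum(outs) == 1             # mutually exclusive
        assert outs[(i0 << 1) | i1] == 1  # the i-th output is on
```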
38. 3-input Decoder
[Figure: a decoder block with inputs I0, I1, I2 and outputs O0 through O7]
39. 3-Decoder: Partial Implementation
[Figure: gate-level sketch with inputs I2, I1, I0 and outputs O0, O1, ...]
40. 2-input Multiplexor
Inputs I0 and I1, selector S, output O. If S is a 0, O = I0; if S is a 1, O = I1.
[Figure: a MUX block with inputs I0, I1, select S, and output O]
41. 2-Mux Logic Design
[Figure: gate-level implementation with inputs I1, I0 and select S]
O = (I0 · !S) + (I1 · S)
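The 2-MUX equation can be checked against the selector description on the previous slide; a minimal sketch, not from the slides.

```python
def mux2(i0, i1, s):
    # O = (I0 AND NOT S) OR (I1 AND S)
    return (i0 & (1 - s)) | (i1 & s)

for i0 in (0, 1):
    for i1 in (0, 1):
        assert mux2(i0, i1, 0) == i0  # S = 0 selects I0
        assert mux2(i0, i1, 1) == i1  # S = 1 selects I1
```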
42. 4-input Multiplexor
Inputs I0, I1, I2, I3; selectors S0, S1; output O.
[Figure: a MUX block with four data inputs, two select lines, and output O]
43. One Possible 4-Mux
[Figure: a 2-to-4 decoder driven by S1, S0 gates each data input I0 through I3; the gated signals are ORed to form O]
44. Adder
- We want to build a box that can add two 32-bit numbers.
- Assume 2's complement representation.
- We can start by building a 1-bit adder.
45. Addition
- We need to build a 1-bit adder
  - compute the binary addition of 2 bits.
- We already know that the result is 2 bits. This is addition!
[Table: truth table with inputs A, B and outputs O0, O1]
46. One Implementation
[Figure: gate-level implementation with inputs A, B]
O0 = A · B
O1 = (!A · B) + (A · !B)
47. Binary addition and our adder
Carry:  1  1
         01001
       + 01101
       = 10110
- What we really want is something that can be used to implement the binary addition algorithm.
  - O0 is the carry
  - O1 is the sum
48. What about the second column?
Carry:  1  1
         01001
       + 01101
       = 10110
- We are adding 3 bits
  - the new bit is the carry from the first column.
- The output is still 2 bits, a sum and a carry.
49. Truth Table for Addition
[Table: full-adder truth table for inputs A, B, carry-in and outputs sum, carry-out]
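The 3-bit addition described above is a full adder; its truth table can be enumerated with a short script. A sketch, not from the slides.

```python
def full_adder(a, b, cin):
    """Return (carry_out, sum) for one column of binary addition."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return cout, s

# Enumerate the truth table: 3 input bits -> a carry and a sum.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            cout, s = full_adder(a, b, cin)
            # The (carry, sum) pair encodes the count of 1s among the inputs.
            assert 2 * cout + s == a + b + cin
```

Chaining 32 of these, with each carry-out feeding the next carry-in, gives the 32-bit ripple-carry adder the earlier slide asks for.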
50. Memory Management
- Memory Partitioning
  - Fixed
  - Variable
  - Best-fit, first-fit, largest-fit algorithms
  - Memory fragmentation
- Overlays
  - Programs are divided into small logical pieces for execution
  - Pieces are loaded into memory as needed
- Memory Relocation
  - Addresses have to be adjusted unless relative addressing is used
51. Memory Overlays
52. Virtual Memory
- Virtual memory increases the apparent amount of memory by using far less expensive hard disk space
- Provides for process separation
- Demand paging
  - Pages brought into memory as needed
- Page table
  - Keeps track of what is in memory and what is still out on hard disk
53. Frames and Pages
54. Frames and Pages: Binary Paging
55. Dynamic Address Translation
56. Page Table
[Figure: a page table mapping virtual memory pages 1 through 11 to page frames; pages not in main memory cause a page fault when accessed and reside in swap space on disk]
57. Steps in Handling a Page Fault
58. Memory Allocation
- Compile for overlays
- Compile for fixed partitions
  - Separate queue per partition
  - Single queue
- Relocation and variable partitions
  - Dynamic contiguous allocation (bit maps versus linked lists)
  - Fragmentation issues
- Swapping
- Paging
59. Overlays
[Figure: main memory from 0K to 12K holds the overlay manager, the main program (at 5K), and an overlay area from 7K to 12K; overlays 1 through 3 are kept in secondary storage and loaded into the overlay area one at a time]
60. Multiprogramming with Fixed Partitions
- Divide memory into n (possibly unequal) partitions.
- Problem: fragmentation
[Figure: memory divided into fixed partitions at 0K, 4K, 16K, 64K, and 128K, with free space]
61. Fixed Partitions
[Figure: the same partition layout (0K, 4K, 16K, 64K, 128K), with a legend marking free space and internal fragmentation (which cannot be reallocated)]
62. Fixed Partition Allocation: Implementation Issues
- Separate input queue for each partition
  - Requires sorting the incoming jobs and putting them into separate queues
  - Inefficient utilization of memory
    - When the queue for a large partition is empty but the queue for a small partition is full, small jobs have to wait to get into memory even though plenty of memory is free.
- One single input queue for all partitions
  - Allocate a partition where the job fits in:
    - Best Fit
    - Worst Fit
    - First Fit
63. Relocation
- Correct starting address when a program starts in memory
- Different jobs will run at different addresses
- When a program is linked, the linker must know at what address the program will begin in memory.
- Logical addresses, virtual addresses
  - Logical address space, range (0 to max)
- Physical addresses, physical address space
  - range (R+0 to R+max) for base value R
- The user program never sees the real physical addresses
- Memory-management unit (MMU)
  - maps virtual to physical addresses
- Relocation register
  - Mapping requires hardware (MMU) with the base register
64. Relocation Register
[Figure: the CPU issues a logical (instruction) address LA; the MMU adds the base register value BA to form the physical memory address MA = BA + LA]
65. Question 1 - Protection
- Problem
  - How to prevent a malicious process from writing or jumping into other users' or OS partitions
- Solution
  - Base and bounds registers
66. Base and Bounds Registers
[Figure: the CPU issues a logical address LA; the base register supplies the base address BA, forming the physical memory address MA = BA + LA, which is checked against the limit (bounds) address; an access outside the range raises a fault]
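The base-and-bounds check can be modeled in a few lines; a sketch, not from the slides, with the fault modeled as a raised exception.

```python
class ProtectionFault(Exception):
    pass

def translate(logical_addr, base, limit):
    """Map a logical address to a physical one, enforcing the bounds check."""
    if not (0 <= logical_addr < limit):
        raise ProtectionFault(f"address {logical_addr} outside partition")
    return base + logical_addr  # MA = BA + LA

assert translate(100, base=4096, limit=1024) == 4196
try:
    translate(2000, base=4096, limit=1024)  # outside the 1024-byte partition
    assert False, "expected a fault"
except ProtectionFault:
    pass
```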
67. Contiguous Allocation and Variable Partitions: Bit Maps versus Linked Lists
[Figure: part of memory with 5 processes and 3 holes; tick marks show allocation units, shaded regions are free; the corresponding bit map is shown below]
68. More on Memory Management with Linked Lists
[Figure: the four neighbor combinations for a terminating process X]
69. Contiguous Variable Partition Allocation Schemes
- Bitmap and linked list
- Which one occupies more space?
  - It depends on the individual memory allocation scenario; in most cases, the bitmap usually occupies more space.
- Which one is faster to reclaim freed space?
  - On average, the bitmap is faster, because it just needs to set the corresponding bits.
- Which one is faster at finding a free hole?
  - On average, a linked list is faster, because we can link all free holes together.
70. Storage Placement Strategies
- Best fit
  - Use the hole whose size is equal to the need, or, if none is equal, the hole that is larger but closest in size.
  - Rationale?
- First fit
  - Use the first available hole whose size is sufficient to meet the need.
  - Rationale?
- Worst fit
  - Use the largest available hole.
  - Rationale?
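The three strategies can be sketched over a list of free holes, each given as a (start, size) pair; illustrative code, not from the slides.

```python
def first_fit(holes, need):
    # First hole big enough, in address order.
    return next((h for h in holes if h[1] >= need), None)

def best_fit(holes, need):
    # Smallest hole that is still big enough.
    fits = [h for h in holes if h[1] >= need]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, need):
    # Largest available hole.
    fits = [h for h in holes if h[1] >= need]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 50), (100, 20), (200, 120), (400, 30)]
assert first_fit(holes, 25) == (0, 50)     # first hole big enough
assert best_fit(holes, 25) == (400, 30)    # closest in size
assert worst_fit(holes, 25) == (200, 120)  # largest hole
```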
71. Storage Placement Strategies
- Every placement strategy has its own problem
- Best fit
  - Creates small holes that can't be used
- Worst fit
  - Gets rid of large holes, making it difficult to run large programs
- First fit
  - Creates average-size holes
72. Locality of Reference
- Most memory references are confined to a small region
  - A well-written program runs in a small loop, procedure, or function
  - Data is likely in arrays, and variables are stored together
- Working set
  - The number of pages sufficient to run the program normally, i.e., to satisfy the locality of a particular program
73. Page Replacement Algorithms
- Page fault: the page is not in memory and must be loaded from disk
- Algorithms to manage swapping
  - First-In, First-Out (FIFO); subject to Belady's Anomaly
  - Least Recently Used (LRU)
  - Least Frequently Used (LFU)
  - Not Used Recently (NUR)
    - Referenced bit, Modified (dirty) bit
  - Second-chance replacement algorithms
- Thrashing
  - too many page faults affect system performance
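FIFO and LRU can be simulated by counting faults over a reference string; the reference string below is the classic one that exhibits Belady's Anomaly, where FIFO faults more with more frames. A sketch, not from the slides.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)  # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[p] = None
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert fifo_faults(refs, 3) == 9
assert fifo_faults(refs, 4) == 10  # Belady's Anomaly: more frames, more faults
assert lru_faults(refs, 3) == 10   # LRU does not exhibit the anomaly
```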
74. Virtual Memory Tradeoffs
- Disadvantages
  - The swap file takes up space on disk
  - Paging takes up CPU resources
- Advantages
  - Programs share memory space
  - More programs run at the same time
  - Programs run even if they cannot fit into memory all at once
  - Process separation
75. Virtual Memory vs. Caching
- Cache speeds up memory access
- Virtual memory increases the amount of perceived storage
  - Independence from the configuration and capacity of the memory system
  - Low cost per bit compared to main memory
76. How Bad Is Fragmentation?
- Statistical arguments, assuming random sizes
- First-fit
- Given N allocated blocks
  - 0.5N blocks will be lost because of fragmentation
  - Known as the 50% rule
77. Solve Fragmentation with Compaction
[Figure: five snapshots of memory, each holding the monitor, jobs 3, 5, 6, 7, and 8, and free space; in each step a job is slid toward the monitor so that the free space coalesces into one contiguous block]
78. Storage Management Problems
- Fixed partitions suffer from
  - internal fragmentation
- Variable partitions suffer from
  - external fragmentation
- Compaction suffers from
  - overhead
79. Question
- What if there are more processes than can fit into memory?
80-86. Swapping
[Figure sequence: memory holds the monitor and a user partition, alongside a disk; User 1 runs in the user partition, is swapped out to disk, User 2 is swapped in and runs, then User 2 is swapped out and User 1 is swapped back in]
87. Paging Request
88-90. Paging
91. Page Mapping Hardware
[Figure: a virtual address (P,D) is split into page number P and displacement D; the page table maps P to frame F, giving the physical address (F,D)]
92. Page Mapping Hardware
[Figure: the same hardware with concrete values: virtual address (004,006); the page table maps page 4 to frame 5, giving physical address (005,006)]
Page size: 1000. Number of possible virtual pages: 1000. Number of page frames: 8.
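The translation in the figure above can be reproduced in a few lines; a sketch, not from the slides, with only the one page-table entry shown in the figure.

```python
PAGE_SIZE = 1000
page_table = {4: 5}  # virtual page -> physical frame (partial, illustrative)

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)  # split into (P, D)
    if page not in page_table:
        raise KeyError(f"page fault on page {page}")
    return page_table[page] * PAGE_SIZE + offset    # recombine as (F, D)

# Virtual address (004,006) -> physical address (005,006):
assert translate(4006) == 5006
```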
93. Page Fault
- Accessing a virtual page that is not mapped into any physical page
- A fault is triggered by the hardware
- Page fault handler (in the OS's VM subsystem):
  - Find out whether any free physical page is available
    - If not, evict some resident page to disk (swap space)
  - Allocate a free physical page
  - Load the faulted virtual page into the prepared physical page
  - Modify the page table
94. Paging Issues
- Page size is 2^n
  - usually 512 bytes, 1 KB, 2 KB, 4 KB, or 8 KB
  - E.g., a 32-bit VM address may have 2^20 (about 1 M) pages with 4 KB (2^12 bytes) per page
- Page table
  - 2^20 page entries take 2^22 bytes (4 MB)
  - page frames must map into real memory
- The page table base register must be changed on a context switch
- No external fragmentation; internal fragmentation on the last page only
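The page-table arithmetic above checks out; a sketch assuming 4-byte page-table entries (the entry size is an assumption, not stated on the slide).

```python
ADDRESS_BITS = 32
PAGE_SIZE = 2 ** 12   # 4 KB pages
ENTRY_SIZE = 4        # bytes per page-table entry (an assumption)

num_pages = 2 ** ADDRESS_BITS // PAGE_SIZE
table_bytes = num_pages * ENTRY_SIZE

assert num_pages == 2 ** 20       # about one million pages
assert table_bytes == 2 ** 22     # 4 MB for one flat page table
```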
95. Virtual-to-Physical Lookups
- Programs only know virtual addresses
  - The page table can be extremely large
  - Each virtual address must be translated
    - This may involve walking a hierarchical page table
  - The page table is stored in memory
  - So, each program memory access requires several actual memory accesses
- Solution: cache the active part of the page table
  - The TLB is an associative memory
96. Translation Lookaside Buffer (TLB)
[Figure: a virtual address is looked up in the TLB; a hit yields the physical address directly, a miss falls through to the page table]
97. TLB Function
- When a virtual address is presented to the MMU, the hardware checks the TLB by comparing all entries simultaneously (in parallel).
- If the match is valid, the page is taken from the TLB without going through the page table.
- If the match is not valid:
  - The MMU detects the miss and does a page table lookup.
  - It then evicts one entry out of the TLB and replaces it with the new one, so that next time the page is found in the TLB.
98. Page Mapping Hardware
[Figure: the virtual address (P,D) is translated by an associative (TLB) lookup of P alongside the page table, yielding the physical address (F,D)]
99. Effective Access Time
- TLB lookup time: s time units
- Memory cycle: m µs
- TLB hit ratio: h
- Effective access time
  - EAT = (m + s)h + (2m + s)(1 - h)
  - EAT = 2m + s - mh
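The formula can be checked numerically; the timings below are illustrative assumptions (both m and s in the same time unit).

```python
def eat(m, s, h):
    """EAT = (m + s)*h + (2m + s)*(1 - h), which simplifies to 2m + s - m*h."""
    return (m + s) * h + (2 * m + s) * (1 - h)

m, s, h = 100, 10, 0.9  # assumed: 100-unit memory cycle, 10-unit TLB lookup, 90% hit ratio
assert abs(eat(m, s, h) - (2 * m + s - m * h)) < 1e-9  # the two forms agree
assert abs(eat(m, s, h) - 120.0) < 1e-9  # a TLB miss costs a second memory access
```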
100. Paging
- Provide the user with virtual memory that is as big as the user needs
- Store virtual memory on disk
- Cache the parts of virtual memory being used in real memory
- Load and store cached virtual memory without user program intervention
101. Principle of Locality
- Program and data references within a process tend to cluster
- Only a few pieces of a process will be needed over a short period of time
- It is possible to make intelligent guesses about which pieces will be needed in the future
- This suggests that virtual memory can work efficiently
102. Support Needed for Virtual Memory
- Hardware must support paging and segmentation
- The operating system must be able to manage the movement of pages and/or segments between secondary memory and main memory
103. Paging
- Each process has its own page table
- Each page table entry contains the frame number of the corresponding page in main memory
- A bit is needed to indicate whether the page is in main memory or not
104. Fetch Policy
- Fetch policy
  - Determines when a page should be brought into memory
- Demand paging only brings pages into main memory when a reference is made to a location on the page
  - Many page faults when a process is first started
- Prepaging brings in more pages than needed
  - More efficient to bring in pages that reside contiguously on the disk
105. Placement Policy
- Determines where in real memory a process piece is to reside
- Important in a segmentation system
- With paging, or paging combined with segmentation, the hardware performs address translation
106. Replacement Policy
- Replacement policy
  - Which page is replaced?
- The page removed should be the page least likely to be referenced in the near future
- Most policies predict future behavior on the basis of past behavior
107. Replacement Policy
- Frame locking
  - If a frame is locked, it may not be replaced
  - Kernel of the operating system
  - Control structures
  - I/O buffers
- Associate a lock bit with each frame
108. Basic Replacement Algorithms
- Optimal policy
  - Selects for replacement the page for which the time to the next reference is the longest
  - Impossible to realize: it requires perfect knowledge of future events
109. Basic Replacement Algorithms
- Least Recently Used (LRU)
  - Replaces the page that has not been referenced for the longest time
  - By the principle of locality, this should be the page least likely to be referenced in the near future
  - Each page could be tagged with the time of last reference, but this would require a great deal of overhead.
110. Basic Replacement Algorithms
- First-In, First-Out (FIFO)
  - Treats the page frames allocated to a process as a circular buffer
  - Pages are removed in round-robin style
  - Simplest replacement policy to implement
  - The page that has been in memory the longest is replaced
    - but that page may be needed again very soon
111. Basic Replacement Algorithms
- Clock policy
  - Uses an additional bit called the use bit
  - When a page is first loaded in memory, its use bit is set to 1
  - When the page is referenced, its use bit is set to 1
  - When it is time to replace a page, the first frame encountered with the use bit set to 0 is replaced
  - During the search for a replacement, each use bit set to 1 is changed to 0
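The clock policy above can be simulated over a fixed set of frames; a sketch, not from the slides.

```python
class Clock:
    def __init__(self, nframes):
        self.frames = [None] * nframes  # page held in each frame
        self.use = [0] * nframes        # use bit per frame
        self.hand = 0                   # the clock hand

    def access(self, page):
        """Reference a page; return True on a page fault."""
        if page in self.frames:
            self.use[self.frames.index(page)] = 1  # referenced: set use bit
            return False
        # Advance the hand, clearing use bits, until a 0 bit is found.
        while self.use[self.hand] == 1:
            self.use[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page  # replace the victim
        self.use[self.hand] = 1        # newly loaded: use bit set to 1
        self.hand = (self.hand + 1) % len(self.frames)
        return True

clock = Clock(3)
faults = sum(clock.access(p) for p in [1, 2, 3, 2, 4, 1])
assert faults == 5  # the hit on page 2 spares its frame when 4 arrives
```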
112. (No Transcript)