Title: Review Material for Final Exam
1 Review Material for Final Exam
- CS 423, Fall 2007
- Klara Nahrstedt/Sam King
2 Administrative
- Postings
- HW2 solution will be posted on Thursday, December 6, in the evening (students can use their extra days after the December 3 deadline)
- Past final exam(s) from previous year(s) will be posted on Thursday, December 6
- MP4 is due Friday; if you use your extra days, the deadline is Monday, December 10
3 Final Exam
- Date: Tuesday, December 11
- Time: 8-11am
- Room 1111 SC for students with last names A-K (approx. 16 students in this room)
- Room 1304 SC for students with last names L-X (approx. 16 students in this room)
- On-campus online students should also come to 1111 SC or 1304 SC
4 Final Exam Rules
- A few final exam rules
- No calculators or any other electronic devices
- All math is simple and can be done by hand
- Seating in each exam room
- One empty seat between each pair of students
- Closed book, closed notebook exam
- Bring your student ID with you!!!
- PROCTORS WILL CHECK YOUR ID AS YOU HAND IN YOUR EXAM
- The exam is individual work.
- You should write only in the exam booklet.
5 How to Study for the Final Exam
- Review the slides and questions from the midterm and the midterm review
- Review class notes and textbook chapters
- Review additional material on the web site
- Review regular homework problems (HW1 and HW2)
- Work on relevant problems after each chapter in Tanenbaum
6 Reading List
- Everything in Lecture Notes!!!
- Tanenbaum Textbook
- Section 2 Threads 2.1-2.4
- Section 4 Memory Management 4.1-4.7
- Section 5 I/O
- Devices 5.1-5.3
- Disk 5.4
- Clocks 5.5
- Power Management 5.9
- Section 6 File Systems 6.1-6.3
- Section 7 Multimedia OS 7.1-7.8
- Section 8 Multiple Processor Systems
- Multi-processor systems 8.1
- Remote Procedure Call and Distributed Shared Memory 8.2.4-8.2.5
- Distributed File Systems 8.3.4
- Section 9 Security 9.1-9.7
- Section 10 UNIX/Linux
- File Systems 10.6
7 Reading List
- Understanding the Linux Kernel (2.6) Textbook
- Linux Device Driver Model (Chapter 13)
- Papers
- MapReduce: Simplified Data Processing on Large Clusters, J. Dean, S. Ghemawat, OSDI 2004
- The Google File System, ACM SOSP 2003
8 Threads (Sections 2.1-2.4)
- Four conditions for correctly sharing data
- Mutual exclusion
- Bounded waiting
- Progress
- Any number of CPUs and any speed of CPUs
- To show the correctness of any thread interleaving, one must ask whether any of the conditions is violated
9 Threads
- Peterson's Solution (see the sketch after this list)
- Lock Variables
- TSL (Test-and-Set)
- Semaphores
- Producer/Consumer Problem
- Mutex
- Reader/Writer Locks
- Monitors
- Condition Variables
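A minimal sketch of Peterson's two-thread solution, along the lines of the version in Tanenbaum (the variable and function names here are illustrative, and real hardware would also need memory barriers):

    #define FALSE 0
    #define TRUE  1

    int turn;                     /* records which thread set turn last */
    int interested[2];            /* interested[i] is TRUE when thread i wants to enter */

    void enter_region(int self)   /* self is 0 or 1 */
    {
        int other = 1 - self;     /* index of the other thread */
        interested[self] = TRUE;  /* announce interest */
        turn = self;              /* the last thread to set turn is the one that waits */
        while (turn == self && interested[other])
            ;                     /* busy-wait until it is safe to enter */
    }

    void leave_region(int self)
    {
        interested[self] = FALSE; /* we are out of the critical region */
    }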
10 Producer-Consumer w/ Semaphores
- mutex ensures mutual exclusion
- fullBuffers counts the number of full buffers (initialized to 0)
- emptyBuffers counts the number of empty buffers (initialized to N)

    consumer:
        while (TRUE)
            down(emptyBuffers)
            down(mutex)
            item = remove_item()
            up(mutex)
            up(emptyBuffers)

    producer:
        while (TRUE)
            item = produce_item()
            down(fullBuffers)
            down(mutex)
            insert_item(item)
            up(mutex)
            up(fullBuffers)

Is this solution correct? (Compare with the sketch below.)
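For comparison, a sketch of the usual bounded-buffer ordering (the producer waits on emptyBuffers and signals fullBuffers; the consumer does the opposite); checking the code above against it answers the question:

    producer:
        while (TRUE)
            item = produce_item()
            down(emptyBuffers)        /* wait for a free slot */
            down(mutex)
            insert_item(item)
            up(mutex)
            up(fullBuffers)           /* announce one more full slot */

    consumer:
        while (TRUE)
            down(fullBuffers)         /* wait for a filled slot */
            down(mutex)
            item = remove_item()
            up(mutex)
            up(emptyBuffers)          /* announce one more free slot */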
11 Producer/Consumer
- Why do we need different semaphores for fullBuffers and emptyBuffers?
- Does the order of the down() calls matter in the consumer?
- Does the order of the up() calls matter in the consumer?
- How would you rewrite this problem using a monitor and a condition variable? (See the sketch after this list.)
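A minimal sketch of one way to do the rewrite with a mutex and condition variables (pthreads-style; the buffer details, capacity N, and names are illustrative assumptions, not from the slides):

    #include <pthread.h>

    #define N 16                              /* assumed buffer capacity */

    static int buffer[N];
    static int count = 0;                     /* number of full slots */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    void produce(int item)
    {
        pthread_mutex_lock(&lock);
        while (count == N)                    /* wait until there is a free slot */
            pthread_cond_wait(&not_full, &lock);
        buffer[count++] = item;               /* simplified insert */
        pthread_cond_signal(&not_empty);      /* wake a waiting consumer */
        pthread_mutex_unlock(&lock);
    }

    int consume(void)
    {
        pthread_mutex_lock(&lock);
        while (count == 0)                    /* wait until there is an item */
            pthread_cond_wait(&not_empty, &lock);
        int item = buffer[--count];           /* simplified remove */
        pthread_cond_signal(&not_full);       /* wake a waiting producer */
        pthread_mutex_unlock(&lock);
        return item;
    }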
12 Reasoning about Correctness

    /* The following code has a bug; please list a thread interleaving
       that illustrates the problem. (A possible fix is sketched after the code.) */

    enqueue()
        lock(queueLock)
        /* find tail of the queue */
        for (ptr = head; ptr->next != NULL; ptr = ptr->next)
            ;
        unlock(queueLock)
        lock(queueLock)
        ptr->next = new_element
        new_element->next = NULL
        unlock(queueLock)

    dequeue()
        lock(queueLock)
        element = NULL
        if (head->next != NULL) {
            element = head->next
            head->next = head->next->next
        }
        unlock(queueLock)
        return element
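A possible fix, as a sketch: hold queueLock across the whole find-tail-and-append sequence. The bug above is the unlock()/lock() gap in enqueue(): while the lock is dropped, another thread can enqueue (so ptr is no longer the tail and its successor gets overwritten) or dequeue the very node that ptr points to, so the new element is lost.

    enqueue()
        lock(queueLock)
        /* find the tail while still holding the lock */
        for (ptr = head; ptr->next != NULL; ptr = ptr->next)
            ;
        new_element->next = NULL          /* terminate the new tail */
        ptr->next = new_element           /* append; no other queue operation can interleave here */
        unlock(queueLock)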
13 Memory Management (Sections 4.1-4.7)
- Basic Memory Management
- Mono-programming without Swapping or Paging
- Multiprogramming with Fixed Partitions
- Swapping
- Variable Partitions
- Virtual Memory Management
- Paging
- Page Table
- Multi-level Page Tables
- TLB (Translation Lookaside Buffers)
- Page Replacement Algorithms
- Optimal
- FIFO
- Second Chance
- LRU
- Clock Page Replacement (see the sketch after this list)
- Working Set
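A minimal sketch of the clock (second-chance) page-replacement hand, assuming a simple frame table with per-frame reference bits (the names, structure, and frame count are illustrative):

    #define NFRAMES 64                     /* assumed number of physical frames */

    struct frame {
        int page;                          /* virtual page currently held */
        int referenced;                    /* R bit, set by the MMU on access */
    };

    static struct frame frames[NFRAMES];
    static int hand = 0;                   /* clock hand position */

    /* Pick a victim frame: sweep the hand, giving a second chance to
       frames whose R bit is set (clear the bit and move on). */
    int clock_select_victim(void)
    {
        for (;;) {
            if (frames[hand].referenced == 0) {
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;             /* evict this frame's page */
            }
            frames[hand].referenced = 0;   /* second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }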
14 Page Fault Rate Curve
- As the number of page frames per VM space decreases, the page fault rate increases.
15 Thrashing
- Computations have locality.
- As the number of page frames decreases, the frames available are no longer large enough to contain the locality of the process.
- The processes start faulting heavily.
- Pages that are read in are used and then immediately paged out.
16 Thrashing and CPU Utilization
- As the page fault rate goes up, processes get suspended on the page-out queues for the disk.
- The system may try to optimize performance by starting new jobs.
- Starting new jobs reduces the number of page frames available to each process, further increasing the page fault rate.
- System throughput plunges.
17 Working Set
- The working set model assumes locality.
- The principle of locality states that a program clusters its accesses to data and text temporally.
- As the number of page frames increases above some threshold, the page fault rate drops dramatically.
18 Working Set Example
(Figure: working-set window traced over a reference string; window size is ?; 12 references, 8 faults)
19 I/O (Sections 5.1-5.5 and 5.9)
- I/O Devices and Controllers
- Memory-Mapped I/O
- DMA
- Interrupt-driven I/O
- Programmed I/O (polling / busy waiting; see the sketch after this list)
- Device drivers
- Disks
- RAID, disk scheduling
- Clocks
- Power management
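A minimal sketch of programmed I/O by polling, assuming a hypothetical memory-mapped device with a status register and a data register (the addresses and bit layout are made up for illustration):

    #include <stdint.h>

    /* Hypothetical memory-mapped registers of a simple output device. */
    #define DEV_STATUS  ((volatile uint32_t *)0xFFFF0000u)
    #define DEV_DATA    ((volatile uint32_t *)0xFFFF0004u)
    #define STATUS_BUSY 0x1u

    /* Write a buffer one byte at a time, busy-waiting until the device is ready. */
    void pio_write(const uint8_t *buf, int len)
    {
        for (int i = 0; i < len; i++) {
            while (*DEV_STATUS & STATUS_BUSY)
                ;                      /* poll: burn CPU cycles until the device is idle */
            *DEV_DATA = buf[i];        /* hand the next byte to the controller */
        }
    }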
20 I/O Software
- Layers of the I/O system and the main functions of each layer
21 Questions
- The disk scheduling algorithm that may cause starvation is:
- FCFS or SSTF or C-SCAN or LOOK??
- From the list of disk-scheduling algorithms (FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK), SSTF will always give the least head movement for any set of cylinder-number requests to the disk scheduler:
- True or False??
- The cylinder numbers on a disk are 0, 1, ..., 10. Currently, there are five cylinder requests on the disk scheduler queue, in the order 1, 5, 4, 8, 7, and the head is located at position 2, moving in the direction of increasing cylinder numbers. The time to serve a request is proportional to the distance from the head to the requested cylinder. If T(X) is the time it takes to service the requests currently in the queue using scheduling algorithm X, then (a worked check follows the choices):
- T(SSTF) < T(SCAN) < T(FCFS), or
- T(FCFS) < T(SSTF) < T(SCAN), or
- T(SSTF) < T(FCFS) < T(SCAN), or
- None of the above???
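One way to work the last question out, counting total head movement and letting SCAN sweep to the last cylinder (10) before reversing (a check under these assumptions, not an official answer key):

\[ T(\mathrm{FCFS}) \propto |2-1|+|1-5|+|5-4|+|4-8|+|8-7| = 11 \]
\[ T(\mathrm{SSTF}) \propto 1+3+1+2+1 = 8 \qquad (2 \to 1 \to 4 \to 5 \to 7 \to 8) \]
\[ T(\mathrm{SCAN}) \propto (10-2)+(10-1) = 17 \qquad (2 \to 4 \to 5 \to 7 \to 8 \to 10 \to 1) \]

which gives T(SSTF) < T(FCFS) < T(SCAN) under this counting.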
22 File Systems (Sections 6.1-6.3)
- File Access
- File Open Operation
- File System Layout
- Contiguous vs. Linked List vs. FAT vs. Indexed File Allocation
- Indexed allocation / i-node allocation
- Disk Space Management
23 Questions
- A UNIX i-node has 10 disk addresses for data blocks, as well as the addresses of a single, a double, and a triple indirect block. If each of these holds 256 disk addresses, what is the size of the largest file that can be handled, assuming that a disk block is 1 KB? (A worked check follows the choices.)
- 10 + 256 + 511 + 766 KB
- 10 + 256 + 511 + 65,536 KB
- 10 + 256 + 65,536 + 16,777,216 KB
- None of the above
- In a UNIX file system the block size has been set to 4 KB. Given that the i-node blocks are already allocated on disk, how many free blocks need to be found to store a file of size 64 KB?
- 16, or 17, or 64, or 65
24 Multimedia (Sections 7.1-7.7)
- Audio Encoding
- Video Encoding
- Compression
- Examples of Lossless Coding (RLC, Huffman)
- Examples of Lossy Coding (JPEG, MPEG)
- EDF vs. Rate Monotonic Scheduling
- File placement
- Single disk
- Small block organization, large block organization
- Zipf Distribution
- Disk Scheduling
25 Question
- Let us assume 4 periodic processes:
- A with P(A) = 100 ms, E(A) = 10 ms
- B with P(B) = 100 ms, E(B) = 20 ms
- C with P(C) = 500 ms, E(C) = 100 ms
- D with P(D) = 250 ms, E(D) = 10 ms
- Question: Is this set of processes schedulable with EDF or RM? If it is schedulable under either policy, what is the schedule?
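A quick utilization check (the standard bounds; the question still asks for the actual schedule):

\[ U = \frac{10}{100} + \frac{20}{100} + \frac{100}{500} + \frac{10}{250} = 0.10 + 0.20 + 0.20 + 0.04 = 0.54 \]

Since \(U \le 1\), the set is schedulable under EDF; since \(U \le 4(2^{1/4}-1) \approx 0.757\), the Liu-Layland sufficient bound for RM is met as well.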
26 Multi-Computer Systems (Sections 8.1, 8.2.4-8.2.5, 8.3.4)
- UMA Bus-Based SMP Architecture
- UMA Multiprocessor Using Crossbar Switches
- Multi-processor OS Types
- Multi-processor scheduling
- RPC
- Distributed Shared Memory
- DFS: transfer model, naming transparency, file sharing, AFS, NFS, Google File System
27 Question
- Design your own distributed file system that satisfies the following assumptions (you may use any design options from NFS, AFS, or the Google File System):
- Clients must be separate from servers
- Protocols cache only parts of a file (a few file blocks) at the clients
- The naming scheme must be location transparent
- Servers are stateful
- Specify the schematic view of your DFS architecture and explain each function in each layer
28 Question
- Explain your DFS protocol for the open, read, write, and close operations
- Explain your DFS protocol for enforcing consistency on write operations
- Explain your DFS protocol for handling client failures
29 Good Luck