Title: CS 2200 Lecture 14: Storage
1. CS 2200 Lecture 14: Storage
- (Lectures based on the work of Jay Brockman,
Sharon Hu, Randy Katz, Peter Kogge, Bill Leahy,
Ken MacKenzie, Richard Murphy, and Michael
Niemier)
2. Storage Systems
- I/O performance (bandwidth, latency)
- Bandwidth improving, but not as fast as CPU
- Latency improving very slowly
- Consequently, by Amdahl's Law, the fraction of time spent on I/O is increasing
- Other factors are just as important
- Reliability, Availability, Dependability
- Storage devices very diverse
- Magnetic disks, tapes, CDs, DVDs, flash
- Different advantages/disadvantages and uses
3. The Full Memory Hierarchy: always reuse a good idea
From the upper level (smallest, fastest) to the lower level (largest, slowest); the staging/transfer unit is what moves between a level and the one below:
- Registers: 100s of bytes, <10s ns; managed by the program/compiler; transfer unit: instruction operands (1-8 bytes)
- Cache: KBytes, 10-100 ns, 1-0.1 cents/bit; managed by the cache controller; transfer unit: blocks (8-128 bytes)
- Main Memory: MBytes, 200-500 ns, 10^-4 to 10^-5 cents/bit; managed by the OS; transfer unit: pages (4K-16K bytes)
- Disk: GBytes, 10 ms (10,000,000 ns), 10^-5 to 10^-6 cents/bit; managed by the user/operator; transfer unit: files (MBytes)
- Tape: "infinite" capacity, sec-min access time, 10^-8 cents/bit
4. Magnetic Disks
5. Magnetic Disks
- Good: cheap (per MB), fairly reliable
- Used for primary storage and memory swapping
- Bad: can only read/write an entire sector
- Cannot be directly addressed like main memory
- Disk access time
- Queuing delay
- Wait until disk gets to do this operation
- Seek time
- Head moves to correct track
- Rotational latency
- Correct sector must get under the head
- Data transfer time and controller time
6. Example: average disk access time
- What is the average time to read or write a 512-byte sector for a typical disk?
- The average seek time is given to be 9 ms
- The transfer rate is 4 MB per second
- The disk rotates at 7200 RPM
- The controller overhead is 1 ms
- The disk is currently idle before any requests are made (so there is no queuing delay)
- Average disk access time = average seek time + average rotational delay + transfer time + controller overhead
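Plugging in the given numbers (a worked sketch; it assumes 1 MB = 10^6 bytes and the usual half-rotation average for rotational delay):

```latex
t_{\mathrm{rot}} = \frac{0.5\ \mathrm{rotation}}{7200/60\ \mathrm{rotations/s}} \approx 4.2\ \mathrm{ms}
\qquad
t_{\mathrm{xfer}} = \frac{512\ \mathrm{B}}{4 \times 10^{6}\ \mathrm{B/s}} \approx 0.13\ \mathrm{ms}
```
```latex
t_{\mathrm{access}} \approx 9 + 4.2 + 0.13 + 1 \approx 14.3\ \mathrm{ms}
```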
7. Trends for Magnetic Disks
- Capacity doubles in approx. one year
- Average seek time
- 5-12ms, very slow improvement
- Average rotational latency (1/2 full rotation)
- 5,000 RPM to 10,000 RPM to 15,000 RPM
- Improves slowly, not easy (reliability, noise)
- Data transfer rate
- Improves at an OK rate
- New interfaces, more data per track
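Because the average rotational latency is half a full rotation, the spindle speeds above translate directly (a quick worked sketch):

```latex
t_{\mathrm{rot}} = \frac{0.5}{\mathrm{RPM}/60}:\quad
5000\ \mathrm{RPM} \Rightarrow 6.0\ \mathrm{ms},\quad
10000\ \mathrm{RPM} \Rightarrow 3.0\ \mathrm{ms},\quad
15000\ \mathrm{RPM} \Rightarrow 2.0\ \mathrm{ms}
```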
8. Optical Disks
- Improvement limited by standards
- CD and DVD capacity fixed over years
- Technology actually improves, but it takes time for it to make it into new standards
- Physically small, replaceable
- Good for backups and carrying around
9. Magnetic Tapes
- Very long access latency
- Must rewind tape to correct place for read/write
- Used to be very cheap (per MB)
- It's just miles of tape!
- But disks have caught up anyway
- Used for backup (secondary storage)
- Large capacity, replaceable
10. Using RAM for Storage
- Disks are about 100 times cheaper (per MB)
- DRAM is about 100,000 times faster (latency)
- Solid-State Disks
- Actually, a DRAM and a battery
- Much faster than disk, more reliable
- Expensive (not very good for archives and such)
- Flash memory
- Much faster than disks, but slower than DRAM
- Very low power consumption
- Can be sold in small sizes (few MB, but tiny)
11. Busses for I/O
- Traditionally, two kinds of busses
- CPU-Memory bus (fast, short)
- I/O bus (can be slower and longer)
- Now: mezzanine busses (PCI)
- Pretty fast and relatively short
- Can connect fast devices directly
- Can connect to longer, slower I/O busses
- Data transfers over a bus are called transactions
12. Buses in a System
13. Multiple Busses
- Figure: the processor and its cache connect over a cache bus (e.g. 256b, 533MHz); a bridge connects them to main memory over a memory bus (e.g. 64b, 533MHz) and to an I/O bus (e.g. PCI: 64b, 66MHz); I/O controllers on the I/O bus serve graphics, the network, and the disks, with the disks on their own disk drive bus (e.g. SCSI: 16b, 20MHz); devices signal the processor via interrupts
14. Bus Design Decisions
- Split transactions
- Traditionally, the bus stays occupied between request and response on a read
- Now: get bus, send request, free bus (when the response is ready, get bus, send response, free bus), as sketched below
- Bus mastering
- Which devices can initiate transfers on the bus
- The CPU can always be the master
- But we can also allow other devices to be masters
- With multiple masters, we need arbitration
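A toy sketch of the split-transaction idea, with a made-up tag scheme (not any particular bus protocol): the requester tags its request and releases the bus, and the response later carries the same tag so it can be matched to the outstanding request.

```c
#include <stdio.h>

typedef struct { int tag; unsigned addr; } Request;
typedef struct { int tag; unsigned data; } Response;

int main(void) {
    Request req = { .tag = 7, .addr = 0x1000 }; /* get bus, send request */
    printf("request sent, tag %d; bus freed\n", req.tag);
    /* ... bus is free for other transactions while memory works ... */
    Response rsp = { .tag = 7, .data = 42 };    /* responder gets bus later */
    if (rsp.tag == req.tag)                     /* match reply to request */
        printf("response for tag %d: %u\n", rsp.tag, rsp.data);
    return 0;
}
```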
15. CPU-Device Interface
- Devices are typically accessible to the CPU through control and data registers
- These registers can either be:
- Memory mapped
- Some physical memory addresses actually map to I/O device registers
- Read/write through LD/ST instructions
- Most RISC processors support only this kind of I/O mapping
- In a separate I/O address space
- Read/write through special IN/OUT instructions
- Used in x86, but even in x86 PCs some I/O is memory mapped
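A minimal sketch of memory-mapped device access; the register addresses and layout are invented for illustration:

```c
#include <stdint.h>

/* Hypothetical device registers; the addresses are made up. 'volatile'
 * forces every access to actually reach the device, so the compiler
 * cannot cache or optimize away the loads and stores. */
#define DEV_STATUS ((volatile uint32_t *)0xFFFF0000u)
#define DEV_DATA   ((volatile uint32_t *)0xFFFF0004u)

void dev_write(uint32_t value) {
    *DEV_DATA = value;      /* an ordinary store reaches the device */
}

uint32_t dev_read_status(void) {
    return *DEV_STATUS;     /* an ordinary load reads the device register */
}
```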
16. CPU-Device Interface
- Devices can be very slow
- When given some data, a device may take a long time to become ready to receive more
- Usually we have a Done bit in a status register
- Checking the Done bit:
- Polling: test the Done bit in a loop (see the sketch after this list)
- Interrupt: interrupt the CPU when the Done bit becomes 1
- Interrupts are better if I/O events are infrequent or if the device is slow
- Each interrupt has some OS and HW overhead
- Polling is better for devices that finish quickly
- Even then, buffering data in the device lets us use interrupts
- Interrupt-driven I/O is used today in most systems
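A minimal polling sketch under the same invented register layout as above, assuming the Done bit is bit 0 of the status register:

```c
#include <stdint.h>

#define DEV_STATUS ((volatile uint32_t *)0xFFFF0000u) /* hypothetical address */
#define DEV_DATA   ((volatile uint32_t *)0xFFFF0004u) /* hypothetical address */
#define DONE_BIT   0x1u                               /* assumed Done bit */

/* Polling: spin on the Done bit until the device can accept more data. */
void dev_write_polled(uint32_t value) {
    while ((*DEV_STATUS & DONE_BIT) == 0)
        ;   /* busy-wait: cheap if the device finishes quickly */
    *DEV_DATA = value;
}
```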
17. Arbitration: Daisy Chain
- Simple, but unfair and slow
18. Arbitration
- Centralized Parallel Arbitration
- Requires central arbiter
- Each device has separate line
- Central arbiter may become bottleneck
- Used in PCI bus
- Distributed Arbitration by Self Selection
- Each device sees all requestors
- A priority scheme lets each device know whether it gets the bus
- Requires lots of request lines
- Used by Apple NuBus (backplane)
19. Arbitration
- Distributed Arbitration by Collision Detection
- Devices independently request the bus
- Devices have the ability to detect simultaneous requests, or collisions
- Upon a collision, a variety of schemes are used to select among the requestors (a backoff sketch follows below)
- Used by Ethernet
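A toy model of collision detection with Ethernet-style randomized exponential backoff; the bus_collision stand-in and the retry bound are made up for illustration:

```c
#include <stdlib.h>
#include <stdio.h>

/* Stand-in for real collision detection on the shared medium. */
int bus_collision(void) { return rand() % 2; }

/* Try to transmit; on each collision, wait a random number of slots
 * drawn from a window that doubles after every collision. */
void transmit(void) {
    for (int attempt = 0; attempt < 10; attempt++) {
        if (!bus_collision()) {        /* transmission succeeded */
            printf("sent after %d collisions\n", attempt);
            return;
        }
        int window = 1 << attempt;     /* backoff window doubles each time */
        int slots = rand() % window;   /* pick a random slot in the window */
        printf("collision; backing off %d slots\n", slots);
    }
    printf("giving up after too many collisions\n");
}
```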
20. Dependability
- The quality of delivered service that justifies us relying on the system to provide that service
- Delivered service is the actual behavior
- Each module has an ideal specified behavior
- Faults, Errors, Failures
- Failure: the actual behavior deviates from the specified behavior
- Error: the defect that results in failure
- Fault: the cause of the error
21. Failure Example
- A programming mistake is a fault
- E.g. an add function that works fine, except when we try 5 + 3, in which case it returns 7 instead of 8
- It is a latent error until activated
- An activated fault becomes an effective error
- We call our add and it returns 7 for 5 + 3
- Failure: when the error results in a deviation in behavior
- E.g. we schedule a meeting for the 7th instead of the 8th
- An effective error need not result in a failure (if we never use the result of this add, no failure)
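An illustrative (made-up) version of such a buggy add, to make the fault / effective error / failure chain concrete:

```c
#include <stdio.h>

/* Fault: the special case below is a programming mistake.
 * It is a latent error until someone actually calls add(5, 3). */
int add(int a, int b) {
    if (a == 5 && b == 3)
        return 7;   /* bug: should be 8 */
    return a + b;
}

int main(void) {
    int day = add(5, 3);                   /* fault activated: effective error */
    printf("meeting on the %dth\n", day);  /* failure: wrong behavior observed */
    return 0;
}
```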
22. Reliability and Availability
- System can be in one of two states
- Service Accomplishment
- Service Interruption
- Reliability
- Measure of continuous service accomplishment
- Typically, Mean Time To Failure (MTTF)
- Availability
- Service accomplishment as a fraction of overall time
- Also looks at Mean Time To Repair (MTTR)
- MTTR is the average duration of a service interruption
- Availability = MTTF / (MTTF + MTTR)
23. Faults Classified by Cause
- Hardware Faults
- Hardware devices fail to perform as designed
- Design Faults
- Faults in software and some faults in HW
- E.g. the Pentium FDIV bug was a design fault
- Operation Faults
- Operator and user mistakes
- Environmental Faults
- Fire, power failure, sabotage, etc.
24. Faults Classified by Duration
- Transient Faults
- Last for a limited time and are not recurring
- E.g. an alpha particle can flip a bit in memory but usually does not damage the memory HW
- Intermittent Faults
- Last for a limited time but are recurring
- E.g. an overclocked system works fine for a while, but then crashes; we reboot it and it does it again
- Permanent Faults
- Do not get corrected when time passes
- E.g. the processor has a large round hole in it because we wanted to see what's inside
25. Improving Reliability
- Fault Avoidance
- Prevent occurrence of faults by construction
- Fault Tolerance
- Prevent faults from becoming failures
- Typically done through redundancy
- Error Removal
- Removing latent errors by verification
- Error Forecasting
- Estimate presence, creation, and consequences of
errors
26. Disk Fault Tolerance with RAID
- Redundant Array of Inexpensive Disks
- Several smaller disks play the role of one big disk
- Can improve performance
- Data spread among multiple disks
- Accesses to different disks go in parallel
- Can improve reliability
- Data can be kept with some redundancy
27. RAID 0
- Striping used to improve performance
- Data is stored on the disks in the array so that consecutive stripes of data are on different disks
- Makes the disks share the load, improving
- Throughput: all disks can work in parallel
- Latency: less queuing delay (a queue for each disk)
- No redundancy
- Reliability is actually lower than with a single disk (if any disk in the array fails, we have a problem)
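A minimal sketch of the RAID 0 address arithmetic, assuming a hypothetical 4-disk array and 16 blocks per stripe:

```c
#include <stdio.h>

#define NDISKS 4          /* assumed array size */
#define STRIPE_BLOCKS 16  /* assumed blocks per stripe */

/* Map a logical block number to (disk, block-on-that-disk) for RAID 0. */
void raid0_map(long lbn, int *disk, long *pbn) {
    long stripe = lbn / STRIPE_BLOCKS;   /* which stripe overall */
    long offset = lbn % STRIPE_BLOCKS;   /* offset inside the stripe */
    *disk = (int)(stripe % NDISKS);      /* consecutive stripes rotate disks */
    *pbn  = (stripe / NDISKS) * STRIPE_BLOCKS + offset;
}

int main(void) {
    int d; long b;
    raid0_map(100, &d, &b);
    printf("logical block 100 -> disk %d, block %ld\n", d, b);
    return 0;
}
```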
28. RAID 1
- Disk mirroring
- Disks are paired up and keep identical data
- A write must update the copies on both disks
- A read can read either of the two copies
- Improved performance and reliability
- Can do more reads per unit time
- If one disk fails, its mirror still has the data
- If we have more than 2 disks (e.g. 8 disks)
- Striped mirrors (RAID 10)
- Pair disks for mirroring, then stripe across the 4 pairs
- Mirrored stripes (RAID 01)
- Stripe across 4 disks, then mirror that using the other 4
29. RAID 4
- Block-interleaved parity
- One disk is a parity disk and keeps the parity blocks
- The parity block at position X is the parity of all blocks at position X on the data disks
- A read accesses only the data disk where the data is
- A write must update the data block and its parity block
- Can recover from an error on any one disk
- Use the parity and the other data disks to restore the lost data
- Note that with N disks we have N-1 data disks and only one parity disk, but we can still recover when one disk fails
- But write performance is worse than with one disk (all writes must read and then write the parity disk)
30. RAID 4 Parity Update
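The figure on this slide shows the small-write update; arithmetically it is just XOR. A minimal sketch (the block size is an assumption):

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 512   /* assumed block size in bytes */

/* RAID 4/5 small-write parity update: the new parity is the old parity
 * with the old data XORed out and the new data XORed in. This is why a
 * write must first read the old data and old parity, then write both. */
void parity_update(const uint8_t *old_data, const uint8_t *new_data,
                   uint8_t *parity) {
    for (size_t i = 0; i < BLOCK_SIZE; i++)
        parity[i] ^= old_data[i] ^ new_data[i];
}
```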
31. RAID 5
- Distributed block-interleaved parity
- Like RAID 4, but the parity blocks are distributed across all disks
- A read accesses only the data disk where the data is
- A write must update the data block and its parity block
- But now all disks share the parity update load
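A minimal sketch of one way to rotate the parity block across disks; real implementations differ in layout details, and the 5-disk array here is assumed:

```c
#include <stdio.h>

#define NDISKS 5   /* assumed array size */

/* For stripe s, place the parity block on a different disk each stripe
 * so that parity-update traffic is spread over all disks. */
int raid5_parity_disk(long stripe) {
    return (int)(stripe % NDISKS);   /* rotate: stripe 0 -> disk 0, etc. */
}

int main(void) {
    for (long s = 0; s < 5; s++)
        printf("stripe %ld: parity on disk %d\n", s, raid5_parity_disk(s));
    return 0;
}
```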
32. RAID 6
- Two different (P and Q) check blocks
- Each protection group has
- N-2 data blocks
- One parity block
- Another check block (not the same as parity)
- Can recover when two disks are lost
- Think of P as the sum and Q as the product of the D blocks
- If two blocks are missing, solve the two equations to get both back (sketched below)
- More space overhead (only N-2 of N blocks are data)
- More write overhead (must update both P and Q)
- P and Q are still distributed like in RAID 5
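Following the slide's analogy (real RAID 6 codes compute Q as a weighted sum with distinct Galois-field coefficients rather than a literal product), losing two data blocks, say D_1 and D_2, leaves two independent equations in two unknowns:

```latex
P = D_1 + D_2 + \cdots + D_{N-2}, \qquad
Q = g_1 D_1 + g_2 D_2 + \cdots + g_{N-2} D_{N-2}
```

With every other D_i known and g_1 differing from g_2, these two equations can be solved for both missing blocks.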