Title: ICS 212 Winter 2002 Prof. R. Gupta Communication Channels
1 ICS 212 Winter 2002, Prof. R. Gupta: Communication Channels
- Rajesh Gupta
- Information and Computer Science
- University of California, Irvine.
2 Outline
- Communication Channel Issues
- Signaling and Synchronization
- Protocols
3 Communication Channels
- Basic parameters
- Simplex versus duplex
- bandwidth and Signal/Noise Ratio
- Synchronous versus asynchronous
- Basic impact on performance
- Basic resources
- channel capacity
- synchronization losses
4 Digital Signaling
- Can be Baseband or Modulated
- Baseband Signaling
- Voltage references
- ground current sinking
- noise rejection (crosstalk, simultaneous switching, etc.)
- often differential signaling is used
- avoid ground biasing issues
- pairs in similar environment used to reject common mode noise
- Bandwidth limitations and digital pulses
- channel bandwidth available
- directly related to pulse shaping
- Shannon's channel capacity limit
- tradeoff between BW and SNR
- can be improved through source coding
5 Channel Capacity
- Is the maximum rate at which data can be transmitted over a given channel
- There are four interrelated concepts
- data rate (in bps)
- bandwidth of the transmitted signal (in Hz)
- noise (average level of noise over the channel/path)
- error rate
- Problem: how to maximize data rate for a given BW, noise, and tolerable error rate?
- First consider a channel that is noise free
- limitation on data rate is the available BW for the signal
- Nyquist: if the signal transmission rate is 2B, then a signal with frequencies no greater than B is sufficient to carry the signal rate; or, given a BW of B, the highest signal rate that can be carried is 2B
- (limited by intersymbol interference, ISI)
- For binary signals (with two voltage levels)
- B Hz supports 2B bps. For M signaling levels, C = 2B log_2 M
- doubling BW doubles data rate (see the sketch below)
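A minimal Python sketch of the Nyquist relation above (the function name and example values are illustrative, not from the lecture):

    import math

    def nyquist_capacity(bandwidth_hz, levels):
        # Noise-free limit: C = 2B log2(M) bits per second
        return 2 * bandwidth_hz * math.log2(levels)

    print(nyquist_capacity(1e6, 2))   # binary signaling over 1 MHz: 2 Mbps
    print(nyquist_capacity(1e6, 16))  # 16 levels over the same 1 MHz: 8 Mbps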
6 Channel Capacity
- Let us now consider the effect of noise
- noise causes signal degradation, bit errors
- for a given noise level, greater signal strength improves data reception (decreases BER)
- SNR measured at the RX (in dB) = 10 log_10 (signal power / noise power)
- SNR sets an upper bound on the achievable data rate
- Shannon: maximum channel capacity in bps
- C = B log_2 (1 + SNR)
- This is error-free capacity; it assumes mostly thermal noise. Impulse noise makes it worse, as do distortion, fading, etc.
- it is achievable through suitable coding.
- E_b/N_0 = signal energy per bit / noise power density per Hz
- E_b = S × T_b (signal power × time per bit); data rate R = 1/T_b
- E_b/N_0 = (S/R)/N_0 = S/(kTR)
- In dB: E_b/N_0 = S (dBW) - 10 log R + 228.6 - 10 log T
- as the bit rate R increases, transmitted power must increase to maintain E_b/N_0 (see the sketch below)
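The two formulas above can be sketched in Python as follows (helper names are illustrative; the 228.6 dBW term is -10 log_10 of Boltzmann's constant):

    import math

    K_BOLTZMANN = 1.38e-23  # J/K

    def shannon_capacity(bandwidth_hz, snr_linear):
        # Error-free capacity bound: C = B log2(1 + SNR)
        return bandwidth_hz * math.log2(1 + snr_linear)

    def eb_n0_db(signal_dbw, bit_rate_bps, temperature_k):
        # Eb/N0 (dB) with thermal noise: S(dBW) - 10 log R + 228.6 - 10 log T
        return (signal_dbw - 10 * math.log10(bit_rate_bps)
                - 10 * math.log10(K_BOLTZMANN) - 10 * math.log10(temperature_k))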
7 Example
- Assume spectrum from 3 MHz to 4 MHz and SNR of 24 dB
- B = 4 - 3 = 1 MHz
- SNR = 24 dB, i.e., 10^2.4 ≈ 251
- Error-free capacity, C = 10^6 × log_2 (1 + 251) ≈ 8 Mbps
- Now, based on Nyquist, how many signaling levels are required?
- C = 2B log_2 M
- log_2 M = 4
- M = 16 (see the check below)
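The arithmetic of this example can be checked with a few lines of Python (variable names are illustrative):

    import math

    B = 1e6                      # 4 MHz - 3 MHz
    snr = 10 ** (24 / 10)        # 24 dB, roughly 251
    C = B * math.log2(1 + snr)   # Shannon: about 8 Mbps
    M = 2 ** (C / (2 * B))       # Nyquist: signaling levels needed, about 16
    print(f"C = {C/1e6:.2f} Mbps, M = {M:.0f}")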
8 Example
- For constant signal and noise strength, an increase in data rate increases the error rate.
- For binary PSK, E_b/N_0 = 8.4 dB is needed for a BER of 10^-4.
- At room temperature (290 K), for a data rate of 2400 bps, what is the received signal level required?
- 8.4 = S (dBW) - 10 log 2400 + 228.6 - 10 log 290
- or S = -161.8 dBW (see the check below)
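A quick Python check of this calculation, rearranging the dB form of E_b/N_0 from the previous slide (names are illustrative):

    import math

    eb_n0 = 8.4   # dB, needed for BER of 1e-4 with binary PSK
    R = 2400      # bps
    T = 290       # K
    S = eb_n0 + 10 * math.log10(R) - 228.6 + 10 * math.log10(T)
    print(f"S = {S:.1f} dBW")   # about -161.8 dBW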
9 Design tasks for a bus designer
- Goal: maximize communication performance
- Design flexibility
- choose a good signaling scheme
- hardware and electrical issues: TTL, CMOS, ECL, GTL, etc.
- maximize capacity
- estimate error rate and optimize speed
- manage channel usage efficiently
- maximize usable capacity by minimizing overheads
- use opportunities to send useful data even during
gaps
10 Example of Signaling
(Figure: Data and Clock signal waveforms)
- Differential signals, twisted pairs
- Parallel data paths
- Asynchronous signaling
- Limitations
- switching rate
- noise, crosstalk, synchronization (rate, phase)
- ohmic losses
11 Encoding Data
- Basic encoding: transition rate = 2X data rate
- each clock has two transitions for a single data value
- Many coding schemes reduce the number of transitions to be equal to the data rate, for example:
- NRZ (duration of each bit = 1/f)
- NRZI (signal is toggled only when a logic 1 is to be transmitted; see the sketch below)
- 4B/5B encoding translates 4 bits of data into a 5-bit symbol
- for the 16 possible permutations there are 32 symbols
- carefully choose symbols to guarantee at least one signal transition within any 3-bit period.
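A minimal NRZI encoder sketch in Python (names are illustrative); it also shows why a long run of 0s gives no transitions, which is what 4B/5B symbol selection guards against:

    def nrzi_encode(bits, level=0):
        # NRZI: toggle the line level for each logic 1, hold it for a 0
        out = []
        for b in bits:
            if b == 1:
                level ^= 1
            out.append(level)
        return out

    print(nrzi_encode([1, 0, 1, 1, 0, 0, 0, 1]))  # runs of 0s produce no transitions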
12 4B/5B
Often the clock can be recovered from the data using a VCO
13 Channel Management Issues
- Data delivery
- how to get data there?
- Flow control
- how to match rates, ensure no overruns?
- Reliability
- ensure data loss is nil or minimum
- Choices to be made to optimize performance
14 Example: A basic delivery protocol (DP)
- Elements
- write and acknowledge
- States: initial, write, ack, got_ack, nack, reset
- 4 transitions per unit exchange
- 2 round trip delays
(Figure: timing of Data valid, Data, and Ack signals)
15 Let us improve a bit: ODP
- Optimistic Delivery Protocol
- Pipelined writes and acknowledges
- Notion of a global clock rate, data valid signal
- Clear to send (CTS)
- States: initial, write, write, ..., not-CTS, CTS, write, write, ...
- One transition per unit exchange
- Round-trip delay only on not-CTS
16 Pipelined Delivery Protocol
- Sends and acks are now decoupled
- Send until the surplus of acks is used up
- Needs buffering
- Block until an ack is received (see the sketch below)
- SDLC, HDLC-like
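A rough Python sketch of this idea, assuming a credit-style sender (class and method names are hypothetical, not taken from SDLC/HDLC):

    from collections import deque

    class PipelinedSender:
        def __init__(self, buffer_slots):
            self.credits = buffer_slots   # surplus of acks / free receiver buffers
            self.in_flight = deque()

        def try_send(self, item):
            if self.credits == 0:
                return False              # block until an ack arrives
            self.credits -= 1
            self.in_flight.append(item)
            return True

        def on_ack(self):
            self.in_flight.popleft()
            self.credits += 1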
17 Comparing Performance
- Let
- L = channel delay, R = signaling rate, B = available buffering
- Assuming destination is always ready
- DP
- throughput = 1/(4L)
- ODP
- B > 2LR => can drive channel at full rate, R
- B < 2LR => need to restrain sender to assure no data loss
- throughput = R × (B/2LR) = B/(2L)
- PDP: same as ODP (see the sketch below)
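A small Python model of this comparison, reading the slide's 1/4L as 1/(4L) and the buffer-limited ODP throughput as B/(2L) (an assumption on my part):

    def dp_throughput(L):
        # one exchange per two round trips
        return 1 / (4 * L)

    def odp_throughput(L, R, B):
        # full rate if buffering covers the bandwidth-delay product,
        # otherwise limited to B items per round trip (2L)
        return R if B > 2 * L * R else B / (2 * L)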
18 Other Issues
- Synchronization
- Parity, error checking
- Source-level, channel encoding
- These usually increase latency and buffering requirements
- Cause overheads (comparable to time of flight).
19 Link Level Buffering
- Buffering to improve channel throughput
- May be costly if the objective is to keep the channel occupied
- Else by
- rate matching
- data loss
- higher level buffer (as in long haul links)
- Higher level buffering / routing issues
- basic router/switch structure
- link buffers and central packet memory
- traditional store and forward routing
- Cut-through
- Wormhole
20 Routing
- Store and Forward
- Latency = packet size × distance
- Buffer requirements are some multiple of packet size
- Deadlock concerns and buffer allocation
- Cut-through
- start sending a packet before it has completely arrived
- latency reduced to distance + packet size
- each node guarantees to absorb the entire packet, so must have large buffers
- Wormhole
- fast, used in MPPs, clusters
- modest link delays
- latency reduced to distance + packet size
- reduced buffer requirements
- better for short channels (channels shorter than packets)
- allows higher clock rates (see the sketch below)
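A rough Python sketch of the latency expressions above (the per-hop delay and packet-time parameters are illustrative):

    def store_and_forward_latency(packet_time, hops):
        # whole packet is received at each node before being forwarded
        return packet_time * hops

    def cut_through_latency(packet_time, hop_delay, hops):
        # header is forwarded as soon as it arrives: roughly distance + packet size
        return hop_delay * hops + packet_time

Wormhole routing has a similar latency expression but needs only small per-node buffers.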
21 ICS 212 Winter 2002, Prof. R. Gupta: Bus Design
- Rajesh Gupta
- Information and Computer Science
- University of California, Irvine.
22 Outline
- What is a bus?
- Example
- Why buses?
- Typical organization of a bus
- Types of buses
- Example systems
- Synchronous versus asynchronous buses
- Bus transactions and protocols
- Summary of common buses
23 What is a bus?
- A bus defines a shared communication link to connect multiple sub-systems
- a tool for composition of systems
- serves as a standard for system composition
- Consists of
- a bunch of wires
- standard defined from mechanical specifications to electrical, timing, and signaling
- signaling and transaction protocol specifications
24 Example: Embedded Pentium
25 Why buses?
- New devices can be added easily
- Defines a standard for composition of components from diverse sources
- Inherently a shared resource that can be efficiently utilized
- Aggregation of communication creates bandwidth bottlenecks
- Diversity of components affects maximum bus speed
- widely varying latencies and data transfer rates of devices affect the overall performance of a bus
26 Typical Bus Organization
- Wires consist of a set of data and control lines
- data lines carry data and addresses (complex commands)
- control lines carry signaling info (req, ack) and indicate the type of information on the data bus
- Control lines are used to implement a bus transaction
- A bus transaction consists of two parts
- request -- issue command
- action -- actual data transfer
- In a master-slave organization
- master starts a bus transaction by issuing commands
- slave responds by sending/receiving data to/from the master.
27 Types of Buses
- Processor-Memory Bus
- high speed and ultra short (MB-GB/sec, mm length)
- designed to match the memory system structure for maximum throughput between CPU and memory
- often proprietary and processor specific
- IO Bus
- lengthier (cm, MB/sec) and slower
- match performance needs of a wide range of IO devices
- connects to processor-memory or other buses through bridges
- Backplane bus
- connects processor, memory and IO devices
- usually provides a connection standard within a chassis
- Often slower and proprietary.
28 Single Bus System
- Single bus used for both processor-memory and IO-memory communications
- e.g., IBM PC-AT
- Simple and low cost
- Slow and can become a major bottleneck.
29 Two-Bus System
(Figure: CPU and memory on a processor-memory bus, with bridges connecting separate IO buses and IO devices)
- Reduces load on Processor-Memory bus
- Example
- Apple Mac-II
- NuBus for processor, memory, and selected IO devices
- SCSI bus for the rest of the IO devices
- Similarly three-bus systems use a backplane bus
to bridge to CPU-memory bus. The IO buses tap
into the backplane bus.
30 Synchronous versus Asynchronous Buses
- Synchronous Bus
- explicit clock in the control wires
- protocol defined with respect to the clock
- Advantages
- relatively little interface logic
- typically fast
- Disadvantages
- all devices tied to the same clock line
- bus length limited to control clock skew.
- Asynchronous Bus
- no explicit clock
- requires handshaking protocol instead
- can accommodate a wide range of devices.
- So what are bus transactions and protocols?
31 Bus Protocols
- A communication protocol is a specified sequence
of events (transitions) within given timing
requirements that are needed to successfully
transfer information over a bus.
- Protocols are inherently tied to the nature of the bus: synchronous or asynchronous
- Bus transactions are of three types
- 1 Arbitration
- defines access to the Bus (a shared resource)
- 2 Request
- 3 Action
- request-action pair defines the data transfer
mechanism.
32 Bus Arbitration
- Defines how a bus is reserved for use by a given device
- Simplest system: a master-slave arrangement
- only one bus master that initiates and controls all bus requests
- a slave (device) responds to specific read, write requests
- example
- processor as the only bus master
- all bus requests are handled by the processor,
- Multiple bus masters: bus requests are preceded by a bus arbitration scheme
- bus usage is prioritized among multiple masters
- each of which asserts a bus request but cannot use the bus until the request is granted
- fairness of use, i.e., every requesting device is eventually granted bus use
33 Bus Arbitration Schemes
- Daisy Chain Arbitration
- devices are connected in a daisy-chain with the highest-priority device closest to the bus arbiter
- simple but cannot ensure fairness since a lower-priority device may be locked out indefinitely
- daisy-chain granting limits bus speed.
- Centralized Parallel Arbitration
- each device connected directly to the arbiter using grant and request lines (see the sketch after this slide)
- commonly used in processor-memory and high-speed IO buses
- easy protocol implementation in case of synchronous buses
- all devices operate synchronously and source/sink data at the same rate.
(Figure: centralized arbiter with per-device grant lines and wired-OR request and release lines)
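A minimal sketch of a centralized parallel arbiter with fixed priority (names are illustrative; a real arbiter would usually add a fairness or rotation scheme):

    def centralized_arbiter(request_lines):
        # grant the highest-priority (lowest-numbered) requesting device
        for device, requesting in enumerate(request_lines):
            if requesting:
                return device   # assert this device's grant line
        return None             # no requests pending

    print(centralized_arbiter([0, 1, 0, 1]))  # grants device 1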
34 Bus Arbitration Schemes (contd.)
- Distributed Arbitration By Self-Detection
- each device requesting a bus identifies itself by placing a code on the bus
- Distributed Arbitration By Collision-Detection
- each device writes and reads its code
- a collision is detected if the read code differs from the written code
- use a back-off strategy in case of collisions.
35 A Synchronous Protocol
(Figure: synchronous bus timing showing Bus Request, Bus Grant, Control/Addr, and Data lines carrying Data1, Data2, Data3)
- Typically the slave device (e.g., memory) may
take time to respond and may operate at a
different data rate than the bus clock.
36 Synchronous Protocol (contd.)
(Figure: the same timing with a WAIT signal inserted before the Data1-Data3 transfers)
- The device uses a wait control signal to indicate
when it is ready for data transfer.
37 Increasing Bus Throughput
- More wires
- de-multiplex address and data lines
- increase width of data lines to transfer multiple words in fewer clock cycles
- typically, L1-CPU buses are very wide.
- Perform block transfers
- bus transfers multiple words in back-to-back cycles
- only one (starting) address sent
- bus is not released until the block transfer is complete
- increased complexity of the bus request/grant protocols
- decreased response time to bus requests.
38 Increasing Throughput (contd.)
- On multi-master buses
- overlap arbitration for the next transaction during the current transaction
- a master can hold onto the bus for multiple transactions (bus parking) as long as no other master makes a request
- split-phase or packet-switched bus
- completely separate (decoupled) address and data phases
- use tags to match address and data transfers
39 Multi-master Memory Buses
40 IO Buses
- Used for high-speed graphics, interface to fast networks
- Designed to support high-speed DMA transfers
- data transfer bursts at full rate
- Limited number of devices to ensure high throughput.
- Peripheral Component Interconnect or PCI bus (synchronous bus)
- centralized parallel arbitration (overlapped)
- transfers in bursts (of unlimited length)
- Protocol (simplified)
- address phase starts by asserting FRAME
- next cycle: initiator asserts command and address
- data transfers when both IRDY (asserted by master) and TRDY (asserted by target) are asserted (see the sketch below)
- FRAME removed when the master intends to complete only one more data transfer.
- Bus parking, delayed (pending and split-phase) transactions.
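A toy Python model of the simplified handshake above: data moves only in clock cycles where both IRDY and TRDY are asserted (this ignores FRAME, the address phase, and arbitration):

    def pci_burst_transfers(irdy, trdy):
        # count cycles in which both master (IRDY) and target (TRDY) are ready
        return sum(1 for i, t in zip(irdy, trdy) if i and t)

    print(pci_burst_transfers([1, 1, 1, 1], [1, 0, 1, 1]))  # one target wait state: 3 transfers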
41 IO and Backplane Buses