Title: Equalization
Department of Electrical Engineering, Wang Jin
Fig. Digital communication system using an
adaptive equaliser at the receiver.
Equalization
- Equalization compensates for or mitigates inter-symbol interference (ISI) created by multipath propagation in time-dispersive channels (frequency-selective fading channels).
- The equalizer must be adaptive, since channels are time varying.
Zero forcing equalizer
- Designed from a frequency-domain viewpoint.
- The equalizer must compensate for the channel distortion.
- An inverse channel filter completely eliminates the ISI caused by the channel; hence it is called the Zero Forcing (ZF) equalizer.
Fig. Pulses having a raised cosine spectrum
- Example
- Consider a two-path channel with impulse response h(t).
- The channel transfer function is the Fourier transform of h(t), and the inverse channel filter has the reciprocal transfer function.
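The slide's equations are not reproduced in this text, so as a hedged sketch assume the standard two-path response h(t) = delta(t) + a*delta(t - T); the values of a and T below are arbitrary assumptions, not from the slide:

```python
import numpy as np

# Assumed two-path channel: direct path plus an echo of gain a after delay T.
a, T = 0.5, 1.0                         # hypothetical echo gain and delay
f = np.linspace(0.0, 1.0, 5)            # normalized frequencies (cycles per T)

H = 1 + a * np.exp(-2j * np.pi * f * T)  # channel transfer function
H_inv = 1.0 / H                          # inverse channel (ZF) filter

# The cascade of channel and inverse filter is flat: no ISI remains.
print(np.allclose(H * H_inv, 1.0))       # True
```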
- Since DSP is generally adopted for automatic equalizers, it is convenient to use a discrete-time (sampled) representation of the signal.
- The received signal is then the transmitted symbol sequence convolved with the sampled channel response; for simplicity, assume a short two-tap response.
- Denoting a T-second delay element by z^{-1}, the channel response can be written as a polynomial in z^{-1}.
- The transfer function of the inverse channel filter is the reciprocal of this polynomial.
- It can be realized by a circuit known as the linear transversal filter.
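The tapped-delay-line structure can be sketched as follows; the tap values used in the example call are hypothetical, not taken from the slide:

```python
import numpy as np

# Minimal sketch of a linear transversal (tapped-delay-line) filter:
# the output is a weighted sum of the current and past T-spaced samples.
def transversal_filter(x, taps):
    """y[k] = sum_n taps[n] * x[k - n], with x[k] = 0 for k < 0."""
    y = np.zeros(len(x))
    for k in range(len(x)):
        for n, w in enumerate(taps):
            if k - n >= 0:
                y[k] += w * x[k - n]
    return y

# Example with hypothetical taps: the 2-tap filter 1 - 0.5 z^{-1}
print(transversal_filter(np.array([1.0, 0.0, 0.0]), [1.0, -0.5]))  # [ 1.  -0.5  0. ]
```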
- The exact ZF equalizer is of infinite length, but it is usually implemented as a truncated (finite-length) approximation.
- For the two-path channel above, a 2-tap version of the ZF equalizer has coefficients given by the first two terms of the inverse filter's series expansion.
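As a hedged sketch (the slide's coefficient values are not shown): for an assumed two-path channel 1 + a z^{-1}, the exact ZF equalizer 1/(1 + a z^{-1}) expands as 1 - a z^{-1} + a^2 z^{-2} - ..., so the 2-tap truncation is [1, -a], leaving a small residual ISI term:

```python
import numpy as np

a = 0.5                               # hypothetical echo gain
channel = np.array([1.0, a])          # assumed two-path channel 1 + a z^{-1}
zf_2tap = np.array([1.0, -a])         # truncated (2-tap) ZF approximation

# Cascade of channel and truncated equalizer: ideal would be [1, 0, 0].
cascade = np.convolve(channel, zf_2tap)
print(cascade)                        # [1.  0.  -0.25]: residual ISI of -a^2
```

The residual term shrinks as a^2, which is why short truncations work well for mild echoes.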
Modeling of ISI channels
- The complex envelope of any modulated signal can be expressed in terms of the amplitude shaping pulse ha(t).
- In general, ASK, PSK, and QAM are included, but most FSK waveforms are not.
- The received complex envelope is the transmitted envelope convolved with the channel impulse response.
- The maximum likelihood receiver has an impulse response matched to f(t), the overall received pulse.
- The receiver output consists of the desired term, residual ISI, and the output noise nb(t).
Least Mean Square Equalizers
Fig. A basic equaliser during training
- The design criterion is minimization of the mean square error (MSE), i.e., MMSE.
- The equalizer input passes through h(t), the impulse response of the tandem combination of the transmit filter, the channel, and the receiver filter.
- In the absence of noise and ISI, the sample taken at t = kT would equal the transmitted symbol.
- The error at t = kT is therefore due to noise and ISI.
- The MSE is the expectation of the squared error.
- To minimize the MSE, we require the gradient with respect to each tap weight to be zero.
- The optimum tap coefficients are obtained as W = R^{-1} P.
- Solving this requires knowledge of the x_k, which are the transmitted pilot data.
- A given sequence of x_k, called a test signal, reference signal, or training signal, is transmitted (periodically) prior to the information signal.
- By detecting the training sequence, the adaptive algorithm in the receiver is able to compute and update the optimum tap weights until the next training sequence is sent.
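The training-based computation W = R^{-1} P can be sketched numerically; the channel taps and noise level below are hypothetical assumptions, not from the slide:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=5000)           # known training symbols
y = np.convolve(x, [1.0, 0.5])[:len(x)]          # assumed channel 1 + 0.5 z^{-1}
y += 0.1 * rng.standard_normal(len(x))           # additive noise

N = 2                                            # number of equalizer taps
Y = np.column_stack([np.roll(y, n) for n in range(N)])  # current + delayed samples
Y[:1, 1:] = 0.0                                  # zero the wrapped-around sample
R = Y.T @ Y / len(x)                             # input correlation matrix
P = Y.T @ x / len(x)                             # cross-correlation with training data
W = np.linalg.solve(R, P)                        # optimum taps, W = R^{-1} P
print(W)                                         # roughly [0.94, -0.37] for these assumptions
```

Note the taps differ slightly from the ZF solution [1, -0.5]: the MMSE design backs off where noise would otherwise be amplified.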
- Example
- Determine the tap coefficients of a 2-tap MMSE equalizer for the given channel and noise statistics.
Mean Square Error (MSE) for optimum weights
- The optimum weight vector was obtained as W = R^{-1} P.
- Substituting this into the MSE formula above gives the minimum MSE.
- Now, apply three matrix algebra rules: two that hold for any square matrix and one for any matrix product.
MSE for zero forcing equalizers
- Recall the tap coefficients for the ZF equalizer.
- Assuming the same channel and noise as for the MMSE equalizer, the resulting MSE can be compared with the MSE obtained for the MMSE design.
- The ZF equalizer is an inverse filter, so it amplifies noise at frequencies where the channel transfer function has high attenuation.
- The LMS algorithm tends to find optimum tap coefficients that compromise between the effects of ISI and the increase in noise power, while the ZF equalizer design does not take noise into account.
Diversity Techniques
- Diversity mitigates fading effects by using multiple received signals that have experienced different fading conditions.
- Space diversity: with multiple antennas.
- Polarization diversity: using differently polarized waves.
- Frequency diversity: with multiple frequencies.
- Time diversity: by transmitting the same signal at different times.
- Angle diversity: using directive antennas aimed in different directions.
- Signal combining methods:
- Maximal ratio combining.
- Equal gain combining.
- Selection (switching) combining.
- Space diversity is classified into micro-diversity and macro-diversity.
- Micro-diversity: antennas are spaced closely, on the order of a wavelength. Effective for fast fading, where the signal fades over distances on the order of a wavelength.
- Macro (site) diversity: antennas are spaced widely enough to cope with the topographical conditions (e.g. buildings, roads, terrain). Effective for shadowing, where the signal fades due to topographical obstructions.
PDF of SNR for diversity systems
- Consider an M-branch space diversity system.
- The signal received at each branch has a Rayleigh distribution.
- All branch signals are independent of one another.
- Assume the same mean signal and noise power, and hence the same mean SNR, for all branches.
- The instantaneous SNR of each branch is then exponentially distributed.
- The probability that the instantaneous branch SNR takes a value less than some threshold x is 1 - exp(-x/Γ), where Γ is the mean branch SNR.
Selection Diversity
- The branch selection unit selects the branch that has the largest SNR.
- The event in which the selector output SNR is less than some value x is exactly the event in which every branch SNR is simultaneously below x.
- Since independent fading is assumed in each of the M branches, the probability of this event is the product of the per-branch probabilities, [1 - exp(-x/Γ)]^M.
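The product-of-probabilities result can be sanity-checked against a Monte-Carlo simulation of independent Rayleigh-faded branches; the branch count, mean SNR, and threshold below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
Gamma, M, x = 1.0, 4, 0.5                 # mean branch SNR, branches, threshold

# Analytic outage probability for selection diversity.
analytic = (1 - np.exp(-x / Gamma)) ** M

# Simulate: Rayleigh fading gives exponentially distributed branch SNRs;
# the selector output is the best (largest-SNR) branch.
gammas = rng.exponential(Gamma, size=(200000, M))
simulated = np.mean(gammas.max(axis=1) < x)

print(analytic, simulated)                # both roughly 0.024
```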
Maximal Ratio Combining
- gk(t)u(t) is the complex envelope of the signal in the k-th branch.
- The complex equivalent low-pass signal u(t) containing the information is common to all branches.
- Assume u(t) is normalized to unit mean square envelope, such that E[|u(t)|^2] = 1.
- Assume the time variation of gk(t) is much slower than that of u(t).
- Let nk(t) be the complex envelope of the additive Gaussian noise in the k-th receiver (branch), with noise power Nk; usually all Nk are equal.
- Define the SNR of the k-th branch as γk = |gk(t)|^2 / Nk.
- The combiner output is the weighted sum of the branch signals, where the wk are the complex combining weight factors.
- These factors are changed from instant to instant as the branch signals change over the short-term fading.
- How should the wk be chosen to achieve maximum combiner output SNR at each instant?
- Assuming the nk(t) are mutually independent (uncorrelated), the total output noise power is the weighted sum of the branch noise powers.
- The instantaneous output SNR is the ratio of the combined signal power to the combined noise power.
- Apply the Schwarz inequality for complex-valued numbers.
- Equality holds if, for all k, wk = K gk*(t)/Nk, where K is an arbitrary complex constant.
- The bound is achieved with equality if and only if wk = K gk*(t)/Nk for each k.
- The optimum weight for each branch has magnitude proportional to the signal magnitude and inversely proportional to the branch noise power level, and has a phase canceling out the signal (channel) phase.
- This phase alignment allows coherent addition of the branch signals (co-phasing).
- Each branch SNR has a chi-square distribution (with two degrees of freedom).
- The combiner output SNR is distributed as chi-square with 2M degrees of freedom.
- The average output SNR is simply the sum of the individual mean branch SNRs, i.e., M times the mean branch SNR Γ.
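A quick numerical check of the optimum-weight rule wk = gk*/Nk: with these weights the combiner output SNR equals the sum of the branch SNRs. The branch gains and noise powers below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4
g = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # branch channel gains
N = np.full(M, 0.5)                                        # branch noise powers

w = np.conj(g) / N                     # MRC weights: co-phase and SNR-weight
branch_snr = np.abs(g) ** 2 / N        # instantaneous branch SNRs

out_signal = np.abs(np.sum(w * g)) ** 2        # coherently combined signal power
out_noise = np.sum(np.abs(w) ** 2 * N)         # combined (uncorrelated) noise power
out_snr = out_signal / out_noise

print(np.isclose(out_snr, branch_snr.sum()))   # True: output SNR = sum of branch SNRs
```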
Title: Convolutional Codes
Department of Electrical Engineering, Wang Jin
Overview
- Background
- Definition
- Speciality
- An Example
- State Diagram
- Code Trellis
- Transfer Function
- Summary
- Assignment
Background
- A convolutional code is a kind of code used in digital communication systems.
- It is used over additive white Gaussian noise channels.
- It improves the performance of radio and satellite communication systems.
- It comprises two parts: encoding and decoding.
Block Codes vs. Convolutional Codes
- Block codes take k input bits and produce n output bits, where k and n are large.
- There is no data dependency between blocks.
- Useful for data communications.
- Convolutional codes take a small number of input bits and produce a small number of output bits each time period.
- Data passes through convolutional codes in a continuous stream.
- Useful for low-latency communication.
Definition
- A type of error-correction code in which
- each k-bit information symbol (each k-bit string) to be encoded is transformed into an n-bit symbol, where n > k
- the transformation is a function of the last M information symbols, where M is the constraint length of the code
Speciality
- k bits are input, n bits are output.
- k and n are very small (usually k is 1 to 3 and n is 2 to 6). Frequently, we will see that k = 1.
- The output depends not only on the current set of k input bits, but also on past inputs.
- The constraint length M is defined as the number of shifts over which a single message bit can influence the encoder output.
An Example
- A simple rate k/n = 1/2 convolutional code encoder (M = 3).
- Each box represents one element of a serial shift register.
An Example (cont'd)
- The content of the shift register is shifted from left to right.
- The plus sign represents modulo-2 (XOR) addition.
- The encoder outputs are multiplexed into serial binary digits.
- For every binary digit that enters the encoder, two code digits are output.
- A generator sequence specifies the connections of a modulo-2 (XOR) adder to the encoder shift register.
- In this example, there are two generator sequences: g1 = (1 1 1) and g2 = (1 0 1).
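The encoder above can be sketched directly: a two-element shift register plus the two generator taps g1 = (1 1 1) and g2 = (1 0 1) from the slide.

```python
# Rate-1/2, M = 3 convolutional encoder with generators g1 = 111, g2 = 101.
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]                       # two memory elements, initially zero
    out = []
    for b in bits:
        window = [b] + state             # current input plus the past two inputs
        v1 = sum(gi * wi for gi, wi in zip(g1, window)) % 2  # first adder output
        v2 = sum(gi * wi for gi, wi in zip(g2, window)) % 2  # second adder output
        out += [v1, v2]                  # two code digits per input digit
        state = [b, state[0]]            # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Input 1011 thus encodes to 11 10 00 01, two output digits per input digit as stated above.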
An Example (cont'd)
- The encoder is stepped at t = 0, 1, 2, 3.
- By t = 3, the initial register contents (x2, x1, x0) have been completely shifted out.
To Determine the Output Codeword
- There are essentially two ways:
- the state diagram approach
- the transform-domain approach
- We concentrate only on the state diagram approach.
- The contents of the shift register make up the state of the code.
- The most recent input is the most significant bit of the state.
- The oldest input is the least significant bit of the state.
- (This convention is sometimes reversed.)
- Arcs connecting states represent allowable transitions.
- Arcs are labeled with the output bits transmitted during the transition.
To Determine the Output Codeword: State Diagram
- Rate k/n = 1/2 convolutional code encoder (M = 3).
- The state is defined by the most recent (M - 1) message bits moved into the encoder.
State Diagram (cont'd)
- There are four states, 00, 01, 10, 11, corresponding to the (M - 1) state bits.
- Generally, we assume the encoder starts in the all-zero (00) state.
State Diagram (cont'd)
- The easiest way to determine the state diagram is to first determine the state table, as shown below.
State Diagram (cont'd)
- The label 1/01 means, for example, that the input binary digit to the encoder was 1 and the corresponding codeword output is 01.
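The state table for this encoder can be enumerated mechanically, following the slide's convention that the most recent input is the most significant state bit:

```python
# State table for the rate-1/2 encoder with g1 = (1 1 1), g2 = (1 0 1).
# State is (s0, s1): s0 = most recent input, s1 = oldest input.
def state_table():
    rows = []
    for s0 in (0, 1):
        for s1 in (0, 1):
            for b in (0, 1):
                v1 = (b + s0 + s1) % 2   # adder connected per g1 = (1 1 1)
                v2 = (b + s1) % 2        # adder connected per g2 = (1 0 1)
                rows.append(((s0, s1), b, (b, s0), (v1, v2)))
    return rows

for state, b, nxt, out in state_table():
    print(f"state {state}, input {b} -> next {nxt}, output {out}")
```

Each of the four states has two outgoing transitions (input 0 or 1), giving the eight rows of the table.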
Trellis Representation of Convolutional Code
- The state diagram is unfolded as a function of time.
- Time is indicated by movement towards the right.
Code Trellis
- It is simply another way of drawing the state diagram.
- The code trellis for the rate k/n = 1/2, M = 3 convolutional code is shown below.
Encoding Example Using Trellis Diagram
- The trellis diagram, similar to the state diagram, also shows the evolution in time of the encoder state.
- Consider the r = 1/2, M = 3 convolutional code.
Distance Structure of a Convolutional Code
- The Hamming distance between any two distinct code sequences is the number of bits in which they differ.
- The minimum free Hamming distance of a convolutional code is the smallest Hamming distance separating any two distinct code sequences.
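The definition can be sketched in a few lines; the example compares the all-zero codeword against 11 10 11, which (for the g1 = 111, g2 = 101 code used throughout) is the minimum-weight sequence that leaves and returns to the zero state:

```python
# Hamming distance: the number of positions in which two sequences differ.
def hamming_distance(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Distance from the all-zero codeword to 11 10 11 is 5, matching the
# minimum free distance found later via the transfer function.
print(hamming_distance([0] * 6, [1, 1, 1, 0, 1, 1]))  # 5
```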
The Transfer Function
- This is also known as the generating function or the complete path enumerator.
- Consider the r = 1/2, M = 3 convolutional code example and redraw the state diagram.
The Transfer Function (cont'd)
- State a has been split into an initial state a0 and a final state a1.
- We are interested in the number of paths that diverge from the all-zero path at state a at some point in time and remerge with the all-zero path.
- Each branch transition is labeled with a term D^d L N^i, where d and i are integers:
- L marks the length of the branch (one symbol interval),
- i is the Hamming weight of the input (zero for a 0 input and one for a 1 input),
- d is the Hamming weight of the encoder output for that branch.
The Transfer Function (cont'd)
- Assuming a unity input, we can write a set of simultaneous state equations.
- By solving these equations, we obtain the transfer function.
- From the transfer function, there is one path at a Hamming distance of 5 from the all-zero path. This path is of length 3 branches and corresponds to a difference of one input information bit from the all-zero path. Other terms can be interpreted similarly. The minimum free distance is thus 5.
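The transfer-function result (free distance 5) can be cross-checked with a short shortest-path search over the code trellis: diverge from state 00 with an input 1, then find the minimum-output-weight route back to 00.

```python
import heapq

# One trellis step of the g1 = (1 1 1), g2 = (1 0 1) encoder:
# returns (next state, Hamming weight of the two output bits).
def step(state, b):
    s0, s1 = state
    return (b, s0), (b + s0 + s1) % 2 + (b + s1) % 2

# Dijkstra search for the minimum-weight path that leaves the all-zero
# state and remerges with it, i.e. the free distance.
def free_distance():
    start, w0 = step((0, 0), 1)          # force divergence with an input 1
    heap = [(w0, start)]
    best = {start: w0}
    while heap:
        w, state = heapq.heappop(heap)
        if state == (0, 0):              # remerged with the all-zero path
            return w
        for b in (0, 1):
            nxt, bw = step(state, b)
            if nxt not in best or w + bw < best[nxt]:
                best[nxt] = w + bw
                heapq.heappush(heap, (w + bw, nxt))
    return None

print(free_distance())  # 5
```

The search agrees with the transfer function: one length-3 path (outputs 11, 10, 11) at distance 5.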
Search for Good Codes
- We would like convolutional codes with large free distance.
- We must avoid catastrophic codes.
- Generators for the best convolutional codes are generally found via computer search.
- The search is constrained to codes with regular structure.
- The search is simplified because any permutation of identical generators is equivalent.
- The search is simplified because of linearity.
Best Rate 1/2 Codes
Best Rate 1/3 Codes
Best Rate 2/3 Codes
Summary
- What a convolutional code is.
- The transformation performed by a convolutional code.
- We can represent convolutional codes as generators, block diagrams, state diagrams, and trellis diagrams.
- Convolutional codes are useful for real-time applications because they can be continuously encoded and decoded.
Assignment
- Question: construct the state table and state diagram for the encoder below.
- The encoder maps binary information digits (input, k = 1) to code digits (output, n = 3).

THANK YOU