Title: 3F4 Error Control Coding
1. 3F4 Error Control Coding
2. Introduction
- Error Control Coding (ECC)
- Extra bits are added to the data at the transmitter (redundancy) to permit error detection or correction at the receiver
- This is done to prevent the output of erroneous bits despite noise and other imperfections in the channel
- The positions of the error control coding and decoding are shown in the transmission model
3. Transmission Model
[Diagram: transmission model, showing where error control coding (at the transmitter, after source coding) and error control decoding (at the receiver, before source decoding) sit in the chain.]
4. Error Models
- Binary Symmetric Memoryless Channel
- Assumes transmitted symbols are binary
- Errors affect 0s and 1s with equal probability (i.e., symmetric)
- Errors occur randomly and are independent from bit to bit (memoryless)
[Diagram: BSC transition diagram. Input 0 maps to output 0 with probability 1 - p and to output 1 with probability p; input 1 maps to output 1 with probability 1 - p and to output 0 with probability p.]
- p is the probability of bit error, i.e., the Bit Error Rate (BER) of the channel (simulated in the sketch below)
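As a quick illustration (a minimal sketch, not part of the original notes; the function name bsc is arbitrary), the memoryless BSC can be simulated by flipping each bit independently with probability p:

```python
import random

def bsc(bits, p, rng=random):
    """Pass a bit sequence through a binary symmetric channel:
    each bit is flipped independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

# Flip roughly 1% of a million bits and estimate the BER.
tx = [0, 1] * 500_000
rx = bsc(tx, p=0.01)
errors = sum(t != r for t, r in zip(tx, rx))
print(f"measured BER = {errors / len(tx):.4f}")   # close to 0.01
```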
5. Error Models
- Many other types exist
- Burst errors, i.e., contiguous bursts of bit errors
- e.g., output from a DFE (error propagation)
- common in radio channels
- Insertion, deletion and transposition errors
- We will consider mainly random errors
6. Error Control Techniques
- Error detection in a block of data
- Can then request a retransmission, known as automatic repeat request (ARQ), for sensitive data
- Appropriate for
- Low delay channels
- Channels with a return path
- Not appropriate for delay-sensitive data, e.g., real-time speech and data
7. Error Control Techniques
- Forward Error Correction (FEC)
- Coding designed so that errors can be corrected at the receiver
- Appropriate for delay-sensitive and one-way transmission (e.g., broadcast TV) of data
- Two main types, namely block codes and convolutional codes. We will only look at block codes
8. Block Codes
- We will consider only binary data
- Data is grouped into blocks of length k bits (dataword)
- Each dataword is coded into a block of length n bits (codeword), where in general n > k
- This is known as an (n,k) block code
9. Block Codes
- A vector notation is used for the datawords and codewords
- Dataword d = (d1 d2 ... dk)
- Codeword c = (c1 c2 ... cn)
- The redundancy introduced by the code is quantified by the code rate
- Code rate = k/n
- i.e., the higher the redundancy, the lower the code rate
10. Block Code - Example
- Dataword length k = 4
- Codeword length n = 7
- This is a (7,4) block code with code rate = 4/7
- For example, d = (1101), c = (1101001)
11. Error Control Process
[Diagram: source-coded data is chopped into datawords (k bits); each dataword is encoded into a codeword (n bits), e.g., 101101; the codeword plus possible errors (n bits) arrives at the decoder, which outputs a dataword (k bits) together with error flags.]
12. Error Control Process
- Decoder gives corrected data
- May also give error flags to
- Indicate reliability of decoded data
- Help with schemes employing multiple layers of error correction
13. Parity Codes
- Example of a simple block code: the Single Parity Check Code
- In this case, n = k + 1, i.e., the codeword is the dataword with one additional bit
- For even parity the additional bit is q = (d1 + d2 + ... + dk) mod 2
- For odd parity the additional bit is 1 - q
- That is, the additional bit ensures that there is an even or odd number of 1s in the codeword
14. Parity Codes - Example 1
- Even parity
- (i) d = (10110), so c = (101101)
- (ii) d = (11011), so c = (110110)
15. Parity Codes - Example 2
- Coding table for the (4,3) even parity code:

Dataword | Codeword
000 | 0000
001 | 0011
010 | 0101
011 | 0110
100 | 1001
101 | 1010
110 | 1100
111 | 1111
16. Parity Codes
- To decode
- Calculate the sum of the received bits in the block (mod 2)
- If the sum is 0 (for even parity) or 1 (for odd parity) then the dataword is the first k bits of the received codeword
- Otherwise an error has occurred
- The code can detect single errors
- But it cannot correct the error, since the error could be in any bit
- For example, if the received codeword is (100000), the transmitted codeword could have been (000000) or (110000), with the error being in the first or second place respectively
- Note the error could also lie in other positions, including the parity bit (see the encode/decode sketch below)
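A minimal sketch of even-parity encoding and decoding (not from the notes; function names are arbitrary):

```python
def parity_encode(dataword):
    """Append one bit so the codeword has an even number of 1s."""
    return dataword + [sum(dataword) % 2]

def parity_decode(codeword):
    """Return (dataword, error_flag): flag is 1 if the parity check fails."""
    error_flag = sum(codeword) % 2      # 0 for a valid even-parity word
    return codeword[:-1], error_flag

c = parity_encode([1, 0, 1, 1, 0])      # -> [1, 0, 1, 1, 0, 1]
print(parity_decode(c))                 # ([1, 0, 1, 1, 0], 0)
c[0] ^= 1                               # inject a single error
print(parity_decode(c))                 # flag is 1: error detected
```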
17. Parity Codes
- Known as a single error detecting (SED) code. Only useful if the probability of getting 2 errors is small, since a second error restores the parity and the double error goes undetected
- Used in serial communications
- Low overhead but not very powerful
- Decoder can be implemented efficiently using a tree of XOR gates
18. Hamming Distance
- Error control capability is determined by the Hamming distance
- The Hamming distance between two codewords is equal to the number of positions in which they differ, e.g.,
- 10011011
- 11010010 have a Hamming distance of 3
- Alternatively, it can be computed by adding the codewords (mod 2)
- 01001001 (now count up the ones; see the sketch below)
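A one-line sketch of the distance computation (assumed helper, not from the notes):

```python
def hamming_distance(c1, c2):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(c1, c2))

print(hamming_distance("10011011", "11010010"))   # 3
```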
19. Hamming Distance
- The Hamming distance of a code is equal to the minimum Hamming distance between two codewords
- If the Hamming distance is
- 1: no error control capability, i.e., a single error in a received codeword yields another valid codeword
- X X X X X X X (X is a valid codeword)
- Note that this representation is diagrammatic only
- In reality each codeword is surrounded by n codewords, that is, one for every bit that could be changed
20. Hamming Distance
- If the Hamming distance is
- 2: can detect single errors (SED), i.e., a single error will yield an invalid codeword
- X O X O X O (X is a valid codeword, O is not a valid codeword)
- See that 2 errors will yield a valid (but incorrect) codeword
21. Hamming Distance
- If the Hamming distance is
- 3: can correct single errors (SEC) or can detect double errors (DED)
- X O O X O O X (X is a valid codeword, O is not a valid codeword)
- See that 3 errors will yield a valid but incorrect codeword
22. Hamming Distance - Example
- A Hamming distance 3 code can perform double error detection (DED)
- Or it can instead perform single error correction (SEC)
[Diagram: X is a valid codeword, O is an invalid codeword]
23. Hamming Distance
- The maximum number of detectable errors is dmin - 1
- The maximum number of correctable errors is given by t = floor((dmin - 1)/2)
- where dmin is the minimum Hamming distance between 2 codewords and floor(x) means the largest integer less than or equal to x (see the sketch below)
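A small sketch of these two formulas (names assumed):

```python
def error_control_capability(d_min):
    """Detectable and correctable errors for minimum distance d_min."""
    detectable = d_min - 1
    correctable = (d_min - 1) // 2   # integer division implements the floor
    return detectable, correctable

print(error_control_capability(3))   # (2, 1): DED or SEC
```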
24. Linear Block Codes
- As seen from the second parity code example, it is possible to use a table to hold all the codewords for a code and to look up the appropriate codeword based on the supplied dataword
- Alternatively, it is possible to create codewords by addition of other codewords. This has the advantage that there is no longer any need to hold every possible codeword in the table
25. Linear Block Codes
- If there are k data bits, all that is required is to hold k linearly independent codewords, i.e., a set of k codewords none of which can be produced by linear combinations of 2 or more codewords in the set
- The easiest way to find k linearly independent codewords is to choose those which have a 1 in just one of the first k positions and 0 in the other k - 1 of the first k positions
26. Linear Block Codes
- For example, for a (7,4) code, only four codewords are required, e.g.,
- 1000011
- 0100101
- 0010110
- 0001111
- So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in the list are added together (mod 2), giving 1011010 (see the sketch below)
- This process will now be described in more detail
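A minimal sketch of this step, assuming NumPy and using the four codewords above as the rows of a matrix G (introduced formally on the next slide):

```python
import numpy as np

# Rows of G for the (7,4) code: one stored codeword per data bit position.
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])

d = np.array([1, 0, 1, 1])
c = d @ G % 2                        # mod-2 sum of rows 1, 3 and 4
print(c)                             # [1 0 1 1 0 1 0]
```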
27. Linear Block Codes
- An (n,k) block code has code vectors
- d = (d1 d2 ... dk) and
- c = (c1 c2 ... cn)
- The block coding process can be written as c = dG
- where G is the k x n Generator Matrix
28. Linear Block Codes
- The rows ai of G must be linearly independent, i.e., G is formed by stacking the k codewords a1, a2, ..., ak as rows, with no ai expressible as a mod-2 sum of the others
- Since codewords are given by summations of the ai vectors, to avoid 2 datawords having the same codeword the ai vectors must be linearly independent
29. Linear Block Codes
- The sum (mod 2) of any 2 codewords is also a codeword, i.e.,
- Since for datawords d1 and d2 we have c1 = d1G and c2 = d2G
- So, c1 + c2 = d1G + d2G = (d1 + d2)G
- Since d1 + d2 is itself a dataword, c1 + c2 is a codeword
30. Linear Block Codes
- 0 is always a codeword, i.e., the all-zero word is valid
- Since all zeros is a dataword, c = 0G = 0
31. Error Correcting Power of LBC
- The Hamming distance of a linear block code (LBC) is simply the minimum Hamming weight (number of 1s, or equivalently the distance from the all-0 codeword) of the non-zero codewords
- Note d(c1, c2) = w(c1 + c2), as shown previously
- For an LBC, c1 + c2 = c3, another codeword
- So min d(c1, c2) = min w(c1 + c2) = min w(c3)
- Therefore to find the min Hamming distance we just need to search among the 2^k codewords for the min Hamming weight: far simpler than doing a pairwise check over all possible codeword pairs (see the sketch below)
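A brute-force sketch of this search (assuming NumPy; G is the (7,4) matrix from slide 26), enumerating all 2^k datawords and tracking the minimum weight:

```python
import itertools
import numpy as np

def min_distance(G):
    """Minimum Hamming weight over all non-zero codewords of the LBC
    generated by G (equal to dmin for a linear code)."""
    k, n = G.shape
    best = n
    for bits in itertools.product([0, 1], repeat=k):
        if any(bits):                              # skip the all-zero dataword
            w = int((np.array(bits) @ G % 2).sum())
            best = min(best, w)
    return best

G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
print(min_distance(G))                             # 3
```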
32. Linear Block Codes - Example 1
- For example, a (4,2) code. Suppose a1 = (1011) and a2 = (0101)
- The four codewords are then: d = 00 gives 0000, d = 01 gives 0101, d = 10 gives 1011, d = 11 gives 1110
33. Linear Block Codes - Example 2
- The (4,3) code with a1 = (1001), a2 = (0101), a3 = (0011) is an even single parity check code (each row has even weight, so every codeword does too; compare the table on slide 15)
34. Systematic Codes
- For a systematic block code the dataword appears unaltered in the codeword, usually at the start
- The generator matrix has the structure G = [I | P], where I is k x k and P is k x R, with R = n - k
- P is often referred to as the parity bits
35. Systematic Codes
- I is the k x k identity matrix. It ensures the dataword appears at the beginning of the codeword
- P is a k x R matrix (see the sketch below)
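A minimal sketch of systematic encoding, assuming NumPy and the P matrix that is consistent with the (7,4) examples in these notes:

```python
import numpy as np

# k x R parity matrix P for the (7,4) code used in these notes.
P = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]])
k, R = P.shape
G = np.hstack([np.eye(k, dtype=int), P])   # G = [I | P]

d = np.array([1, 1, 0, 1])
c = d @ G % 2
print(c)     # [1 1 0 1 0 0 1]: the dataword is the first k bits (cf. slide 10)
```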
36. Decoding Linear Codes
- One possibility is a ROM look-up table
- In this case the received codeword is used as an address
- Example: even single parity check code
- Address 000000 -> Data 0
- Address 000001 -> Data 1
- Address 000010 -> Data 1
- Address 000011 -> Data 0
- ...
- The data output is the error flag, i.e., 0 = codeword OK, 1 = error
- If no error, the dataword is the first k bits of the codeword
- For an error correcting code the ROM can also store datawords
37. Decoding Linear Codes
- Another possibility is algebraic decoding, i.e., the error flag is computed from the received codeword (as in the case of simple parity codes)
- How can this method be extended to more complex error detection and correction codes?
38. Parity Check Matrix
- A linear block code is a linear subspace Ssub of the space S of all length-n vectors
- Consider the subset Snull of all length-n vectors in the space S that are orthogonal to all length-n vectors in Ssub
- It can be shown that the dimensionality of Snull is n - k, where n is the dimensionality of S and k is the dimensionality of Ssub
- It can also be shown that Snull is a valid subspace of S and consequently Ssub is also the null space of Snull
39. Parity Check Matrix
- Snull can be represented by its basis vectors. Stacked as rows, these form the generator matrix H for Snull, of dimension n - k = R
- This matrix is called the parity check matrix of the code defined by G, where G is obviously the generator matrix for Ssub, of dimension k
- Note that the number of vectors in the basis defines the dimension of the subspace
40. Parity Check Matrix
- So the dimension of H is n - k (= R) and all vectors in the null space are orthogonal to all the vectors of the code
- Since the rows of H, namely the vectors bi, are members of the null space, they are orthogonal to any code vector
- So a vector y is a codeword only if yH^T = 0
- Note that a linear block code can be specified by either G or H
41. Parity Check Matrix
- So H is used to check if a codeword is valid: H is formed by stacking the rows b1, b2, ..., bR, giving an R x n matrix with R = n - k
- The rows of H, namely bi, are chosen to be orthogonal to the rows of G, namely ai
- Consequently the dot product of any valid codeword with any bi is zero
42. Parity Check Matrix
- Since ai . bj = 0 for all i and j, we have GH^T = 0, and so cH^T = dGH^T = 0
- This means that a codeword is valid (but not necessarily correct) only if cH^T = 0. To ensure this it is required that the rows of H are independent and are orthogonal to the rows of G
- That is, the bi span the remaining R (= n - k) dimensions of the codespace
43. Parity Check Matrix
- For example, consider a (3,2) code. In this case G has 2 rows, a1 and a2
- Consequently all valid codewords sit in the subspace (in this case a plane) spanned by a1 and a2
- In this example the H matrix has only one row, namely b1. This vector is orthogonal to the plane containing the rows of the G matrix, i.e., a1 and a2
- Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid codeword) will thus have a component in the direction of b1, yielding a non-zero dot product between itself and b1
44. Parity Check Matrix
- Similarly, any received codeword which is in the plane containing a1 and a2 (i.e., a valid codeword) will not have a component in the direction of b1, yielding a zero dot product between itself and b1
45. Error Syndrome
- For error correcting codes we need a method to compute the required correction
- To do this we use the Error Syndrome, s, of a received codeword cr
- s = cr H^T
- If cr is corrupted by the addition of an error vector e, then
- cr = c + e
- and
- s = (c + e)H^T = cH^T + eH^T
- s = 0 + eH^T = eH^T
- The syndrome depends only on the error (illustrated in the sketch below)
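A minimal sketch of this property, assuming NumPy and the (7,4) parity check matrix H = [P^T | I] used later in these notes (slide 47):

```python
import numpy as np

H = np.array([[0, 1, 1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])      # (7,4) parity check matrix

def syndrome(r):
    return r @ H.T % 2

c = np.array([1, 1, 0, 1, 0, 0, 1])        # a valid codeword
e = np.array([0, 0, 0, 0, 0, 0, 1])        # error in the last bit
print(syndrome(c))                          # [0 0 0]: valid
print(syndrome((c + e) % 2))                # [0 0 1]
print(syndrome(e))                          # [0 0 1]: depends only on e
```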
46. Error Syndrome
- That is, we can add the same error pattern to different codewords and get the same syndrome
- There are 2^(n-k) syndromes but 2^n error patterns
- For example, for a (3,2) code there are 2 syndromes and 8 error patterns
- Clearly no error correction is possible in this case
- Another example: a (7,4) code has 8 syndromes and 128 error patterns
- With 8 syndromes we can provide a different value to indicate single errors in any of the 7 bit positions, as well as the zero value to indicate no errors
- We now need to determine which error pattern caused the syndrome
47. Error Syndrome
- For systematic linear block codes, H is constructed as follows:
- G = [I | P] and so H = [-P^T | I] = [P^T | I] in mod-2 arithmetic
- where I is the k x k identity for G and the R x R identity for H
- Example: for the (7,4) code with dmin = 3 used in these notes,

G = [ 1 0 0 0 | 0 1 1 ]        H = [ 0 1 1 1 | 1 0 0 ]
    [ 0 1 0 0 | 1 0 1 ]            [ 1 0 1 1 | 0 1 0 ]
    [ 0 0 1 0 | 1 1 0 ]            [ 1 1 0 1 | 0 0 1 ]
    [ 0 0 0 1 | 1 1 1 ]
48. Error Syndrome - Example
- For a correctly received codeword cr = 1101001
- In this case s = cr H^T = 000, confirming a valid codeword
49. Error Syndrome - Example
- For the same codeword, this time with an error in the last bit position, i.e.,
- cr = 1101000
- In this case the syndrome s = 001, which is the 7th column of H, indicating an error in bit 7 of the codeword
50. Comments about H
- The minimum distance of the code is equal to the minimum number of (non-zero) columns of H which sum to zero
- We can express cr H^T as a sum of the columns of H: cr H^T = cr,1 d0 + cr,2 d1 + ... + cr,n dn-1, where d0, d1, ..., dn-1 are the column vectors of H
- Clearly cr H^T is a linear combination of the columns of H
51. Comments about H
- For a codeword with weight w (i.e., w ones), cr H^T is a linear combination of w columns of H
- Thus we have a one-to-one mapping between weight-w codewords and linear combinations of w columns of H
- Thus the minimum w for which some combination of w columns gives cr H^T = 0 is the weight of the lightest non-zero codeword, and so dmin = min w
52. Comments about H
- For the example code, a codeword with min weight (dmin = 3) is given by the first row of G, i.e., 1000011
- Now form the linear combination of the first and last 2 columns of H, i.e., (011) + (010) + (001) = (000)
- So a minimum of 3 columns (= dmin) is needed to get a zero value of cH^T in this example
53. Standard Array
- From the standard array we can find the most likely transmitted codeword for a particular received codeword, without the decoder having to hold a look-up table containing all possible codewords in the standard array
- Not surprisingly, it makes use of syndromes
54. Standard Array
- The Standard Array is constructed as follows: the first row contains all 2^k valid codewords (headed by the all-zero codeword); each subsequent row is formed by adding an error pattern ei to every codeword in the first row
- All patterns in a row have the same syndrome
- Different rows have distinct syndromes
- The array has 2^k columns (i.e., equal to the number of valid codewords) and 2^R rows (i.e., the number of syndromes)
55. Standard Array
- The standard array is formed by initially choosing the ei to be
- All 1-bit error patterns
- All 2-bit error patterns
- ...
- Ensure that each error pattern not already in the array has a new syndrome. Stop when all syndromes are used
56. Standard Array
- Imagine that the received codeword (cr) is c2 + e3 (shown in bold in the standard array)
- The most likely codeword is the one at the head of the column containing c2 + e3
- The corresponding error pattern is the one at the beginning of the row containing c2 + e3
- So in theory we could implement a look-up table (in a ROM) which could map all codewords in the array to the most likely codeword (i.e., the one at the head of the column containing the received codeword)
- This could be quite a large table, so a simpler way is to use syndromes
57. Standard Array
[Block diagram: proposed implementation of the syndrome decoder.]
58. Standard Array
- For the same received codeword c2 + e3, note that the unique syndrome is s3
- This syndrome identifies e3 as the corresponding error pattern
- So we calculate the syndrome as described previously, i.e., s = cr H^T
- All we need now is a relatively small table which associates each s with its respective error pattern. In the example, s3 will yield e3
- Finally we subtract (or equivalently, add in modulo-2 arithmetic) e3 from the received codeword (c2 + e3) to yield the most likely codeword, c2 (see the sketch below)
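A minimal sketch of this syndrome-table decoder, assuming NumPy and the (7,4) H from slide 47; here the table holds only the single-bit error patterns, one per non-zero syndrome:

```python
import numpy as np

H = np.array([[0, 1, 1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

# Map each syndrome to its most likely (here: single-bit) error pattern.
table = {(0, 0, 0): np.zeros(7, dtype=int)}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[tuple(e @ H.T % 2)] = e

def correct(r):
    """Add (mod 2) the error pattern matching the syndrome of r."""
    e = table[tuple(r @ H.T % 2)]
    return (r + e) % 2

r = np.array([1, 1, 0, 1, 0, 0, 0])   # codeword 1101001 with bit 7 flipped
print(correct(r))                      # [1 1 0 1 0 0 1]
```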
59. Hamming Codes
- We will consider a special class of SEC codes (i.e., Hamming distance = 3) where
- Number of parity bits R = n - k and n = 2^R - 1
- Syndrome has R bits
- 0 value implies zero errors
- 2^R - 1 other syndrome values, i.e., one for each bit that might need to be corrected
- This is achieved if each column of H is a different binary word: remember s = eH^T
60. Hamming Codes
- The systematic form of the (7,4) Hamming code is the G = [I | P] and H = [P^T | I] given on slide 47
- The original form is non-systematic:

G = [ 1 1 1 0 0 0 0 ]        H = [ 0 0 0 1 1 1 1 ]
    [ 1 0 0 1 1 0 0 ]            [ 0 1 1 0 0 1 1 ]
    [ 0 1 0 1 0 1 0 ]            [ 1 0 1 0 1 0 1 ]
    [ 1 1 0 1 0 0 1 ]

- Compared with the systematic code, the column orders of both G and H are swapped so that the columns of H are a binary count (column i reads i in binary)
61. Hamming Codes
- The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the non-systematic H is col. 7 in the systematic H
62. Hamming Codes - Example
- For the non-systematic (7,4) code
- d = 1011
- c = a1 + a3 + a4
    = 1110000 + 0101010 + 1101001
    = 0110011
- e = 0010000
- cr = 0100011
- s = cr H^T = eH^T = 011
- Note the error syndrome is the binary address (3) of the bit to be corrected (see the sketch below)
63. Hamming Codes
- Double errors will always result in the wrong bit being corrected, since
- A double error is the sum of 2 single errors
- The resulting syndrome will be the sum of the corresponding 2 single-error syndromes
- This syndrome will correspond with a third single-bit error
- Consequently the "corrected" codeword will now contain 3 bit errors, i.e., the original double bit error plus the incorrectly corrected bit!
64. Bit Error Rates after Decoding
- For a given channel bit error rate (BER), what is the BER after correction (assuming a memoryless channel, i.e., no burst errors)?
- To do this we will compute the probability of receiving 0, 1, 2, 3, ... errors
- And then compute their effect
65. Bit Error Rates after Decoding
- Example: a (7,4) Hamming code with a channel BER of 1%, i.e., p = 0.01
- P(0 errors received) = (1 - p)^7 = 0.9321
- P(1 error received) = 7p(1 - p)^6 = 0.0659
- P(2 errors received) = 21p^2(1 - p)^5 = 0.0020
- P(3 or more errors) = 1 - P(0) - P(1) - P(2) = 0.000034
66. Bit Error Rates after Decoding
- Single errors are corrected, so
- 0.9321 + 0.0659 = 0.998 of codewords are correctly decoded
- Double errors cause 3 bit errors in a 7-bit codeword, i.e., (3/7) x 4 bit errors per 4-bit dataword, that is 3/7 bit errors per bit
- Therefore the double-error contribution is 0.002 x 3/7 = 0.000856
67. Bit Error Rates after Decoding
- The contribution of triple or more errors will be less than 0.000034 (since the worst that can happen is that every data bit becomes corrupted)
- So the BER after decoding is approximately 0.000856 + 0.000034, about 0.0009, i.e., 0.09% (the sketch below reproduces this calculation)
- This is an improvement over the channel BER by a factor of about 11
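A short sketch reproducing these numbers from the binomial error-count distribution:

```python
from math import comb

n, p = 7, 0.01
P = [comb(n, m) * p**m * (1 - p)**(n - m) for m in range(n + 1)]
print(f"P(0) = {P[0]:.4f}, P(1) = {P[1]:.4f}, P(2) = {P[2]:.4f}")

# Double errors leave 3 wrong bits in 7 after "correction" -> 3/7 per bit;
# treat >= 3 errors pessimistically as contributing their whole probability.
ber = P[2] * 3 / 7 + (1 - P[0] - P[1] - P[2])
print(f"BER after decoding ~ {ber:.5f}")   # about 0.0009
```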
68. Perfect Codes
- If a codeword has n bits and we wish to correct up to t errors, how many parity bits (R) are needed?
- Clearly we need sufficient error syndromes (2^R of them) to identify all error patterns of up to t errors
- Need 1 syndrome to represent 0 errors
- Need n syndromes to represent all 1-bit errors
- Need n(n - 1)/2 syndromes to represent all 2-bit errors
- Need nCe = n!/((n - e)! e!) syndromes to represent all e-bit errors
- In total: 2^R >= sum over e = 0 to t of nCe
69. Perfect Codes
- If equality holds, then the code is Perfect
- The only known perfect codes are the SEC Hamming codes and the TEC Golay (23,12) code (dmin = 7). Using the previous equation for the Golay code yields
- 2^R = 2^11 = 2048 = 1 + 23 + 253 + 1771 = 23C0 + 23C1 + 23C2 + 23C3 (checked numerically below)
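A small sketch checking the bound with equality for both families (function name assumed):

```python
from math import comb

def is_perfect(n, k, t):
    """Check the Hamming bound 2^R >= sum_{e=0}^{t} C(n, e) with equality."""
    return 2 ** (n - k) == sum(comb(n, e) for e in range(t + 1))

print(is_perfect(7, 4, 1))     # True: (7,4) Hamming code, 8 = 1 + 7
print(is_perfect(23, 12, 3))   # True: (23,12) Golay, 2048 = 1+23+253+1771
```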
70. Summary
- In this section we have
- Used block codes to add redundancy to messages to control the effects of transmission errors
- Encoded and decoded messages using Hamming codes
- Determined overall bit error rates as a function of the error control strategy