Title: Linear Block Codes
1. Linear Block Codes
2. Basic Definitions
- Let u be a k-bit information sequence and v be the corresponding n-bit codeword.
- A total of 2^k n-bit codewords constitute an (n,k) code.
- Linear code: the sum of any two codewords is a codeword.
- Observation: the all-zero sequence is a codeword in every linear block code.
3. Generator Matrix
- All 2^k codewords can be generated from a set of k linearly independent codewords.
- Let g0, g1, ..., g_{k-1} be a set of k linearly independent codewords.
- Then v = uG, where G is the k x n generator matrix whose rows are g0, g1, ..., g_{k-1}.
4. Systematic Codes
- Any linear block code can be put in systematic form.
- In this case the generator matrix takes the form
- G = [P  I_k]
- This matrix corresponds to the set of k codewords generated by the information sequences that have a single nonzero element. Clearly this set is linearly independent.
5. (No transcript)
6. Generator Matrix (contd)
- EX: The generating set for the (7,4) code:
- 1000 -> 1101000    0100 -> 0110100
- 0010 -> 1110010    0001 -> 1010001
- Every codeword is a linear combination of these 4 codewords.
- That is, v = uG, where G is the matrix with these four codewords as its rows.
- Storage requirement is reduced from 2^k(n+k) to k(n-k).
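As an illustration (not part of the original slides), here is a minimal Python/NumPy sketch of the encoding v = uG for this (7,4) code; the information sequence u = 1011 is an arbitrary example.

```python
import numpy as np

# Rows of G are the four codewords listed above (systematic form G = [P  I4]).
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])

u = np.array([1, 0, 1, 1])     # an arbitrary 4-bit information sequence
v = u.dot(G) % 2               # v = uG over GF(2)
print(v)                       # -> [1 0 0 1 0 1 1]; the last 4 bits equal u
```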
7. Parity-Check Matrix
- For G = [P  I_k], define the matrix H = [I_{n-k}  P^T].
- (The size of H is (n-k) x n.)
- It follows that GH^T = 0.
- Since v = uG, then vH^T = uGH^T = 0.
- The parity-check matrix of code C is the generator matrix of another code C_d, called the dual of C.
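These relations can be checked for the (7,4) code above with a short Python/NumPy sketch (not part of the slides): H is formed as [I_{n-k}  P^T] with P read off the systematic G, and GH^T is verified to be zero.

```python
import numpy as np

G = np.array([[1, 1, 0, 1, 0, 0, 0],        # G = [P  I4] of the (7,4) code
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])
P = G[:, :3]                                # k x (n-k) submatrix P

H = np.hstack([np.eye(3, dtype=int), P.T])  # H = [I(n-k)  P^T], size (n-k) x n
print(H)
print(G.dot(H.T) % 2)                       # all zeros, confirming G H^T = 0
```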
8. Encoding Using the H Matrix (Parity-Check Equations)
9. Encoding Circuit
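The parity-check-equation and encoding-circuit figures are not reproduced in this transcript. As a stand-in (not from the slides), here is a small Python sketch of a systematic encoder for the (7,4) code; the three parity equations are read off the generator matrix G = [P  I4] given earlier.

```python
# Systematic encoder for the (7,4) code; the codeword is (v0, v1, v2, u0, u1, u2, u3).
def encode_7_4(u):
    u0, u1, u2, u3 = u
    v0 = (u0 + u2 + u3) % 2    # parity-check equations derived from G = [P  I4]
    v1 = (u0 + u1 + u2) % 2
    v2 = (u1 + u2 + u3) % 2
    return [v0, v1, v2, u0, u1, u2, u3]

print(encode_7_4([1, 0, 1, 1]))   # -> [1, 0, 0, 1, 0, 1, 1]
```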
10. Minimum Distance
- DF: The Hamming weight of a codeword v, denoted by w(v), is the number of nonzero elements in the codeword.
- DF: The minimum weight of a code, wmin, is the smallest weight of the nonzero codewords in the code: wmin = min { w(v) : v ∈ C, v ≠ 0 }.
- DF: The Hamming distance between v and w, denoted by d(v,w), is the number of locations where they differ. Note that d(v,w) = w(v + w).
- DF: The minimum distance of the code: dmin = min { d(v,w) : v, w ∈ C, v ≠ w }.
- TH 3.1: In any linear code, dmin = wmin.
11. Minimum Distance (contd)
- TH 3.2: For each codeword of Hamming weight l there exist l columns of H such that the vector sum of these columns is zero. Conversely, if there exist l columns of H whose vector sum is zero, there exists a codeword of weight l.
- COL 3.2.2: The dmin of C is equal to the minimum number of columns in H that sum to zero.
- EX (see the sketch below)
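The example on the original slide is not reproduced. As an illustrative Python/NumPy sketch (not part of the slides), both characterizations of dmin can be checked on the (7,4) code used throughout: the minimum nonzero codeword weight and the minimum number of columns of H summing to zero both come out to 3.

```python
from itertools import product, combinations
import numpy as np

G = np.array([[1,1,0,1,0,0,0],
              [0,1,1,0,1,0,0],
              [1,1,1,0,0,1,0],
              [1,0,1,0,0,0,1]])
H = np.array([[1,0,0,1,0,1,1],
              [0,1,0,1,1,1,0],
              [0,0,1,0,1,1,1]])

# TH 3.1: dmin = wmin, the smallest weight over the nonzero codewords.
wmin = min(int((np.array(u).dot(G) % 2).sum())
           for u in product([0, 1], repeat=4) if any(u))
print(wmin)                                    # -> 3

# COL 3.2.2: smallest number of columns of H that sum to zero.
dmin = min(r for r in range(1, 8)
           for cols in combinations(range(7), r)
           if not (H[:, list(cols)].sum(axis=1) % 2).any())
print(dmin)                                    # -> 3
```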
12. Decoding Linear Codes
- Let v be transmitted and r be received, where
- r = v + e
- e = error pattern = (e1, e2, ..., en), where ei = 1 if and only if an error occurred in the ith position.
- The weight of e determines the number of errors.
- We will attempt both processes: error detection and error correction.
13. Error Detection
- Define the syndrome
- s = rH^T = (s0, s1, ..., s_{n-k-1})
- If s = 0, the receiver assumes r = v and e = 0.
- If e is identical to some nonzero codeword, then s = 0 as well, and the error is undetectable.
- EX 3.4
14. Syndrome Circuit for the (7,4) Hamming Code
15. Error Correction
- s = rH^T = (v + e)H^T = vH^T + eH^T = eH^T
- The syndrome depends only on the error pattern.
- Can we use the syndrome to find e, and hence do the correction?
- Syndrome digits are linear combinations of the error digits. They provide information about the error locations.
- Unfortunately, for n-k equations and n unknowns there are 2^k solutions. Which one to use?
16. Example 3.5
- Let r = 1001001
- s = 111
- s0 = e0 + e3 + e5 + e6 = 1
- s1 = e1 + e3 + e4 + e5 = 1
- s2 = e2 + e4 + e5 + e6 = 1
- There are 16 error patterns that satisfy the above equations; some of them are 0000010, 1101010, 1010011, 1111101.
- The most probable one is the one with minimum weight. Hence v = 1001001 + 0000010 = 1001011.
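The 16 candidate patterns can be enumerated mechanically. A Python/NumPy sketch (not part of the slides), using the H matrix of this (7,4) code:

```python
from itertools import product
import numpy as np

H = np.array([[1,0,0,1,0,1,1],
              [0,1,0,1,1,1,0],
              [0,0,1,0,1,1,1]])

r = np.array([1, 0, 0, 1, 0, 0, 1])
s = r.dot(H.T) % 2
print(s)                                     # -> [1 1 1]

# All 2^k = 16 error patterns whose syndrome equals s.
candidates = [e for e in product([0, 1], repeat=7)
              if ((np.array(e).dot(H.T) % 2) == s).all()]
e_min = min(candidates, key=sum)             # most probable: minimum weight
print(e_min)                                 # -> (0, 0, 0, 0, 0, 1, 0)
print((r + np.array(e_min)) % 2)             # decoded v -> [1 0 0 1 0 1 1]
```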
17. Standard Array Decoding
- The transmitted codeword is any one of v1, v2, ..., v_{2^k}.
- The received word r is any one of the 2^n n-tuples.
- Partition the 2^n words into 2^k disjoint subsets D1, D2, ..., D_{2^k} such that the words in subset Di are closer to codeword vi than to any other codeword.
- Each subset is associated with one codeword.
18. Standard Array Construction
- 1. List the 2^k codewords in a row, starting with the all-zero codeword v1.
- 2. Select an error pattern e2 and place it below v1. This error pattern will be a correctable error pattern, therefore it should be selected such that:
- (i) it has the smallest weight possible (most probable error)
- (ii) it has not appeared before in the array.
- 3. Add e2 to each codeword and place the sum below that codeword.
- 4. Repeat Steps 2 and 3 until all the possible error patterns have been accounted for. There will always be 2^n / 2^k = 2^{n-k} rows in the array. Each row is called a coset. The leading error pattern is the coset leader.
- Note that choosing any element in the coset as coset leader does not change the elements in the coset; it simply permutes them. (A construction sketch follows below.)
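A minimal Python/NumPy sketch of this construction for the (7,4) code (the code and the weight-ordered choice of coset leaders are as above; the sketch itself is not part of the slides):

```python
from itertools import product
import numpy as np

G = np.array([[1,1,0,1,0,0,0],
              [0,1,1,0,1,0,0],
              [1,1,1,0,0,1,0],
              [1,0,1,0,0,0,1]])

# Step 1: the 2^k codewords, with the all-zero codeword v1 first.
codewords = [tuple(np.array(u).dot(G) % 2) for u in product([0, 1], repeat=4)]
rows = [codewords]
used = set(codewords)

# Steps 2-4: pick an unused n-tuple of smallest weight as the next coset
# leader, add it to every codeword, and append the resulting coset as a row.
by_weight = sorted(product([0, 1], repeat=7), key=sum)
while len(used) < 2**7:
    leader = next(t for t in by_weight if t not in used)
    coset = [tuple((np.array(leader) + np.array(c)) % 2) for c in codewords]
    rows.append(coset)
    used.update(coset)

print(len(rows))        # -> 8 = 2^(n-k) cosets
print(rows[1][0])       # the second coset leader (a weight-1 pattern)
```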
19. Standard Array
- TH 3.3: No two n-tuples in the same row are identical. Every n-tuple appears in one and only one row.
20. Standard Array Decoding is Minimum Distance Decoding
- Let the received word r fall in subset Di and the lth coset.
- Then r = el + vi.
- r will be decoded as vi. We will show that r is closer to vi than to any other codeword.
- d(r,vi) = w(r + vi) = w(el + vi + vi) = w(el)
- d(r,vj) = w(r + vj) = w(el + vi + vj) = w(el + vs), where vs = vi + vj is also a codeword.
- As el and el + vs are in the same coset, and el is selected to be the minimum-weight pattern that did not appear before, then w(el) ≤ w(el + vs).
- Therefore d(r,vi) ≤ d(r,vj).
21. Standard Array Decoding (contd)
- TH 3.4: Every (n,k) linear code is capable of correcting exactly 2^{n-k} error patterns, including the all-zero error pattern.
- EX: The (7,4) Hamming code
- Number of correctable error patterns = 2^3 = 8
- Number of single-error patterns = 7
- Therefore, all single-error patterns, and only single-error patterns, can be corrected (together with the all-zero pattern). (Recall the Hamming bound, and the fact that Hamming codes are perfect.)
22. Standard Array Decoding (contd)
- EX 3.6: The (6,3) code defined by the H matrix
23. Standard Array Decoding (contd)
- It can correct all single errors and one double-error pattern.
24. The Syndrome
- Huge storage memory (and searching time) is required by standard array decoding.
- Recall the syndrome:
- s = rH^T = (v + e)H^T = eH^T
- The syndrome depends only on the error pattern and not on the transmitted codeword.
25. The Syndrome (contd)
- TH 3.6: All the 2^k n-tuples of a coset have the same syndrome. The syndromes of different cosets are different.
- (el + vi)H^T = elH^T (1st part)
- Let ej and el be the leaders of two cosets, j < l. Assume they have the same syndrome.
- ejH^T = elH^T => (ej + el)H^T = 0.
- This implies ej + el = vi, or el = ej + vi.
- This means that el is in the jth coset. Contradiction.
26. The Syndrome (contd)
- There are 2^{n-k} rows and 2^{n-k} syndromes (a one-to-one correspondence).
- Instead of forming the standard array, we form a decoding table of the correctable error patterns and their syndromes.
27. Syndrome Decoding
- Decoding Procedure:
- 1. For the received vector r, compute the syndrome s = rH^T.
- 2. Using the table, identify the coset leader (error pattern) el.
- 3. Add el to r to recover the transmitted codeword v.
- EX: r = 1110101 => s = 001 => el = 0010000. Then v = 1100101.
- Syndrome decoding reduces the storage memory from n x 2^n to 2^{n-k}(2n-k) bits. It also reduces the searching time considerably.
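A table-driven sketch of this procedure in Python/NumPy (not part of the slides), again for the (7,4) code; the coset leaders are obtained by keeping, for every syndrome, the lightest error pattern that produces it.

```python
from itertools import product
import numpy as np

H = np.array([[1,0,0,1,0,1,1],
              [0,1,0,1,1,1,0],
              [0,0,1,0,1,1,1]])

# Decoding table: syndrome -> coset leader (lightest error pattern).
table = {}
for e in sorted(product([0, 1], repeat=7), key=sum):
    s = tuple(np.array(e).dot(H.T) % 2)
    table.setdefault(s, e)                   # keep the first (lightest) pattern

def decode(r):
    s = tuple(np.array(r).dot(H.T) % 2)      # 1. compute the syndrome
    e = np.array(table[s])                   # 2. look up the coset leader
    return (np.array(r) + e) % 2             # 3. add it to r

print(decode([1, 1, 1, 0, 1, 0, 1]))         # -> [1 1 0 0 1 0 1], as in the EX above
```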
28. Hardware Implementation
- Let r = (r0 r1 r2 r3 r4 r5 r6) and s = (s0 s1 s2).
- From the H matrix:
- s0 = r0 + r3 + r5 + r6
- s1 = r1 + r3 + r4 + r5
- s2 = r2 + r4 + r5 + r6
- From the table of syndromes and their corresponding correctable error patterns, a truth table can be constructed. A combinational logic circuit with s0, s1, s2 as inputs and e0, e1, e2, e3, e4, e5, e6 as outputs can be designed.
29. Decoding Circuit for the (7,4) HC
30. Error Detection Capability
- A code with minimum distance dmin can detect all error patterns of weight dmin - 1 or less. It can detect many error patterns of higher weight as well, but not all.
- In fact, the number of undetectable error patterns is 2^k - 1 out of the 2^n - 1 nonzero error patterns.
- DF: Ai = number of codewords of weight i.
- {Ai, i = 0, 1, ..., n} is the weight distribution of the code.
- Note that A0 = 1 and Aj = 0 for 0 < j < dmin.
31.
- EX: Undetectable error probability of the (7,4) HC: A0 = A7 = 1, A1 = A2 = A5 = A6 = 0, A3 = A4 = 7. Pu(E) = 7p^3(1-p)^4 + 7p^4(1-p)^3 + p^7. For p = 10^-2, Pu(E) ≈ 7 x 10^-6.
- Define the weight enumerator A(z) = A0 + A1 z + A2 z^2 + ... + An z^n.
- Then Pu(E) = Σ_{i=dmin}^{n} Ai p^i (1-p)^{n-i}.
- Letting z = p/(1-p), and noting that A0 = 1, Pu(E) = (1-p)^n [ A(z) - 1 ].
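A quick numerical check of the example above (Python, not part of the slides), both directly and through the weight enumerator A(z) of the (7,4) code:

```python
# Undetected-error probability of the (7,4) Hamming code on a BSC with
# crossover probability p, using its weight distribution A3 = A4 = 7, A7 = 1.
def Pu(p):
    return 7*p**3*(1-p)**4 + 7*p**4*(1-p)**3 + p**7

p = 1e-2
print(Pu(p))                                 # about 6.8e-6, i.e. roughly 7e-6

# Equivalent form (1-p)^n [A(z) - 1] with z = p/(1-p):
A = lambda z: 1 + 7*z**3 + 7*z**4 + z**7
z = p / (1 - p)
print((1 - p)**7 * (A(z) - 1))               # same value
```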
32.
- The probability of undetected error can as well be found from the weight enumerator of the dual code: Pu(E) = 2^{-(n-k)} B(1-2p) - (1-p)^n, where B(z) is the weight enumerator of the dual code.
- When neither A(z) nor B(z) is available, Pu may be upper bounded by Pu(E) ≤ 2^{-(n-k)} [1 - (1-p)^n].
- For good channels (p -> 0), Pu(E) ≤ 2^{-(n-k)}.
33. Error Correction Capability
- An (n,k) code with minimum distance dmin can correct up to t errors, where t = ⌊(dmin - 1)/2⌋.
- It may be able to correct some higher-weight error patterns, but not all.
- The total number of error patterns it can correct is 2^{n-k}.
- If the correctable patterns are exactly the patterns of weight t or less, i.e. Σ_{i=0}^{t} C(n,i) = 2^{n-k}, the code is perfect.
34. Hamming Codes
- Hamming codes constitute a family of single-error-correcting codes defined by:
- n = 2^m - 1, k = n - m, m ≥ 3
- The minimum distance of the code is dmin = 3.
- Construction rule for H:
- H is an (n-k) x n matrix, i.e. it has 2^m - 1 columns, each an m-tuple. The all-zero m-tuple cannot be a column of H (otherwise dmin = 1). No two columns are identical (otherwise dmin = 2). Therefore, the H matrix of a Hamming code of order m has as its columns all nonzero m-tuples. The sum of any two columns is a column of H, therefore the sum of some three columns is zero, i.e. dmin = 3.
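A sketch of this construction rule for m = 3 (Python/NumPy, not part of the slides; the column ordering is arbitrary, so the resulting code is equivalent to, but not necessarily identical to, the (7,4) code used earlier):

```python
from itertools import product
import numpy as np

m = 3
# Columns of H: all 2^m - 1 nonzero m-tuples (in some arbitrary order).
cols = [c for c in product([0, 1], repeat=m) if any(c)]
H = np.array(cols).T                         # m x (2^m - 1) parity-check matrix
n, k = 2**m - 1, 2**m - 1 - m
print(n, k)                                  # -> 7 4
print(H)
# No column is zero and no two columns are equal, so dmin >= 3; the sum of any
# two columns is another column, so some three columns sum to zero: dmin = 3.
```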
35. Systematic Hamming Codes
- In systematic form:
- H = [I_m  Q]
- The columns of Q are all m-tuples of weight ≥ 2.
- Different arrangements of the columns of Q produce different codes, but with the same distance properties.
- Hamming codes are perfect codes: in the Hamming bound, the right side is 1 + n and the left side is 2^m = n + 1, so the bound holds with equality.
36. Decoding of Hamming Codes
- Consider a single-error pattern e(i), where i is a number determining the position of the error.
- s = e(i) H^T = Hi^T, the transpose of the ith column of H.
- Example
37. Decoding of Hamming Codes (contd)
- That is, the (transpose of the) ith column of H is the syndrome corresponding to a single error in the ith position.
- Decoding rule:
- 1. Compute the syndrome s = rH^T.
- 2. Locate the error (i.e. find i for which s^T = Hi).
- 3. Invert the ith bit of r.
38. Weight Distribution of Hamming Codes
- The weight enumerator of Hamming codes is A(z) = [ (1+z)^n + n (1+z)^{(n-1)/2} (1-z)^{(n+1)/2} ] / (n+1).
- The weight distribution could as well be obtained from the recursive equations A0 = 1, A1 = 0, (i+1) A_{i+1} + Ai + (n-i+1) A_{i-1} = C(n,i), i = 1, 2, ..., n.
- The dual of a Hamming code is a (2^m - 1, m) linear code. Its weight enumerator is B(z) = 1 + (2^m - 1) z^{2^{m-1}}.
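Both the enumeration and the recursion can be checked for the (7,4) code (Python/NumPy, not part of the slides); they agree with the enumerator given above, A(z) = 1 + 7z^3 + 7z^4 + z^7.

```python
from itertools import product
from collections import Counter
from math import comb
import numpy as np

G = np.array([[1,1,0,1,0,0,0],
              [0,1,1,0,1,0,0],
              [1,1,1,0,0,1,0],
              [1,0,1,0,0,0,1]])
n = 7

# Weight distribution of the (7,4) Hamming code by direct enumeration.
counts = Counter(int((np.array(u).dot(G) % 2).sum())
                 for u in product([0, 1], repeat=4))
print([counts[i] for i in range(n + 1)])     # -> [1, 0, 0, 7, 7, 0, 0, 1]

# The same distribution from the recursion
# (i+1) A_{i+1} + Ai + (n-i+1) A_{i-1} = C(n,i), with A0 = 1, A1 = 0.
A = [1, 0] + [0] * (n - 1)
for i in range(1, n):
    A[i + 1] = (comb(n, i) - A[i] - (n - i + 1) * A[i - 1]) // (i + 1)
print(A)                                     # -> [1, 0, 0, 7, 7, 0, 0, 1]
```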