Title: INTRODUCTION TO ALGEBRAIC CODING THEORY
1. Chapter 31
- INTRODUCTION TO ALGEBRAIC CODING THEORY
2. Motivation
- Suppose you wish to send a message indicating whether or not to execute a particular command: 1 to execute, 0 to not. The message is to be transmitted thousands of miles away and is susceptible to interference (noise) that could alter the intended message. What can be done to improve the chance of safe delivery?
3. Simply apply redundancy
- Start with the idea of majority rule: to send the message 0, transmit 00000. Whichever digit appears most often in the received word is taken as the intended one, so a received word such as 10001 is still decoded as 0.
- This involves the assumption of independent errors (maximum-likelihood decoding).
- So if you send a sequence of length 500 and each digit has probability of error q = .01 (probability of correct transmission p = .99), then the chance that the message is sent error free equals (.99)^500, or approximately .0066.
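The arithmetic on this slide can be checked directly; a minimal sketch, assuming independent digit errors with p = .99:

```python
# Probability that a 500-digit message arrives with no errors when each
# digit is independently transmitted correctly with probability p = .99.
p = 0.99
prob_error_free = p ** 500
print(round(prob_error_free, 4))  # approximately .0066
```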
4. Threefold repetition scheme
- 1001 -> 111000000111
- 1 -> 111
- For a transmitted 111, a decoding error can occur only if the received triple is 001, 010, 100, or 000.
- P(error) = (.01)(.01)(.99) + (.01)(.99)(.01) + (.99)(.01)(.01) + (.01)(.01)(.01) = .000298
- P(no error in 500 digits) = (.999702)^500 ≈ .86
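The repetition-scheme probabilities can be verified the same way; a sketch assuming each transmitted symbol is flipped independently with probability q = .01:

```python
# Per-digit decoding error under threefold repetition: the majority vote
# is wrong exactly when 2 or 3 of the 3 transmitted copies are flipped.
q = 0.01      # probability a single transmitted symbol is flipped
p = 1 - q
per_digit_error = 3 * q**2 * p + q**3
print(round(per_digit_error, 6))               # .000298

# Probability that a 500-digit message decodes with no errors at all.
print(round((1 - per_digit_error) ** 500, 2))  # .86
```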
5. Goal
- To devise coding schemes that are reliable, efficient, and reasonably easy to implement.
- Three basic features: 1. a set of messages, 2. a method of encoding, 3. a method of decoding.
6. Example 1
7. Example 1 (cont.)
8. Definition: Linear Code
- An (n, k) linear code over a field F is a k-dimensional subspace V of the vector space
  F ⊕ F ⊕ ... ⊕ F (n copies)
  over F. The members of V are called the code words. When F is Z2 the code is called binary.
- The message consists of k digits; the remaining n - k digits are the redundancy.
9. Example 4
- The set {0000, 0121, 0212, 1022, 1110, 1201, 2011, 2102, 2220} is a (4,2) linear code over Z3, called a ternary code.
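That these nine words really do form a subspace of (Z3)^4 (closure under componentwise addition and scalar multiplication mod 3) can be checked by brute force; a sketch:

```python
# Verify that the listed words form a linear code over Z3: the set must
# be closed under componentwise addition and scalar multiplication mod 3.
code = {"0000", "0121", "0212", "1022", "1110",
        "1201", "2011", "2102", "2220"}

def add(u, v):
    """Componentwise sum of two words, taken mod 3."""
    return "".join(str((int(a) + int(b)) % 3) for a, b in zip(u, v))

def scale(s, u):
    """Componentwise scalar multiple of a word, taken mod 3."""
    return "".join(str((s * int(a)) % 3) for a in u)

assert all(add(u, v) in code for u in code for v in code)
assert all(scale(s, u) in code for s in (0, 1, 2) for u in code)
print("closed: a (4,2) linear code over Z3 with", len(code), "code words")
```

Since the code has 3^2 = 9 words, it is a 2-dimensional subspace, matching the (4,2) parameters.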
10. Definitions
- The Hamming distance between two vectors of a vector space is the number of components in which they differ. Denoted d(u,v).
- The Hamming weight of a vector is the number of nonzero components of the vector. Denoted wt(u).
- The Hamming weight of a linear code is the minimum weight of any nonzero vector in the code.
- Example
- s = 0010111, t = 0101011, v = 1101101
- d(s,t) = 4, d(s,v) = 5, wt(s) = 4, wt(v) = 5
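The two definitions translate directly into code, checked here against the slide's example values:

```python
# Hamming distance and Hamming weight, applied to the example above.
def d(u, v):
    """Number of components in which u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def wt(u):
    """Number of nonzero components of u."""
    return sum(c != "0" for c in u)

s, t, v = "0010111", "0101011", "1101101"
print(d(s, t), d(s, v), wt(s), wt(v))  # 4 5 4 5
```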
11. Theorem
- For any vectors u, v, and w: d(u,v) ≤ d(u,w) + d(w,v), and d(u,v) = wt(u - v).
- This theorem and the previous definitions give the following.
- Theorem
- If the Hamming weight of a linear code is at least 2t + 1, then the code can correct any t or fewer errors.
- Alternatively, the same code can detect any 2t or fewer errors.
- Note: A linear code with Hamming weight 2t + 1 can correct t errors or detect 2t errors, not both at once.
12. Example
- Hamming (7,4) code revisited (Table on p. 523)
- Hamming weight is 3 = 2(1) + 1, so t = 1.
- This code can correct any 1 error or detect any 1 or 2 errors, not both.
13. CHOOSING THE G MATRIX
14. Parity-check matrix decoding
- 1. For any received word w, compute wH.
- 2. If wH is the zero vector, assume no error was made.
- 3. If there is exactly one instance of a nonzero element s ∈ F and a row i of H such that wH is s times row i, assume the sent word was w - (0 ... 0 s 0 ... 0), where s occurs in the ith component. If there is more than one such instance, do not decode.
- 4. If wH does not fit into either case 2 or 3, at least two errors occurred; don't decode.
15. Example: Hamming (7,4) code
The generator matrix G is

1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 1 1 1
0 0 0 1 0 1 1

The parity-check matrix H is

1 1 0
1 0 1
1 1 1
0 1 1
1 0 0
0 1 0
0 0 1
16. Example (cont.)
- Consider received word v = 0000110; then vH = 110.
- 110 is the 1st row of H and no other, so an error was made in the 1st position of v.
- So the sent word was really 1000110.
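The parity-check decoding procedure for this (7,4) example can be sketched over Z2 (where s = 1 is the only nonzero scalar, so "s times row i" is just "row i"):

```python
# A sketch of parity-check matrix decoding for the Hamming (7,4) code,
# using the 7x3 parity-check matrix H above, with arithmetic mod 2.
H = [
    (1, 1, 0), (1, 0, 1), (1, 1, 1), (0, 1, 1),
    (1, 0, 0), (0, 1, 0), (0, 0, 1),
]

def decode(w):
    # wH is the mod-2 sum of the rows of H selected by the 1s of w.
    syndrome = tuple(sum(w[i] * H[i][j] for i in range(7)) % 2
                     for j in range(3))
    if syndrome == (0, 0, 0):
        return w                    # assume no error was made
    if syndrome in H:               # syndrome matches a row of H
        i = H.index(syndrome)
        return w[:i] + (1 - w[i],) + w[i + 1:]  # flip the ith component
    return None                     # two or more errors: don't decode

v = (0, 0, 0, 0, 1, 1, 0)
print(decode(v))  # (1, 0, 0, 0, 1, 1, 0)
```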
17. Lemma: Orthogonality Relation
- Let C be an (n, k) linear code over F with generator matrix G and parity-check matrix H. Then for any vector v in F^n, we have vH = 0 (the zero vector) if and only if v belongs to C.
18. Theorem 31.3: Parity-Check Matrix Decoding
- Parity-check matrix decoding will correct any single error
- if and only if
- the rows of the parity-check matrix are nonzero and no one row is a scalar multiple of any other.
19. Coset Decoding
- Do this by constructing a table called a standard array.
- The 1st row is the set C of code words, beginning with the identity 0 . . . 0.
- Choose another element v in V (where v is not an element already listed in the table) and consider the coset v + C.
- Among the elements of v + C, choose one of minimum weight; call it v'.
- Create the next row by placing the vector v' + c under the code word c.
- Continue until all vectors in V have been listed.
- (Note: an (n, k) linear code over a field with q elements will have q^(n-k) rows.)
- The 1st column is called the coset leaders.
20. Example
- Consider the (6,3) binary linear code C = {000000, 100110, 010101, 001011, 110011, 101101, 011110, 111000}
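The standard-array construction for this code can be sketched by grouping all 2^6 = 64 words of (Z2)^6 into cosets of C and picking a minimum-weight leader from each:

```python
# A sketch of building the coset leaders of the (6,3) binary code above:
# partition (Z2)^6 into cosets w + C, picking a minimum-weight leader each.
from itertools import product

C = {"000000", "100110", "010101", "001011",
     "110011", "101101", "011110", "111000"}

def add(u, v):
    """Componentwise sum mod 2."""
    return "".join(str((int(a) + int(b)) % 2) for a, b in zip(u, v))

leaders = []
seen = set()
for w in ("".join(bits) for bits in product("01", repeat=6)):
    if w in seen:
        continue                       # w's coset is already in the table
    coset = {add(w, c) for c in C}
    seen |= coset
    leaders.append(min(coset, key=lambda u: u.count("1")))

print(len(leaders))  # q^(n-k) = 2^3 = 8 rows
```

The first coset found is C itself, with leader 000000, matching the construction on the previous slide.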
21. Theorem 31.4: Coset Decoding Is Nearest-Neighbor Decoding
- In coset decoding, a received word w is decoded as a code word c such that d(w, c) is a minimum.
22. Definition
- If an (n, k) linear code over F has a parity-check matrix H, then, for any vector u in F^n, the vector uH is called the syndrome of u.
- Theorem
- Let C be an (n, k) linear code over F with a parity-check matrix H. Then two vectors of F^n are in the same coset of C if and only if they have the same syndrome.
- We can use syndromes for decoding any received word w:
- Calculate wH (the syndrome).
- Find the coset leader v such that wH = vH.
- Assume the vector sent was w - v.
23. Example 11
The parity-check matrix H is

1 1 0
1 0 1
0 1 1
1 0 0
0 1 0
0 0 1

- Coset leaders: 000000 100000 010000 001000 000100 000010 000001 100001
- Syndromes: 000 110 101 011 100 010 001 111
- For v = 101001, vH = 100. Since the coset leader 000100 has 100 as its syndrome, v - 000100 = 101101 was sent.
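The syndrome-decoding steps on this slide can be sketched with the 6x3 matrix H and the coset leaders listed above:

```python
# Syndrome decoding for the (6,3) code: compute wH mod 2, look up the
# coset leader with the same syndrome, and subtract it from w.
H = [(1, 1, 0), (1, 0, 1), (0, 1, 1),
     (1, 0, 0), (0, 1, 0), (0, 0, 1)]
leaders = ["000000", "100000", "010000", "001000",
           "000100", "000010", "000001", "100001"]

def syndrome(w):
    """wH: mod-2 sum of the rows of H selected by the 1s of w."""
    return tuple(sum(int(w[i]) * H[i][j] for i in range(6)) % 2
                 for j in range(3))

table = {syndrome(v): v for v in leaders}

w = "101001"
v = table[syndrome(w)]   # leader 000100 has syndrome (1, 0, 0)
sent = "".join(str((int(a) - int(b)) % 2) for a, b in zip(w, v))
print(sent)  # 101101
```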