1
Data Engineering 04 07186
  • Dr S. I. Woolley
  • http://www.eee.bham.ac.uk/woolleysi
  • Electronic, Electrical and Computer Engineering,
  • The University of Birmingham, U.K.

2
Lecture slides (excluding guest lectures)
Please note that these lecture slides do not form
the complete assessable course materials. Notes
should be taken in class. Your laboratory work,
tutorial solutions and private study work
(including exercises given in classes) all form
part of the assessable material. Please check the
course web site and bb (Blackboard) forum for
notices.
3
Summary of aims
  • To provide
  • An understanding of lossless and lossy data
    compression methods.
  • An understanding of the issues affecting data
    quality and quality assessment.
  • An appreciation of the effects of uncorrected
    channel errors.
  • An appreciation of data security issues and an
    understanding of encryption fundamentals.

4
The recommended text
  • The Data Compression Book
  • (recently out of print)
  • Mark Nelson and Jean-loup Gailly,
  • M&T Books
  • 2nd Edition.
  • ISBN 1-55851-434-1

5
The course website and forum
  • www.eee.bham.ac.uk/woolleysi and
  • www.bb.bham.ac.uk

6
Private study and assessment
  • There are several tutorial and private study
    tasks identified in the text. Additional tasks
    may be set during the module. Students are
    reminded that these will form part of the
    assessable material.
  • Recommended laboratory exercise:
  • Lossy and lossless image compression and quality
    assessment
  • Coursework:
  • one assessed piece of coursework
  • It is essential that you supplement the summary
    notes during the course with the results of
    tutorial and private study tasks.

7
Contents
  • An introduction to data compression
  • Compression fundamentals
  • Lossy and lossless methods
  • Static and adaptive methods
  • Advantages and disadvantages
  • Tradeoffs and selection criteria
  • Lossless data compression methods
  • Run-length encoding
  • Huffman compression
  • Arithmetic compression
  • Lempel-Ziv compression ('77 and '78)

8
Contents
  • Lossy compression methods
  • DCT, wavelet and fractal methods
  • Quality considerations: subjective vs. objective;
    regions of interest/importance; non-linear
    rate/distortion; application/cost considerations
  • Audio coding: overview of MIDI and MP3
  • Video compression: an introduction to MPEG (MPEG-1
    encoding laboratory)

9
Contents
  • Data security
  • Introduction to security issues
  • Basic encryption methods: DES, RSA and PGP
  • Channel errors and the effects of errors on
    compressed data
  • Introduction to error types: amplitude and
    synchronization errors
  • Modelling errors: Gilbert and modified Gilbert
    models
  • Investigation of error propagation in compressed
    data streams and consideration of remedial
    methods

10
Assessment
  • A simple fractal image compressor
  • A simple C implementation and report summarising
    code design, results and analysis.

11
  • Introduction

12
Data coding

13
Coding
  • The appropriate type of coding depends on, for
    example, the:
  • amount and type of information (audio, image,
    video, data, instrumentation)
  • available bandwidth and/or storage
  • type of channel (one-way, two-way)
  • delay tolerance
  • importance of the data
  • importance of secure communications
  • type of errors which can be tolerated
  • quality of the channel (types and frequency of
    errors)
  • system cost

14
Compression
Abbreviation is a simple example of
compression. E.g., a classified
advertisement: "Lux S/C aircon refurb apt,
N/S, lge htd pool, slps 4, £850pw, avail wks or
w/es Jul-Oct. Tel (eves)" decompresses
to: "Luxury self-contained refurbished apartment
for non-smokers. Large heated pool, sleeps 4,
£850 per week, available weeks or weekends July
to October. Telephone (evenings)."
15
Compression
  • Compression is as old as language.
  • Humans naturally compress. For example, commonly
    used words tend to be short (see next slide).
  • Receivers and senders naturally strive to
    establish a robust, error-free communication
    channel, establishing shared vocabularies and
    shared models of their environments and
    experiences (based on shared knowledge).
  • With confidence in shared understandings,
    compression naturally evolves.

16
The 40 most commonly used English words
  • 1 the
  • 2 of
  • 3 to
  • 4 and
  • 5 a
  • 6 in
  • 7 is
  • 8 it
  • 9 you
  • 10 that
  • 11 he
  • 12 was
  • 13 for
  • 14 on
  • 15 are
  • 16 with
  • 17 as
  • 18 I
  • 19 his
  • 20 they
  • 21 be
  • 22 at
  • 23 one
  • 24 have
  • 27 or
  • 28 had
  • 29 by

  • 30 hot
  • 31 word
  • 32 but
  • 33 what
  • 34 some
  • 35 we
  • 36 can
  • 37 out
  • 38 other
  • 39 were
  • 40 all
17
Run-length encoding
A very simple method for compressing long runs of
character repetitions:
0000 0000 0000 5555 0000 0000
compresses to (12,0)(4,5)(8,0).
24 bytes reduced to 6 gives 24/6 = 4, i.e.
a 4:1 compression ratio. Or should that be
(0,12)(5,4)(0,8)?
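As a concrete illustration, here is a minimal sketch of the scheme in C (the (count,value) pair ordering follows the example above; the test data is illustrative):

    #include <stdio.h>

    /* run-length encode: emit a (count,value) pair for each run */
    void rle_encode(const unsigned char *in, int n)
    {
        int i = 0;
        while (i < n) {
            int count = 1;
            while (i + count < n && in[i + count] == in[i])
                count++;
            printf("(%d,%d)", count, in[i]);
            i += count;
        }
        printf("\n");
    }

    int main(void)
    {
        unsigned char data[24] = {0,0,0,0, 0,0,0,0, 0,0,0,0,
                                  5,5,5,5, 0,0,0,0, 0,0,0,0};
        rle_encode(data, 24);   /* prints (12,0)(4,5)(8,0) */
        return 0;
    }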
18
Patent issues (from comp.compression faq)
(a) Run length encoding Tsukiyama has two
patents on run length encoding 4,586,027 and
4,872,009 granted in 1986 and 1989
respectively. The first one covers run length
encoding in its most primitive form a length
byte followed by the repeated byte. The second
patent covers the 'invention' of limiting the
run length to 16 bytes and thus the encoding of
the length on 4 bits. Here is the start of
claim 1 of patent 4,872,009, just for pleasure
1. A method of transforming an input data
string comprising a plurality of data bytes,
said plurality including portions of a plurality
of consecutive data bytes identical to one
another, wherein said data bytes may be of a
plurality of types, each type representing
different information, said method comprising
the steps of ...
19
Popular compression
20
Text message examples
21
Text message quiz
  • IYSS
  • BTW
  • L8
  • OIC
  • PCM
  • IYKWIMAITYD
  • ST2MORO
  • TTFN
  • LOL
  • The abuse selection
  • lt-(
  • (
  • --------)
  • IUTLUVUBIAON

22-24
(No Transcript)
25
Data compression
  • Data compression requires the identification and
    extraction of source redundancy.
  • All meaningful data contains redundancy
    (patterns) in its uncoded form - otherwise it
    would resemble noise.
  • We can compress data by-
  • giving shorter codewords to frequently occurring
    symbols or words.
  • sending only the difference between symbols or
    pictures
  • sending only the difference between the data and
    some model of normal data
  • by referring to previously repeated strings
  • by sending approximations of signal values

26
Lossless and lossy compression
Compression methods which enable exact recovery
of the data on decompression are called lossless
or reversible. Methods which permanently remove
redundancy by approximating data values are
called lossy or irreversible.
27
Some data: powers of ten
1 bit: a binary decision (yes/no, on/off)
100 Kbyte: a low-resolution photograph
2 Mbyte: a high-resolution photograph
5 Mbyte: complete works of Shakespeare, or 5 s of TV-quality video
10 Mbyte: a digital chest X-ray
100 Mbyte: 1 metre of shelved books
1 Gbyte: a low-resolution video
2 Terabytes (10^12 bytes): an academic research library
2 Petabytes (10^15 bytes): all US research libraries
5 Exabytes (10^18 bytes): all words ever spoken
1 Zettabyte: 10^21 bytes
1 Yottabyte: 10^24 bytes
28
The compression trade-off
Costs: delay; legal issues; specialized hardware; data more sensitive to error; need for decompression key.
Benefits: reduced time and cost; more efficient storage; faster transmission.
29
Compression and channel errors
  • Noisy/busy channels are problematic for
    compressed data.
  • Unless compressed data is delivered 100%
    error-free (i.e., no changes, no lost packets)
    the whole file is often destroyed.
  • E.g.

30
Robust methods
(diagram: compress -> channel error/s -> decompress)
31
Compression
  • All meaningful data contains redundancy
    (patterns) in its uncoded form -
  • otherwise it would resemble noise.
  • We can compress data by-
  • giving shorter codewords to frequently occurring
    symbols or words.
  • sending only the difference between symbols or
    pictures
  • sending only the difference between the data and
    some model of normal data
  • by referring to previously repeated strings
  • by sending approximations of signal values

32
Measuring information
For any discrete probability distribution, the
value of the entropy function (H) is given by:

H = - sum_{i=1..q} p_i log_r(p_i)   (r = radix; 2 for binary)

Units: bits/symbol, where the source alphabet has q
symbols of probability p_i (i = 1..q). Note the change
of base: log_2(x) = log_10(x) / log_10(2).
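As a quick check of the formula, here is a minimal C sketch (the probabilities are those of the Huffman exercise later in these notes):

    #include <stdio.h>
    #include <math.h>

    /* first-order entropy H = -sum p_i log2(p_i), in bits/symbol */
    double entropy(const double *p, int q)
    {
        double h = 0.0;
        for (int i = 0; i < q; i++)
            if (p[i] > 0.0)
                h -= p[i] * (log(p[i]) / log(2.0));  /* change of base */
        return h;
    }

    int main(void)
    {
        double p[] = {0.1, 0.3, 0.2, 0.05, 0.15, 0.2};
        printf("H = %.3f bits/symbol\n", entropy(p, 6));
        return 0;
    }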
33
Private study
  • Revise basic information theory.
  • Consider the advantages and disadvantages of data
    compression in real systems.
  • Investigate patent issues relevant to data
    compression.

34
  • Static vs. adaptive methods

35
Compression continued Static vs. adaptive
compression
Compression algorithms remove source redundancy
by using some definition (model) of the source
characteristics. Compression algorithms which
use a pre-defined source model are static.
Algorithms which use the data itself to fully
or partially define this model are referred to as
adaptive.    
36
Compression continued Static vs. adaptive
compression
  Static implementations can achieve very good
compression ratios for well defined sources.
  Adaptive algorithms are more versatile, and
update their source models according to current
characteristics. However, they have lower
compression performance, at least until a
suitable model is properly generated. The
understanding of source models and trajectories is
important.
37
  • Lossless compression - Huffman

38
Huffman compression
Source character frequency statistics are used
to allocate codewords for output. Compression
can be achieved by allocating shorter codewords
to the more frequently occurring characters
(e.g., in Morse, E is "." while the much rarer
Y is "-.--").
39
Huffman compression
By arranging the source alphabet in descending
order of probability, then repeatedly adding the
two lowest probabilities and resorting, a Huffman
tree can be generated. The resultant codewords
are formed by tracing the tree path from the root
node to the codeword leaf.   Rewriting the
table as a tree, 0s and 1s are assigned to the
branches. The codewords for each symbol are
simply constructed by following the path to their
nodes.
40
Huffman compression
41
Huffman compression
42
Huffman compression
43
Huffman compression - exercises
Create a Huffman tree for the source described
below:

char:      a     b     c     d     e     f
frequency: 0.1   0.3   0.2   0.05  0.15  0.2

Generate the compressed output for the source
c,b,d,b,d,d,e,a,d,f,d,b and calculate the entropy
of the original source (as described in 1), the
number of bits required for the encoded string
above and the compression ratio
achieved. Consider the effects of uncorrected
channel errors on a Huffman-compressed source.
Try encoding a sequence and decoding it with
single-bit errors in different locations. What
happens and why?
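For private study, a minimal C sketch of the tree construction for this exercise (repeatedly merging the two least probable unmerged nodes; it prints code lengths only, since the 0/1 labelling of branches is arbitrary):

    #include <stdio.h>

    #define N 6   /* number of source symbols */

    int main(void)
    {
        const char *sym = "abcdef";
        double p[2*N-1] = {0.1, 0.3, 0.2, 0.05, 0.15, 0.2};
        int parent[2*N-1], used[2*N-1] = {0};
        int total = N;

        /* build the tree: a full binary tree has 2N-1 nodes for N leaves */
        while (total < 2*N-1) {
            int lo1 = -1, lo2 = -1;   /* the two smallest active nodes */
            for (int i = 0; i < total; i++) {
                if (used[i]) continue;
                if (lo1 < 0 || p[i] < p[lo1]) { lo2 = lo1; lo1 = i; }
                else if (lo2 < 0 || p[i] < p[lo2]) { lo2 = i; }
            }
            used[lo1] = used[lo2] = 1;          /* merge them ...           */
            p[total] = p[lo1] + p[lo2];         /* ... into a new node with */
            parent[lo1] = parent[lo2] = total;  /* the summed probability   */
            total++;
        }

        /* codeword length of each symbol = depth of its leaf */
        for (int i = 0; i < N; i++) {
            int d = 0;
            for (int n = i; n != 2*N-2; n = parent[n]) d++;
            printf("%c (p=%.2f): %d-bit codeword\n", sym[i], p[i], d);
        }
        return 0;
    }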
44-48
(No Transcript)
49
  • Lossless compression
  • - arithmetic

50
Arithmetic coding
The method of compression employed by Huffman
coding involves the allocation of shorter
codewords to more frequently occurring
characters. It is, however, unable to allocate
fractional codeword lengths, so a character
must be allocated at least a one-bit codeword no
matter how high its frequency. Huffman coding
cannot, therefore, achieve optimal compression.
Arithmetic coding offers an alternative to
Huffman coding, enabling characters to be
represented as fractional bit lengths. This is
achieved by representing the source as a real
number greater than or equal to zero but less
than one, denoted as the range [0,1).
51
Arithmetic coding patent issues (from
comp.compression faq)
IBM holds many patents on arithmetic coding
(4,122,440 4,286,256 4,295,125 4,463,342
4,467,317 4,633,490 4,652,856 4,792,954 4,891,643
4,901,363 4,905,297 4,933,883 4,935,882
5,045,852 5,099,440 5,142,283 5,210,536
5,414,423 5,546,080). It has patented in
particular the Q-coder implementation of
arithmetic coding. The JBIG standard, and the
arithmetic coding option of the JPEG standard,
require use of the patented algorithm.
52
Arithmetic coding (example)
53
  • Lossless compression
  • - Lempel-Ziv

54
Lempel-Ziv compression
  • Lempel-Ziv 1977 and 1978 (LZ77 and LZ78)
  • Welch 1984
  • LZW - implemented in many popular compression
    methods.
  • Lossless, universal (adaptive)
  • Exploits string-based redundancy
  • Not good for image compression (why?)
  • Unisys patent on .GIF implementation

55
LZ77 Patent issues (from comp.compression faq)
Waterworth patented (4,701,745) the algorithm
now known as LZRW1, because Ross Williams
reinvented it later and posted it on
comp.compression on April 22, 1991. The
same algorithm was later patented by
Gibson & Graybill. The patent office failed to
recognize that the same algorithm was patented
twice, even though the wording used in the two
patents is very similar. The Waterworth patent
is now owned by Stac Inc, which won a lawsuit
against Microsoft, concerning the compression
feature of MSDOS 6.0. Damages awarded were $120
million. (Microsoft and Stac later settled out
of court.)
56
LZ78 Patent issues (from comp.compression faq)
One form of the original LZ78 algorithm was
patented (4,464,650) by its authors Lempel,
Ziv, Cohn and Eastman. This patent is owned by
Unisys. The LZW algorithm used in 'compress'
is patented by IBM (4,814,746) and Unisys
(4,558,302). It is also used in the V.42bis
compression standard, in Postscript Level 2, in
GIF and TIFF. Unisys sells the license to
modem manufacturers for a one-time fee. CompuServe
is licensing the usage of LZW in GIF products
for 1.5% of the product price, of which 1% goes
to Unisys; usage of LZW in non-GIF products
must be licensed directly from Unisys.
The IBM patent application was first filed three
weeks before that of Unisys, but the US patent
office failed to recognize that they covered
the same algorithm. (The IBM patent is more
general, but its claim 7 is exactly LZW.)
57
(No Transcript)
58
Lempel-Ziv dictionaries
  • How they work -
  • Parse data character by character generating a
    dictionary of previously seen strings
  • LZ77 sliding window dictionary
  • LZ78 full dictionary history
  • LZ78 Description
  • With a source of 8 bits/character (2^8 = 256,
    i.e., source characters 0-255 taken), extra
    characters are needed to describe strings (in
    the dictionary).
  • Output is all 9-bit, i.e., 0-511.
  • Need to reserve some characters for special
    codewords, say 256-262, so dictionary entries
    begin at 263.
  • We can refer to dictionary entries as D1, D2, D3
    etc. (equivalent to 263, 264, 265 etc.)
  • Dictionaries typically grow to 12- and 15-bit
    lengths. (A code sketch follows below.)
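The code sketch mentioned above: a minimal LZW-style encoder in C following the slide's conventions (literals 0-255, codes 256-262 reserved, dictionary entries from 263); the linear-search dictionary and the test string are purely illustrative:

    #include <stdio.h>

    #define DICT_MAX   512   /* 9-bit codes: 0-511         */
    #define FIRST_CODE 263   /* 256-262 reserved, as above */

    /* each entry is a previously seen string: (prefix code, appended char) */
    static int pref[DICT_MAX], app[DICT_MAX];
    static int next_code = FIRST_CODE;

    static int find(int prefix, int ch)   /* linear dictionary search */
    {
        for (int i = FIRST_CODE; i < next_code; i++)
            if (pref[i] == prefix && app[i] == ch) return i;
        return -1;
    }

    int main(void)
    {
        const char *src = "less careless less lossless";
        int prefix = (unsigned char)src[0];

        for (int i = 1; src[i] != '\0'; i++) {
            int ch = (unsigned char)src[i];
            int code = find(prefix, ch);
            if (code >= 0) {
                prefix = code;              /* string known: try to extend it */
            } else {
                printf("%d ", prefix);      /* emit longest known string      */
                if (next_code < DICT_MAX) { /* grow the dictionary            */
                    pref[next_code] = prefix;
                    app[next_code]  = ch;
                    next_code++;
                }
                prefix = ch;                /* restart from the new character */
            }
        }
        printf("%d\n", prefix);
        return 0;
    }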

59-61
(No Transcript)
62
Lempel-Ziv continued
Compression of our example: the original source
contains 13 x 8-bit characters (104 bits) and the
compressed output contains 10 x 9-bit codewords
(90 bits). Compression ratio = (old size/new
size):1 = 104/90, i.e., approximately 1.16:1.
63
Lempel-Ziv exercises
1. (i) How is decompression performed? Using
the source output from our example, build a
dictionary and decompress the source. (ii) Why
are 9-bit codewords required? (iii) What happens
when the dictionary has 256 entries? How can
it be extended? (iv) Why should the dictionary
size be limited and how can it be made more
efficient? 2. (i) How is decompression
performed? (ii) Using the source output from
our example, build a dictionary and
decompress the source. 3. Compress the
string less careless less lossless.
64
Lempel-Ziv private study
1. Identify applications of LZ-based methods and
investigate patent issues (see comp.compression
faq) 2. What is the LZ exception? How is it
interpreted?
65-66
(No Transcript)
67
  • Compression of images

68
  • Image data

69
The test image set
70
The digital image
71
Image pixels
72
x,y or r,c 0 or 1?
  • Common problems in manipulating image files
    involve the mismatch between (x,y) notation and
    matrix (row, column), i.e., (y,x). Both are
    correct, but programs must treat them
    consistently.
  • The other common error is the +1/-1 error.
    Pixels can be defined as starting at 0 or 1;
    again, both are correct but programs must be
    consistent.

73
mouse
Mouse.raw has 320x200 pixels. Mouse is an 8-bit
grey-scale test image we will use later. 8 bits
per pixel (bpp) means we have 256 values
(black = 0, white = 255).
74
A closer look at mouse
75
Negative mouse: (xn,yn) = 255 - (xn,yn)
76
Reducing resolution Mouse at 80x50 pixels
77
Mouse with only 16 colours (i.e., 4 bits per
pixel)
78
Mouse with just 2 colours (i.e., 1 bit per pixel)
79
Paintshop Pro software (Jasc)
80
Paintshop Pro software (Jasc)
81
Differencing neighbouring pixels
82
Accessing .raw image data
83
Mouse.raw at 10x10 pixels!!
84
Opening a 10x10 pixel binary file in C
    #include "stdio.h"
    #include "stdlib.h"

    int main(void)
    {
        FILE *image_in;
        int r, c;
        unsigned char image[10][10];           /* image size in rows and columns */

        printf("Very simple program for reading image values into an array ....\n");

        image_in = fopen("mouse10.raw", "rb"); /* open binary image for reading */
        if (image_in == NULL) {
            printf("File not found ... \n");
        } else {
            fread(image, 1, 100, image_in);    /* read all 10x10 = 100 bytes */
            for (r = 0; r < 10; r++) {
                for (c = 0; c < 10; c++)
                    printf("%d\t", image[r][c]);   /* print values with tabs */
                printf("\n");
            }
            fclose(image_in);
        }
        return 0;
    }

85
  • Lossless image compression

86
Lossless image compression
  • What images might we want to compress losslessly?
  • How would Huffman and arithmetic methods perform
    on images?
  • How could we improve their performance?

87
Differencing (W.C.Tham)
88
.GIF (Graphics Interchange Format)
  • Unisys patent
  • Differencing (DPCM, differential pulse code
    modulation)
  • LZW (an LZ78 variant)
  • Lossless but limited to 8 bits (i.e., 256 colours)
  • Recommended for graphics and line drawings and
    NOT natural or real-world images.

89-90
(No Transcript)
91
  • Human vision
  • Lossy image and video compression methods use
    models of the human visual system to determine
    which aspects of the data can be lost.
  • The human eye is a marvellously sophisticated and
    highly adaptive system. There is also a
    significant amount of image processing performed
    by the brain.
  • Newer compression methods attempt simple but
    improved methods of modelling regions of interest
    and the human visual system.

92
The Human Vision System (HVS)
  • The eye can adapt to a range of intensities in
    the order of 10^10, from lowest visible light to
    highest bearable glare.
  • Non-linear contrast sensitivity
  • Cortex filter
  • Saccades and fixations

93
The rods and cones of the eye
  • Cones and rods two types of discrete light
    receptors.
  • 6-7 million cones centrally located (area of
    retina called the fovea) highly sensitive to
    colour and bright light do not work well in dim
    light. Each has its own nerve to the brain.
  • 75 million rods across the surface of the
    retina sensitive to light but not colour.
    Share nerve endings. Work in dim and bright
    conditions.

94
Eye-tracking
95
The portrait artist's eye movement
96
Our research angiogram video
97
Foveated imaging
98
Foveated imaging
99
Foveated imaging
100
Visual perception selected examples
  • "The Mach band effect" (studied by Ernst Mach in
    the 1860s) involves an exaggeration of contrast
    at edges.
  • see http://www.langara.bc.ca/psychology/brightness.htm
  • "The change in the sensitivity of one part of the
    retina by the level of brightness excitation in
    an adjacent retinal area is called lateral
    inhibition and accounts for most of the contrast
    effects...the visual system responds mostly to
    variations in light intensity across the visual
    field which interact with each other within the
    visual pathways."

101
Contrast Sensitivity
(figure: test patches at contrast levels 0-4, circle constant, background constant; just noticeable difference (JND) at level 2)
103
Contrast Sensitivity
(figure: test patches at contrast levels 0-4; top: background different from both halves, JND at level 4; bottom: background same as right half, JND at level 2)
105
Interpreting images "Spot" the dog
Top right is a well-known image from textbooks on
visual perception. If you have seen it before
you will have no trouble detecting the
dog. Bottom right is an upside-down
world. Below is the ambiguous bunny/duck and
seal/donkey.
(images not reproduced in transcript)
106
What do you see?
  • What do you see in each of the four images top
    right?
  • Bottom left (proximity): what is the
    relationship between the two circles?
  • Bottom: compare the three images.

(images not reproduced in transcript)
107
Can you accurately read the pen colours in order?
  • red green blue green red yellow blue
  • yellow red blue yellow green red blue
  • blue yellow yellow blue red blue yellow
  • red green green red green green green
  • green blue blue yellow yellow yellow
  • yellow red green yellow blue green red
  • blue green red red green red green blue
  • red yellow yellow red blue yellow blue
  • yellow blue red blue green green yellow
  • green red yellow blue yellow blue red
  • blue red blue green red yellow blue
  • green green red yellow blue yellow blue

108
(No Transcript)
109
  • Lossy image compression methods

110
Lossless and lossy compression
Compression methods which enable exact recovery
of the data on decompression are called lossless
or reversible. Methods which permanently remove
redundancy by approximating data values are
called lossy or irreversible.
111
What is Quality?
Measurement methods:
Objective: impartial measuring methods.
Subjective: based on personal feelings.
We need definitions of quality (degree of
excellence?) and to define how we will compare
the original and decompressed images.
112
Measuring Quality
Objectively: e.g., Root Mean Square Error (RMSE).
Subjectively: e.g., Mean Opinion Scores (MOS):
5 = very good ... 1 = very poor, or
5 = perfect, 4 = just noticeable, 3 = slightly
annoying, 2 = annoying, 1 = very annoying.
113
Lossy methods
(figure: lossy compression example at 30:1)
114
Optimizing the trade-off
115
(figures: original vs. approximated detail)
116
The rate/distortion trade-off
117
Subjective testing - things to consider
  • Which images will be shown?
  • E.g., Direct comparison (is the original always
    visible?)
  • What are the viewing conditions?
  • Lighting, distance from screen, monitor
    resolution etc. etc.
  • What is the content and how important is it?
  • Is all content equally important?
  • Who are the viewers and how do they perform?
  • Viewer expertise/cooperation/consistency/
    calibration (are viewers' scores relevant to the
    application, consistent over time, consistent
    between each other?)

118
(No Transcript)
119
The test image set
120
(No Transcript)
121
  • Lossy methods - DCT

122
DCT image compression
  • The philosophy behind DCT image compression is
    that the human eye is less sensitive to
    higher-frequency information (and also more
    sensitive to intensity than to colour), so that
    compression can be achieved by more coarsely
    quantising the large amount of high-frequency
    components usually present.
  • Firstly, the image must be transformed into the
    frequency domain. Since it would be
    computationally expensive to transform even a low
    resolution image in one full block, the image is
    subdivided.
  • The JPEG (Joint CCITT and ISO Photographic
    Experts Group) standard algorithm for full-colour
    and grey-scale image compression uses 8x8 blocks.

123
DCT image compression
  • The DCT itself does not achieve compression, but
    rather prepares the image for compression.
  • Once in the frequency domain the image's
    high-frequency coefficients can be coarsely
    quantised so that many of them (>50%) can be
    truncated to zero.
  • The coefficients can then be arranged so that the
    zeroes are clustered (zig-zag collection) and
    Run-Length Encoding (RLE), whereby repeated
    values are referenced and followed by a counter
    indicating the number of successive occurrences,
    can be applied.
  • The remaining data is then compressed with
    Huffman coding (arithmetic coding is also an
    option in the standard though patent issues have
    hindered application).

124
DCT stages
  • Blocking (8x8)
  • DCT (Discrete Cosine Transformation)
  • Quantization
  • Zigzag Scan
  • DPCM on DC component
  • RLE on AC Components
  • Entropy Coding

125
DCT compression: the DCT bases
126
DCT mathematics
127
The JPEG quantization matrix
16  11  10  16  24  40  51  61
12  12  14  19  26  58  60  55
14  13  16  24  40  57  69  56
14  17  22  29  51  87  80  62
18  22  37  56  68 109 103  77
24  35  55  64  81 104 113  92
49  64  78  87 103 121 120 101
72  92  95  98 112 100 103  99
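A minimal C sketch of how this matrix might be applied to an 8x8 block of DCT coefficients (the coefficient values in main are fabricated for illustration; a real codec would take them from a separate DCT stage):

    #include <stdio.h>
    #include <math.h>

    /* the JPEG luminance quantisation matrix above */
    static const int Q[8][8] = {
        {16,11,10,16,24,40,51,61},    {12,12,14,19,26,58,60,55},
        {14,13,16,24,40,57,69,56},    {14,17,22,29,51,87,80,62},
        {18,22,37,56,68,109,103,77},  {24,35,55,64,81,104,113,92},
        {49,64,78,87,103,121,120,101},{72,92,95,98,112,100,103,99}
    };

    /* quantise: many high-frequency entries round to zero, which is
       where the compression comes from */
    void quantise(const double dct[8][8], int out[8][8])
    {
        for (int u = 0; u < 8; u++)
            for (int v = 0; v < 8; v++)
                out[u][v] = (int)round(dct[u][v] / Q[u][v]);
    }

    int main(void)
    {
        double dct[8][8]; int q[8][8], zeros = 0;
        for (int u = 0; u < 8; u++)
            for (int v = 0; v < 8; v++)
                dct[u][v] = 100.0 / (1 + u + v);   /* fake coefficients that
                                                      decay with frequency  */
        quantise(dct, q);
        for (int u = 0; u < 8; u++)
            for (int v = 0; v < 8; v++)
                if (q[u][v] == 0) zeros++;
        printf("%d of 64 coefficients quantised to zero\n", zeros);
        return 0;
    }

On decode, dequantisation simply multiplies each value back by Q[u][v]; the rounding loss is what makes the method lossy.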
128
Nelson's simpler linear quantizer
129
The JPEG quantization
130
DCT compression
131
DCT compression
132
DCT compression
133
DCT compression
134
Gibbs phenomenon
  • The presence of artefacts around sharp edges is
    referred to as the Gibbs phenomenon.
  • These are caused by the inability of a finite
    combination of continuous functions to describe
    jump discontinuities (e.g. edges).
  • At higher compression ratios these losses become
    more apparent, as does the blocked nature of the
    compressed form.
  • The loss of edge clarity can be clearly seen in a
    difference mapping comparing an original image
    with its heavily compressed equivalent.

135
Original test image
136
Lossy DCT reconstruction
Q = 25, CR = 11.6:1
137
The difference (Gibbs phenomenon)
138
(No Transcript)
139
  • Lossy methods
  • - fractal

140
Iterative generation of the Sierpinski
Triangle/Gasket
141
Fractal methods
  • Fractal compression attempts to reverse the
    process of fractal generation by searching an
    image for self-similarity and generating IFS
    descriptions.
  • Decompression is achieved by applying PIFS
    iteratively until convergence is achieved.
  • Asymmetry: the compression process requires
    significant searching and calculation of best
    matches, making it a slow process; the
    decompression process requires simple PIFS
    constructions and is by comparison very fast.
  • Enhanced resolution: reconstructed images can be
    regenerated (calculated from the fractal
    self-similarities) such that they can appear
    higher resolution than the original!

142
Fractal image compression
143
Fractal image compression
144
Fractal methods
  • Compression losses - these introduce shifting
    errors.
  • Reconstructed images look "painted".
  • They tend to look more natural compared to Gibbs
    artefacts and DCT blocking (more subjectively
    acceptable), though pixel-based error measures
    (e.g. RMSE) are not so good.

145
Fractal compression theory - I
146
Fractal compression theory - II
147
Fractal compression theory - III
148-156
(No Transcript)
157
  • MPEG Digital Video
  • An introduction from 2001 lecture notes from Dr
    Nick Flowers

158
Contents
  • Analogue TV
  • Basic digital TV: the problem
  • MPEG-1 definition
  • Decimation (spatial, temporal and colour)
  • Spatial compression
  • Temporal compression
  • Difference coding
  • Motion compensation
  • Errors: GOP

159
Analogue TV
  • European TV format: 625 scan lines, 25
    interlaced frames per second, 4:3 aspect ratio
  • Interlacing reduces the vertical resolution to
    312.5 lines
  • Horizontal resolution is 312.5 x (4/3) = 417 lines
  • Bandwidth required: 625 x 417 x 25 = 6.5 MHz
  • Analogue colour information is cleverly added
    without increasing bandwidth (NTSC, PAL and SECAM
    standards)

160
Digital TV Raw video
  • For digital use, 8 bit resolution is adequate
  • For colour pictures we have Red, Green and Blue
    (RGB)
  • To digitise, we need to sample at twice the
    highest frequency (6.5 MHz) and convert three
    colours (RGB) at 8 bits each
  • Bitrate: (6.5 x 2) x 3 x 8 = 312 Mbits/sec
  • (compare with analogue bandwidth of 6.5 MHz)
  • Digital TV seems to have created a big problem
    using raw digitisation we need coding to help

161
Coding of Moving Pictures and Associated Audio
for Digital Storage Media at up to about 1.5
Mbits/sec. International Standard IS-11172,
completed in 10/92.
Commonly known as MPEG-1
  • Moving Picture Experts Group 1st phase
  • Video CD - A standard for video on CDs - VHS
    quality
  • Audio CDs have a data rate of 1.5 Mb/s; video
    has a raw data rate of 312 Mb/s, about 200 times
    higher!
  • Something has to be lost

162
MPEG-1 decimation
  • This means just throwing data away, not even
    attempting to preserve the data
  • Three different areas for decimation
  • Spatial
  • Colour
  • Temporal
  • (plus audio)
  • Temporal - Interlacing is dropped giving 25 full
    frames per second

163
Spatial decimation
(figure: 625 half-lines / 417 lines reduced to 352 x 288 pixels)
  • European broadcast TV standard
  • Resolution is reduced to 352 (width) by 288
    (height) pixels
  • Source Input Format (SIF)
164
Colour decimation
  • Human perception is most sensitive to luminance
    (brightness) changes
  • Colour is less important e.g. a black and white
    photograph is still recognisable
  • RGB encoding is wasteful; human perception
    tolerates poorer colour.
  • Use YUV and only encode chrominance (UV) at half
    resolution in each direction (176 by 144,
    Quarter SIF). This gives 0.25x the data for U
    and V compared to Y.

165
(figures: original (100%); 0.5 UV (25%); 0.25 UV (6.25%); 0.2 UV (4%); 0.1 UV (1%); 0.0 UV (0%))
166
Temporal decimation
  • Three standards for frame rate in use today
  • Cinema uses 24 FPS
  • European TV uses 25 FPS
  • American TV uses 30 FPS
  • Lowest acceptable frame rate is 25 FPS so little
    decimation can be achieved for Video CD
  • MPEG-1 does allow much lower frame rates e.g. for
    internet video but quality is reduced

(figures: 15 FPS vs. 25 FPS)
167
Decimation the result
  • After throwing away all this information, we
    still have a data rate of (assuming 8 bits per
    YUV):
  • Y: (352 x 288) x 25 x 8 = 20.3 Mb/s
  • U: (352/2 x 288/2) x 25 x 8 = 5.07 Mb/s
  • V: (352/2 x 288/2) x 25 x 8 = 5.07 Mb/s
  • TOTAL (for video): 30.45 Mb/s
  • MPEG-1 audio runs at 128 Kb/s
  • Video CD target is 1.5 Mb/sec
  • Space for video: 1.5 - 0.128 = 1.372 Mb/s
  • So now use compression to get a saving of 22:1

168
Spatial compression
  • A video is a sequence of images and images can
    be compressed
  • JPEG uses lossy compression; typical compression
    ratios are 10:1 to 20:1
  • We could just compress images and send these
  • Time does not enter into the process
  • This is called intra-coding (intra = within)

169
Spatial compression
  • Very similar to JPEG
  • Image divided into 8 by 8 pixel sub-blocks
  • Number of blocks: 352/8 by 288/8 = 44 by 36
    blocks
  • Each block DCT coded
  • Quantisation - dropping low-amplitude
    coefficients
  • Huffman coded
  • This produces a complete frame called an Intra
    frame (I)

170
Temporal compression
  • Spatial compression does not take into account
    similarities between adjacent frames
  • Talking heads: backgrounds don't change
  • Consecutive images (1/25th second apart) are very
    similar
  • Just send the difference between adjacent frames

171
Difference coding
  • Only send difference between this frame and
    previous frame
  • Result is very sparse high compression now
    possible using block-based DCT as before

172
Difference coding
  • Using the previous frame and the difference frame
    we can re-create the original; this is called a
    predicted frame (P)
  • This re-created frame can then be used to form
    the next frame and the process repeated

173
Difference coding
  • Difference coding is good for talking heads
  • Not good for scenes with lots of movement

174
Motion compensation
  • Difference coding is good, but often an object
    will simply change position between frames
  • DCT coding not as good as for sparse difference
    image

175
Motion compensation
  • Video is three-dimensional (X,Y, Time)
  • DCT coding reduces information in X and Y
  • Stationary objects do not move in time
  • Motion compensation takes time into account
  • No need to code the image of the object just
    send a motion vector indicating where it has
    moved to

176
Motion compensation
  • Called Motion Compensation since we actually
    adjust the position of the object to compensate
    for the movement

177
Motion compensation the problems
  • Objects rarely move and retain their shape
  • If object moves and changes shape a little
  • Find movement and send motion vector
  • Subtract moved object in last frame from object
    in new frame
  • DCT code the difference
  • But what is an object? We have an array of
    pixels.
  • Could try and segment the image into separate
    objects, but this requires very intensive
    processing!
  • Simple option: split the image up into blocks
    that don't correspond to objects in the image:
    macroblocks

178
Macroblocks
  • Macroblocks can be any shape or size
  • If small, then we need to send lots of vectors
  • If large, then we are unlikely to find a matching
    macroblock
  • MPEG-1 uses a 16 by 16 pixel macroblock
  • Each macroblock is the unit for motion
    compensation
  • Find macroblock in previous frame similar to this
    one
  • If match found, send motion vector
  • Subtract this macroblock from previous displaced
    macroblock
  • DCT code the difference
  • If no matching block found, abandon motion
    compensation and just DCT code the macroblock

179
MPEG-1 compression
  • Eyes - difference data DCT coded
  • Ball - motion vector coded, actual image data not
    coded
  • Rabbit - Intra coded with no temporal compression
  • Coding method varies between macroblocks; the
    whole is a P frame

180
Group of pictures - GOP
  • Problem with P frames is any errors are
    propagated (like making copies of copies of
    copies) - so we regularly send full (I) frames to
    eliminate errors
  • Every 0.5 seconds approx. we send a full frame (I)
  • I P P P P P P P P P P P I P P P P P P P P P P P I P P
  •   |<------------ GOP ------------>|
  • In the event of an error, the data stream is
    resynchronised after 12/25th of a second (or
    15/30th for the USA)
  • The sequence between I frames is called a Group Of
    Pictures

181
  • Data security and encryption

182
Security and Cryptographic Algorithms
  • Source A summary of Chapter Eleven and Simon
    Singh's "The Code Book"
  • A summary of threats
  • Security requirements
  • Steganography and cryptography
  • Substitution ciphers
  • Keys and the Caesar cipher
  • Cryptanalysis
  • The Vigenère cipher
  • DES (Data Encryption Standard)
  • Diffie-Hellman-Merkle key exchange
  • RSA (Rivest, Shamir and Adleman)
  • PGP (Pretty Good Privacy)

183
Network security threats
  • Information can be observed and recorded by
    eavesdroppers.
  • Imposters can attempt to gain unauthorised access
    to a server.
  • An attacker can flood a server with requests,
    causing a denial-of-service for legitimate
    clients.
  • An imposter can impersonate a legitimate server
    and gain sensitive information from a client.
  • An imposter can place themselves in the middle,
    convincing a server that it is a legitimate
    client and a client that it is a legitimate
    server.

184
Network scrutiny
  • Footprinting
  • Gathering information on a network (creating a
    profile of an organization's security posture;
    identifying a list of network and IP addresses.)
  • Scanning
  • Identifying live and reachable target systems.
    (Ping sweeps, port scans, application of
    automated discovery tools).
  • Enumeration
  • Extracting account information. (Examining
    active connections to systems).

185
Passwords
  • Some users, when allowed to choose any password,
    select absurdly short ones.
  • Stallings quotes the example below from Purdue
    University.
  • People also tend to select guessable passwords.

186
Passwords
  • Stallings references a report which demonstrates
    the effectiveness of password guessing.
  • The author collected UNIX passwords from a
    variety of encrypted password files.
  • Nearly 25% of passwords were guessed with the
    following strategy:
  • Try user's name, initials, account name (130
    permutations for each).
  • Try dictionary words, including the system's own
    on-line dictionary (60,000 words).
  • Try permutations of words from the step above,
    including making the first letter uppercase or a
    control character, making the entire word
    uppercase, reversing the word, changing o's to
    0's etc. (another 1 million words to try).
  • More capitalization permutations (another million
    words to check).

187
(No Transcript)
188
Behaviour profiles
189
Security requirements
  • Privacy or confidentiality: information should be
    readable only by the intended recipient.
  • Integrity: the recipient can confirm that the
    message has not been altered during transmission.
  • Authentication: it is possible to verify the
    identity of the sender and/or receiver.
  • Nonrepudiation: the sender cannot deny having
    sent a given message.
  • The above requirements are not new and various
    security mechanisms have been used for many years
    in important transactions.
  • What is new is the speed at which break-in
    attempts can be made from a distance by using a
    network.

190
Steganography
  • Steganography (from Greek steganos-covered and
    graphein-to write) involves hiding the existence
    of a message.
  • Herodotus (chronicler of the 5th century BC
    Greece/ Persian conflicts) recounts how an
    important message was written onto the shaved
    head of a messenger and delivered when his hair
    had grown back.
  • Many hidden message systems have been used in the
    past
  • The Chinese wrote on fine silk which was covered
    in wax and swallowed.
  • A 16th-century Italian scientist described how to
    write on a hard-boiled egg with alum and vinegar.
    The solution passes through the shell and stains
    the egg.
  • The FBI found the first microdot (a photographed
    page reduced to the size of a full stop pasted
    into a document) in 1941.
  • More recently images were shown to be easily
    communicated in the LSBs of higher-resolution
    images.

191
Cryptography
  • Cryptography (Greek kryptos-hidden) is the
    science of making messages secure.
  • The original message is the plaintext.
  • The encryption/decryption algorithm is called the
    cipher.
  • The encrypted message is the ciphertext.
  • (Simon Singh mentions that the correct title for
    his book would be The Code and Cipher Book.)
  • Cryptography can be divided into two branches
    transposition and substitution.

192
Substitution ciphers
  • The Kama-sutra recommends women study 64 arts
    including chess, bookbinding, carpentry and the
    art of secret writing suggesting a simple
    substitution cipher involving simple pairing of
    letters of the alphabet.
  • The Caesar cipher applies a simple shift between
    the plain alphabet and cipher alphabet. The
    exact shift can be considered as the cipher key.
  • An example of a 3-letter-shifted Caesar cipher
    (lower case for plaintext and UPPERCASE for
    ciphertext):
  • a b c d e f g h i j k l m n o p q r s t u v w x y
    z
  • D E F G H I J K L M N O P Q R S T U V W X Y Z A B
    C
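A minimal C sketch of the shift (the 3-letter shift and the example phrase are illustrative):

    #include <stdio.h>

    /* encrypt lower-case plaintext with a Caesar shift, giving
       UPPERCASE ciphertext as in the convention above */
    void caesar(const char *plain, int shift, char *cipher)
    {
        int i;
        for (i = 0; plain[i] != '\0'; i++) {
            if (plain[i] >= 'a' && plain[i] <= 'z')
                cipher[i] = 'A' + (plain[i] - 'a' + shift) % 26;
            else
                cipher[i] = plain[i];   /* spaces etc. pass through */
        }
        cipher[i] = '\0';
    }

    int main(void)
    {
        char out[64];
        caesar("attack at dawn", 3, out);
        printf("%s\n", out);   /* DWWDFN DW GDZQ */
        return 0;
    }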

193
Keys and the Caesar cipher
  • The simple Caesar cipher has just 25 keys (i.e.,
    25 possible shifts), so cryptanalysts can
    quickly break the code by trying all possible
    shifts.
  • Allowing any pair of substitutions results in
    many, many more combinations, approx. 4 x 10^26,
    but the communication and safe preservation of
    the key becomes more difficult.
  • A compromise involves the use of a keyword or
    keyphrase, e.g.,
  • a b c d e f g h i j k l m n o p q r s t u v w x y
    z
  • J U L I S C A E R T V W X Y Z B D F G H K M N O P
    Q

194
Cryptanalysis
  • Singh describes how early Arab scholars invented
    cryptanalysis, for example, using frequency
    analysis to identify substitutions.
  • Relative frequencies of letters of the alphabet

195
The Vigenère cipher
  • The Vigenère cipher was published in 1586. It
    is a polyalphabetic cipher (as opposed to a
    monoalphabetic cipher) because it uses several
    cipher alphabets per message (and hence makes
    frequency cryptanalysis more difficult).
  • Again a key (keyword or keyphrase) is required.

196
(No Transcript)
197
DES The Data Encryption Standard
  • IBM invented "Lucifer", an encryption system
    adopted as the Data Encryption Standard (DES) in
    1976.
  • DES repeatedly scrambles (mangles) blocks of 64
    bits with an encryption key of 56 bits.
  • The key was reduced from a longer key to 56 bits
    as required by the American NSA (National
    Security Agency).

198
Triple DES
  • For added security, DES can use two keys as
    follows:
  • C = E_K1(D_K2(E_K1(P)))
  • and
  • P = D_K1(E_K2(D_K1(C)))

199
The key distribution problem
  • How can secret keys be exchanged by parties who
    want to communicate?
  • In the late 1970s, banks distributed keys by
    employing special dispatch riders who had been
    vetted and were among the company's most trusted
    employees. They would travel across the world
    with padlocked briefcases, personally
    distributing keys to everyone who would receive
    messages from the bank over the next week.

200
Diffie-Hellman-Merkle
  • Whitfield Diffie and Martin Hellman
  • Diffie accepted a research position with Hellman
    and was later joined by Ralph Merkle at Stanford.
  • Diffie imagined two strangers (Alice and Bob)
    meeting on the Internet and wondered how they
    could send each other an encrypted message which
    an eavesdropper (Eve) could not read.
  • Although safe key exchange had been considered
    impossible ...

201
A simple padlock example
  • It is possible to imagine secure message exchange
    over an insecure communication system.
  • Imagine Alice sends a package to Bob, securing it
    with a padlock. Bob can't open it but adds his
    own padlock to it and sends it back to Alice, who
    removes her padlock and sends it back to Bob.
    Bob can now open his own padlock. QED.
  • Alice and Bob both kept their keys safe and the
    package was never unlocked in the system.
  • The problem with applying this simple solution
    was the order of events. Encryption methods up
    to this time had required a "last on, first off"
    ordering.

202
One-way functions
  • Most mathematical functions are two-way. E.g.,
    doubling functions can be undone by halving.
    That is, most operations are reversible and the
    two operations tend to be of similar orders of
    complexity.
  • One-way functions are impossible, or very
    difficult to reverse.

203
Modular arithmetic for one-way functions
  • Solutions to modular arithmetic functions have
    apparently random results which makes guessing
    solutions based on adjacent results impossible.
  • x:            1   2   3    4    5    6
  • 3^x:          3   9   27   81   243  729
  • 3^x (mod 7):  3   2   6    4    5    1
  • In the simple example above it is very easy to
    calculate 3^x (mod 7) given x, but more difficult
    to reverse the process, i.e., to find x given
    3^x (mod 7).
  • With larger values, e.g., 453^x (mod 21,997), it
    is still relatively easy to encode x, but
    decoding would be extremely difficult.
  • In 1976, Diffie, Hellman and Merkle invented a
    system for safe key exchange using modular
    arithmetic to provide one-way functions.

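A minimal C sketch of the forward (easy) direction, using square-and-multiply so the huge intermediate value 3^x is never formed; it reproduces the table above:

    #include <stdio.h>

    /* computes base^exp (mod m); note that 64-bit multiplies can
       overflow for cryptographic-sized moduli, where big-number
       arithmetic is needed instead */
    unsigned long long modpow(unsigned long long base,
                              unsigned long long exp,
                              unsigned long long m)
    {
        unsigned long long result = 1;
        base %= m;
        while (exp > 0) {
            if (exp & 1)                 /* current exponent bit set? */
                result = (result * base) % m;
            base = (base * base) % m;    /* square for the next bit   */
            exp >>= 1;
        }
        return result;
    }

    int main(void)
    {
        for (unsigned long long x = 1; x <= 6; x++)
            printf("3^%llu (mod 7) = %llu\n", x, modpow(3, x, 7));
        return 0;
    }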
204
Public-key cryptography
  • A disadvantage of the Diffie-Hellman-Merkle key
    exchange is that it requires interaction (mutual
    exchange of information) between Alice and Bob,
    i.e., spontaneous interchange of encrypted
    messages is not possible.
  • Diffie went on to specify the requirements for an
    asymmetric key system, i.e., a system where the
    encryption and decryption keys are different.
  • The encryption key is the public key and the
    decryption key is the private key.
  • Again, with the padlock analogy, the public key
    is like a padlock - anyone can lock it - but
    opening it requires a private key kept safe by
    the owner.
  • So Alice can encrypt messages to Bob (without any
    special exchanges) using his widely-available
    public key.

205
RSA (Rivest, Shamir and Adleman)
  • Rivest, Shamir and Adleman at MIT developed the
    necessary public-key cryptography (RSA) specified
    by Diffie.
  • RSA was announced in Scientific American in
    August 1977.
  • The system involves large primes, p and q, which
    are multiplied together (N = p x q) as part of
    the public key.
  • Factoring N into p and q is extremely difficult
    for large N.
  • For banking transactions, N > 10^308 provides an
    extremely high level of security (a hundred
    million PCs would take more than 1000 years to
    find p and q.)

206
RSA
  • Select two large primes, p and q
  • N = p x q
  • Select an integer, e, ensuring e and (p-1) x (q-1)
    are relatively prime.
  • Public key: N, e
  • (N should be unique, but e need not be)
  • The ciphertext, C, of a message, M, is given by:
  • C = M^e (mod N)
  • So, everyone can encrypt their messages, since N
    and e are publicly available.
  • The private key, d, is calculated as:
  • e x d = 1 (mod (p-1) x (q-1))
  • Decryption is performed by solving:
  • M = C^d (mod N)

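A worked example as a C sketch, using textbook-sized illustrative values (far too small to be secure) and the modular-exponentiation routine sketched earlier:

    #include <stdio.h>

    unsigned long long modpow(unsigned long long base,
                              unsigned long long exp,
                              unsigned long long m)
    {
        unsigned long long r = 1;
        base %= m;
        while (exp > 0) {
            if (exp & 1) r = (r * base) % m;
            base = (base * base) % m;
            exp >>= 1;
        }
        return r;
    }

    int main(void)
    {
        unsigned long long p = 61, q = 53;       /* illustrative primes      */
        unsigned long long N = p * q;            /* 3233, part of public key */
        unsigned long long e = 17;               /* public exponent          */
        unsigned long long d = 2753;             /* e*d = 1 (mod (p-1)(q-1)) */

        unsigned long long M  = 65;              /* the message              */
        unsigned long long C  = modpow(M, e, N); /* encrypt: C = M^e (mod N) */
        unsigned long long M2 = modpow(C, d, N); /* decrypt: M = C^d (mod N) */

        printf("N=%llu C=%llu recovered M=%llu\n", N, C, M2); /* C=2790, M=65 */
        return 0;
    }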
207
Pretty Good Privacy (PGP)
  • Phil Zimmermann believed everybody had a right to
    the kind of privacy offered by RSA encryption.
    He developed a user-friendly implementation of
    RSA called PGP.
  • Since RSA is quite computationally complex,
    Zimmermann designed PGP to use RSA to encrypt the
    key of a cipher called IDEA, which is similar to
    DES.
  • With the key safely encrypted with RSA, all
    message data is then encrypted with the simple
    cipher, IDEA.
  • To send a message to Bob, Alice encrypts (and
    sends) an IDEA key with Bob's public RSA key and
    encrypts (and sends) her message with the IDEA
    key.
  • Bob uses his private RSA key to decrypt Alice's
    IDEA key, which he then uses to decrypt Alice's
    message.

208
Digital signatures for verification
  • How can Bob be sure the message he receives is
    from Alice? Anyone can use his public key to
    encrypt messages for him.
  • Solution Alice can use her PRIVATE key to
    ENCRYPT the message (note - the private key is
    usually used for decryption).
  • Any message encrypted with the private key can be
    decrypted by the public key - so this is not
    secure (everyone has the public key) - but it
    does prove authorship.
  • So, if Alice encrypts with Bob's public key,
    privacy is guaranteed. If she encrypts with her
    private key, she can prove authorship.
  • To ensure privacy AND authorship, she first
    encrypts the message with her private key, then
    encrypts the result with Bob's public key.

209-210
(No Transcript)
211
  • Channel errors and compressed data

212
Effects of errors on compressed data
(diagram: compress -> channel error/s -> decompress)
Consider the simple run-length encoding example again:
0000 0000 0000 5555 0000 0000 compressed to (12,0)(4,5)(8,0).
With a single error in the compressed version, say:
(12,2)(4,5)(8,0) becomes 2222 2222 2222 5555 0000 0000, or
(1,0)(4,5)(8,0) becomes 0555 5000 0000 0
213
Channel errors
  • Errors can occur singly or in bursts.
  • Most channel errors can be approximated by simple
    state models.
  • The BER (byte error rate) is a simple measure of
    channel quality.

The modified Gilbert model
In the Good state there are no errors. Bad Type
1 and Bad Type 2 represent two types of error
events. p1 and p2: prob. of start of errored
states; q1 and q2: prob. of end of errored states.
214
Channel errors
  • Burst length distributions provide important
    information about channel error activity.

(figure: burst-length distributions, probability vs. error length; the curves shift with increasing p1, decreasing q1, increasing p2 and decreasing q2)
215
Channel errors
  • Designing error protection systems
  • Know your enemy
  • Burst length distribution (and gap length
    distribution)
  • Trade-off the correction of data with the
    addition of redundancy
  • System cost/delay
  • Should errors be detected and corrected?
  • Detect and request retransmission? Detect and
    conceal?
  • How important is the data?
  • Is all data equally important? Should some data
    elements be protected more than others?
  • Is accepting bad data as good worse than
    rejecting good data as bad?

216
The Hamming (7,4) code
An example of an (n,k) block code: each codeword
contains n bits, k information bits and (n-k)
check bits. Hamming (7,4) is a nice easy code
but is not very efficient (a 75% overhead!). It
generates codewords with a Hamming distance of
3, i.e., all codewords differ in 3 locations. It
can correct one bit in error and detect two. A
codeword C = k1 k2 k3 k4 c1 c2 c3, where k1-k4 are
information bits and c1-c3 are check bits (the
systematic redundancy), calculated (mod 2) as:
c1 = k1 + k2 + k4
c2 = k1 + k3 + k4
c3 = k2 + k3 + k4
So the data 0 1 1 0 becomes the codeword
0 1 1 0 c1 c2 c3, with c1 = (0+1+0) = 1,
c2 = (0+1+0) = 1 and c3 = (1+1+0) = 0,
i.e., 0 1 1 0 1 1 0.
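A minimal C sketch of the encoder and the syndrome-based correction (bit order k1 k2 k3 k4 c1 c2 c3 as above):

    #include <stdio.h>

    void encode(const int k[4], int cw[7])
    {
        cw[0] = k[0]; cw[1] = k[1]; cw[2] = k[2]; cw[3] = k[3];
        cw[4] = k[0] ^ k[1] ^ k[3];   /* c1 = k1+k2+k4 (mod 2) */
        cw[5] = k[0] ^ k[2] ^ k[3];   /* c2 = k1+k3+k4         */
        cw[6] = k[1] ^ k[2] ^ k[3];   /* c3 = k2+k3+k4         */
    }

    /* recompute the checks; the pattern of failures locates one error */
    int correct(int cw[7])
    {
        int s1 = cw[4] ^ cw[0] ^ cw[1] ^ cw[3];
        int s2 = cw[5] ^ cw[0] ^ cw[2] ^ cw[3];
        int s3 = cw[6] ^ cw[1] ^ cw[2] ^ cw[3];
        int bad = -1;
        if (s1 && s2 && s3) bad = 3;   /* k4 is in all three checks */
        else if (s1 && s2)  bad = 0;   /* k1: c1 and c2 intersect   */
        else if (s1 && s3)  bad = 1;   /* k2: c1 and c3 intersect   */
        else if (s2 && s3)  bad = 2;   /* k3: c2 and c3 intersect   */
        else if (s1)        bad = 4;   /* a single failed check     */
        else if (s2)        bad = 5;   /* means the check bit       */
        else if (s3)        bad = 6;   /* itself was corrupted      */
        if (bad >= 0) cw[bad] ^= 1;
        return bad;   /* index of corrected bit, or -1 for no error */
    }

    int main(void)
    {
        int k[4] = {0, 1, 1, 0}, cw[7];
        encode(k, cw);            /* 0 1 1 0 1 1 0, as above          */
        cw[1] ^= 1;               /* inject the bit-2 error below     */
        printf("corrected bit index %d\n", correct(cw));  /* 1 (k2)   */
        return 0;
    }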
217
The Hamming (7,4) code
If we insert an error at bit 2 (marked x):
0 1 1 0 1 1 0 becomes 0 0 1 0 1 1 0
Recomputing the checks (the syndrome): c1 and c3
are wrong; these intersect at k2, hence k2 is in
error.
(Venn diagram: c1 = k1+k2+k4, c2 = k1+k3+k4, c3 = k2+k3+k4)
218
The Hamming (7,4) code
If we insert an error at bit 4 (marked x):
0 1 1 0 1 1 0 becomes 0 1 1 1 1 1 0
Recomputing the checks (the syndrome): c1, c2 and
c3 are all wrong, hence k4 is in error.
(Venn diagram: c1 = k1+k2+k4, c2 = k1+k3+k4, c3 = k2+k3+k4)
219
Product codes
Imagine we have 4 x 4 data bits: k11-k14,
k21-k24, k31-k34, k41-k44. Arranging them
horizontally, we can add error protection in the
vertical direction as well:

k11 k12 k13 k14   c11 c12 c13
k21 k22 k23 k24   c21 c22 c23
k31 k32 k33 k34   c31 c32 c33
k41 k42 k43 k44   c41 c42 c43
d11 d12 d13 d14   f11 f12 f13
d21 d22 d23 d24   f21 f22 f23
d31 d32 d33 d34   f31 f32 f33
220
Interleaving
Interleaving (systematically reordering) the
protected data stream means that errors are
distributed, i.e., more correctable. For example,
a burst of three errors (x x x) wipes out three
symbols of a single codeword:

k1 k2 k3 k4 c1 c2 c3 | k1 k2 k3 k4 c1 c2 c3 | k1 k2 k3 k4 c1 c2 c3
                            x  x  x

With a three-way interleave:

k1 k1 k1 k2 k2 k2 k3 k3 k3 k4 k4 k4 c1 c1 c1 c2 c2 c2 c3 c3 c3
                   x  x  x

the errors are now distributed, one per codeword
(and correctable):

k1 k2 k3 k4 c1 c2 c3 | k1 k2 k3 k4 c1 c2 c3 | k1 k2 k3 k4 c1 c2 c3
       x                     x                      x

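A minimal C sketch of a block interleaver (write by rows, read by columns); the three 7-symbol codewords and the burst position are illustrative:

    #include <stdio.h>
    #include <string.h>

    /* transpose a depth x (n/depth) block: row r, column c moves
       to interleaved position c*depth + r */
    void interleave(const char *in, char *out, int n, int depth)
    {
        int cols = n / depth;
        for (int r = 0; r < depth; r++)
            for (int c = 0; c < cols; c++)
                out[c * depth + r] = in[r * cols + c];
    }

    int main(void)
    {
        const char *stream = "ABCDEFGabcdefg1234567";   /* three codewords */
        char sent[22] = {0}, received[22] = {0}, out[22] = {0};

        interleave(stream, sent, 21, 3);       /* three-way interleave */

        memcpy(received, sent, 21);
        received[6] = received[7] = received[8] = 'X';   /* burst of 3 */

        /* de-interleave (the inverse transpose): the burst now lands
           as one error in each of the three codewords */
        interleave(received, out, 21, 7);
        printf("%s\n", out);   /* ABXDEFGabXdefg12X4567 */
        return 0;
    }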
221
More sophisticated codes
Sophisticated interleaving strategies are used in
most advanced digital communication systems. CDs
use product codes but with Reed-Solomon codes
(not Hamming). They also use interleaving.
Reed-Solomon (RS) codes work on groups (e.g.,
bytes) of inputs. For bytes, n < 2^8 (codewords
are a maximum of 255 bytes). There is only a very
small probability of crypto-errors, i.e.,
correcting good bytes by mistake. They can
correct (n-k)/2 bytes and detect (n-k) bytes in
error. For example, RS(122,106) can correct
(122-106)/2 = 8 bytes in error. Other systems use
layers of error correction: if a lower (simpler)
layer detects errors, the next (more powerful)
layer is inspected; if errors are still detected,
the final layer is interrogated. This method
increases the speed of decoding by only computing
check bytes when errors are suspected. For
example, the DAT drive uses 3 layers of error
control.
222
Uncorrected channel errors and compressed data
  • Efficiently compressed data bits represent a
    significantly greater number of bits from the
    original source; therefore the effects of any
    errors are amplified accordingly.