Error Concealment for Video Communication
Transcript and Presenter's Notes

1
Error Concealment for Video Communication
  • Presentation: Guo Li
  • 04/11/2002

2
  • Introduction
  • Various channel/network errors can result in
    damage to or loss of compressed video information
    during transmission or storage.
  • The distortion can range from momentary
    degradation to a completely unusable image or
    video signal. Hence it is necessary for the
    decoder to perform error concealment to minimize
    the observed distortion.
  • Owing to various constraints such as coding delay, implementation complexity, and the need for a good source model, a compressed video bitstream still possesses a certain degree of statistical redundancy. In addition, the human perceptual system can tolerate a limited degree of signal distortion. These factors can be exploited for error concealment at the decoder.
  • It is both necessary and possible to perform
    error concealment at the decoder when
    transmission error occurs.

3
  • Spatial domain
  • A lost packet may damage only part of a macroblock or several adjacent macroblocks; a damaged block is typically surrounded by multiple undamaged blocks in the same frame, so the decoder can rely on the undamaged neighbors to conceal the damaged block.
  • Temporal domain
  • The loss of a packet may damage a large portion of, or even the entire, picture; the previous video frame can then be used to conceal the damaged image part.
  • In practice, error concealment is typically implemented as a combination of these two kinds of techniques.

4
Coding Modes
  • A video frame is divided into macroblocks, each of which consists of several blocks.
  • Intra-frame coding: each block is transformed using a block DCT.
  • Inter-frame coding: for each macroblock, a motion vector is found that specifies its prediction macroblock in the previous frame.

5
Intra frame coding
6
Inter frame coding
7
  • Error Detection in Video Decoder
  • At the VLC decoding level, the VLC in use is not a complete code; once the decoder finds a codeword that is not in its decoding table, a transmission error is declared.
  • In the pixel domain, after decoding, the difference between adjacent blocks can be computed. When a difference exceeds a certain threshold, a transmission error has been detected (Mitchell and Tabatabai). A sketch of this check appears below.
  • Synchronization codewords are inserted at certain predefined spatial locations within a picture. In H.261 and H.263, they are located at the beginning of groups of blocks (GOBs).
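
As an illustration of the pixel-domain check, the minimal sketch below flags a block as possibly corrupted when the mean absolute difference along its boundary with the already-decoded block above it exceeds a threshold. The block size, threshold value, and function names are assumptions made for illustration, not part of the cited method.

```python
import numpy as np

def boundary_mismatch(frame, top, left, size=8):
    """Mean absolute difference between the top row of the block at
    (top, left) and the bottom row of the block directly above it."""
    above = frame[top - 1, left:left + size].astype(np.float64)
    first = frame[top, left:left + size].astype(np.float64)
    return np.mean(np.abs(first - above))

def looks_corrupted(frame, top, left, size=8, threshold=40.0):
    # Declare a (possible) transmission error when the boundary
    # discontinuity is unusually large; the threshold is a tunable guess.
    return top > 0 and boundary_mismatch(frame, top, left, size) > threshold
```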

8
  • Error Detection at Transport level
  • Packet-based network: add a header to each transport packet with a sequence number field, so a gap in sequence numbers reveals a lost packet (see the sketch below).
  • Circuit-switched network: use forward error control (FEC).
  • FEC frame structure
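
A minimal sketch of sequence-number-based loss detection: the receiver tracks the last sequence number seen and reports any gap. The 16-bit wrap-around and the assumption of in-order arrival are illustrative choices.

```python
def detect_losses(sequence_numbers, modulo=1 << 16):
    """Return the sequence numbers of packets that appear to be missing,
    assuming packets arrive in order and numbers wrap at `modulo`."""
    missing = []
    prev = None
    for seq in sequence_numbers:
        if prev is not None:
            expected = (prev + 1) % modulo
            while expected != seq:
                missing.append(expected)
                expected = (expected + 1) % modulo
        prev = seq
    return missing

# Example: packets 5 and 6 were lost in transit.
print(detect_losses([3, 4, 7, 8]))  # -> [5, 6]
```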

9
(No Transcript)
10
  • Natural image and video signals contain significant spatial redundancy.
  • The interpixel difference is defined as the average of the absolute differences between a pixel and its four neighboring pixels (see the sketch after this list).
  • Figure 6.3: histogram of the interpixel difference for a natural scene image.
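
The definition above can be computed directly. The sketch below, written with assumed NumPy conventions, evaluates the interpixel difference for every interior pixel of a grayscale image; a histogram of these values is what the figure shows.

```python
import numpy as np

def interpixel_difference(img):
    """Average absolute difference between each interior pixel and its
    four neighbors (up, down, left, right)."""
    x = img.astype(np.float64)
    c = x[1:-1, 1:-1]                      # interior pixels
    diffs = (np.abs(c - x[:-2, 1:-1]) +    # up
             np.abs(c - x[2:, 1:-1]) +     # down
             np.abs(c - x[1:-1, :-2]) +    # left
             np.abs(c - x[1:-1, 2:]))      # right
    return diffs / 4.0
```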

11
Spatial domain interpolation
  • Interpolate the pixel values within a damaged macroblock from its four 1-pixel-wide boundaries (Aign and Fazel).
  • First variation: each pixel within a block is interpolated from the two pixels in its two nearest boundaries outside the damaged macroblock.
  • Second variation: the whole macroblock is treated as one entity, and each pixel is interpolated from all four macroblock boundaries (a sketch of this variation follows).
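
The sketch below illustrates the second variation, assuming a 16x16 macroblock whose four 1-pixel-wide external boundaries are undamaged. Each pixel is a weighted average of the four boundary pixels in its row and column; the inverse-distance weighting used here is one plausible choice, not necessarily the exact weights of the cited scheme.

```python
import numpy as np

def conceal_macroblock(frame, top, left, size=16):
    """Fill the damaged size x size macroblock at (top, left) by
    interpolating each pixel from the four 1-pixel-wide external
    boundaries, weighting each boundary pixel by its inverse distance."""
    x = frame.astype(np.float64)
    up   = x[top - 1,        left:left + size]   # row above the block
    down = x[top + size,     left:left + size]   # row below the block
    lft  = x[top:top + size, left - 1]           # column to the left
    rgt  = x[top:top + size, left + size]        # column to the right
    out = np.empty((size, size))
    for i in range(size):
        for j in range(size):
            du, dd, dl, dr = i + 1, size - i, j + 1, size - j
            w = np.array([1 / du, 1 / dd, 1 / dl, 1 / dr])
            v = np.array([up[j], down[j], lft[i], rgt[i]])
            out[i, j] = np.dot(w, v) / w.sum()
    frame[top:top + size, left:left + size] = out
    return frame
```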

12
(No Transcript)
13
(No Transcript)
14
Maximally Smooth Recovery
  • Open question: how to reconstruct a DCT-coded image when some of the DCT coefficients are lost?
  • An effective approach is hierarchical or layered transmission, which segments the transform coefficients into a few bands and transmits them with different priorities. For example, in a two-layer system, one band consists of the DC and certain low-frequency coefficients and is given higher transmission priority; the other band encompasses the remaining coefficients and can be discarded in case of channel congestion.
  • Advantage: even if the fine details of an image are lost, a coarse-resolution rendition is always guaranteed.
  • Potential problem: this is very costly, since an error-free transport channel has to be allocated to the most important subsignal.

15
Maximally Smooth Recovery
  • A transmission loss in one band will then result in the loss of only a partial set of coefficients in a damaged block.
  • Proposed method: minimize the intersample differences within each damaged block and between adjacent blocks. The boundary information is propagated into the damaged blocks so that the transitions along block boundaries are as smooth as possible.

16
Maximally Smooth Reconstruction Criterion
  • Because the luminance level in most images does not change abruptly, we can require that the samples in a recovered block be smoothly connected with each other and with the samples in adjacent blocks. This idea has been used successfully for solving the surface reconstruction problem.
  • Using a similar approach, it is possible to recover a damaged block such that, among all the blocks consistent with the same received transform coefficients, the reconstructed one is the smoothest in terms of a chosen smoothness measure. A complete image is obtained by first reconstructing all undamaged blocks and then each damaged block individually. Reconstructing the undamaged blocks in advance provides the necessary boundary information for the recovery of the damaged ones.

17
Maximally Smooth Reconstruction Criterion
  • Let B represent a block composed of N samples and f(m,n) the original value of sample (m,n) in B; m is the row index and n the column index.
  • An arbitrary unitary transform over B is given by the analysis/synthesis pair

    a_k = Σ_{(m,n) ∈ B} f(m,n) t_k(m,n),   k = 0, 1, ..., N-1,   (1)

    f(m,n) = Σ_{k=0}^{N-1} a_k t_k(m,n),   (2)

  where t_k(m,n) is the k-th transform basis function and a_k the k-th transform coefficient.

18
Maximally Smooth Reconstruction Criterion
  • An appropriate smoothness measure is a weighted sum of squared differences between each sample and its four nearest neighbors:

    E(f) = Σ_{(m,n)} [ w_1(m,n) (f(m,n) - f(m-1,n))^2 + w_2(m,n) (f(m,n) - f(m+1,n))^2
                     + w_3(m,n) (f(m,n) - f(m,n-1))^2 + w_4(m,n) (f(m,n) - f(m,n+1))^2 ]   (3)

19
  • The constants w_i(m,n) are called smoothing weights; each is 1 or 0 depending on whether or not the variation at the point (m,n) in the corresponding direction should be suppressed.
  • Smoothing constraints should be imposed between every two adjacent samples across the border of the block, in order to propagate the boundary information into the block.
  • The measure function therefore involves the samples in the one-pixel-wide boundary outside the block, referred to as external boundary samples:

    f(m,n) = b(m,n) for (m,n) not in B   (4)

  • The reconstruction task is to find a_k for all lost k such that, when combined with (2), they minimize the measure in (3) among all solutions that satisfy the boundary condition defined by (4).

20
When the DC coefficient alone is lost, the smoothing constraint needs to be enforced only between the boundary samples, as illustrated in (a); when additional coefficients are lost, it is necessary to suppress the variation at each point in the direction toward its nearest boundary, as shown in (b).
21
  • Optimal Solution under the Smoothing Criterion
  • Let f represent the vector composed of the original sample values f(m,n) in B, and a the vector composed of the transform coefficients a_k, k = 0, 1, ..., N-1. The unitary transform defined in (1) then becomes

    f = T a   (5)

  • Here T = [t_0, t_1, ..., t_{N-1}], in which t_k is the k-th transform basis vector consisting of the values t_k(m,n).
22
Let a_r represent the subvector containing the correctly received coefficients and a_l the subvector containing the estimates of the lost coefficients. Let T_r and T_l be the submatrices composed of the columns of T corresponding to the entries of a_r and a_l, respectively. Then

    f = T_r a_r + T_l a_l   (6)

The measure function (3) can now be written in matrix form as a quadratic form in f and the external boundary vector b (7). The matrices appearing in (7) depend on the weights of the smoothing constraints in the four directions.
23
  • (7) is a quadratic function of a_l, since f is linearly related to a_l according to (6). The minimizing point a_opt is obtained by setting the derivative of (7) with respect to a_l to zero (8). If the smoothing constraints are chosen properly, the matrix of the resulting linear system is guaranteed to be non-singular, so a_opt has a unique closed-form solution (9).
24
The proposed procedure is effective only when the lost coefficients are the DC and some low-frequency coefficients. Substituting a_opt from (9) into (6), we obtain the maximally smooth solution of the damaged block:

    f = C b + D a_r   (10)

The matrices C and D can be considered as interpolation filters for estimating f from the boundary values b and the received coefficients a_r.
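
The sketch below illustrates the recovery numerically rather than through the closed-form filters C and D: it builds an orthonormal 2-D DCT basis, imposes unit smoothing weights between all horizontally and vertically adjacent samples and across the external boundary, and solves the resulting least squares problem for the lost coefficients. The uniform weights, block size, and function names are illustrative assumptions, not the exact choices of the cited method.

```python
import numpy as np

def dct_basis(N):
    """Columns of the returned (N*N, N*N) matrix are flattened 2-D
    orthonormal DCT basis images, so f = T @ a for a flattened block f."""
    C = np.zeros((N, N))
    for u in range(N):
        alpha = np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)
        C[u] = alpha * np.cos((2 * np.arange(N) + 1) * u * np.pi / (2 * N))
    T = np.zeros((N * N, N * N))
    for u in range(N):
        for v in range(N):
            T[:, u * N + v] = np.outer(C[u], C[v]).ravel()
    return T

def smooth_recover(a_received, lost, boundary, N=8):
    """Recover the lost coefficients of an N x N block so the reconstructed
    block is maximally smooth with respect to its 1-pixel external boundary.

    a_received : length N*N coefficient vector (lost entries ignored)
    lost       : indices of the lost coefficients
    boundary   : dict with 'top', 'bottom', 'left', 'right' arrays, length N
    """
    T = dct_basis(N)
    idx = lambda m, n: m * N + n
    rows, rhs = [], []

    def add_pair(p, q):            # internal smoothing constraint f_p = f_q
        r = np.zeros(N * N); r[p], r[q] = 1.0, -1.0
        rows.append(r); rhs.append(0.0)

    def add_boundary(p, value):    # boundary constraint f_p = b
        r = np.zeros(N * N); r[p] = 1.0
        rows.append(r); rhs.append(value)

    for m in range(N):
        for n in range(N):
            if n + 1 < N: add_pair(idx(m, n), idx(m, n + 1))
            if m + 1 < N: add_pair(idx(m, n), idx(m + 1, n))
    for n in range(N):
        add_boundary(idx(0, n),     boundary['top'][n])
        add_boundary(idx(N - 1, n), boundary['bottom'][n])
    for m in range(N):
        add_boundary(idx(m, 0),     boundary['left'][m])
        add_boundary(idx(m, N - 1), boundary['right'][m])

    M, r = np.array(rows), np.array(rhs)
    lost = list(lost)
    kept = [k for k in range(N * N) if k not in set(lost)]
    a = np.array(a_received, dtype=np.float64)
    # Minimize ||M (T_l a_l + T_r a_r) - r||^2 over the lost coefficients a_l.
    A = M @ T[:, lost]
    c = r - M @ T[:, kept] @ a[kept]
    a_l, *_ = np.linalg.lstsq(A, c, rcond=None)
    a[lost] = a_l
    return (T @ a).reshape(N, N)   # maximally smooth reconstructed block
```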
25
Concealment of Damaged Block Transform Coded Images Using Projections onto Convex Sets
  • Utilize spatially correlated information more thoroughly by performing interpolation based on a large local neighborhood of surrounding pels, and restore edges that are continuous with those present in the neighborhood. Ramamurthi and Gersho have demonstrated that edges play an important role in the subjective quality of images.

26
Proposed Method
  • Space-variant restoration is a method of adaptively filtering a degraded image to suit the local image characteristics.
  • The first step is to estimate whether the missing block to be restored belongs to a monotone area or an edge area of the image.
  • For an edge portion of the image, the orientation of the edge should be estimated; the surrounding correctly decoded blocks are used as an aid in determining the edge angle.

27
Block Classifier and Edge Orientation Detector
  • Restore lost image blocks by extending edges
    present in the surrounding neighborhood.
  • If no edges are present in the surrounding
    neighborhood, the lost block is restored by a
    smoothing process.
  • Gradient measures in the spatial domain can be
    used for estimating edge orientation.

28
The Sobel gradient components at pixel x(i,j) are

    G_x = x(i+1,j-1) - x(i-1,j-1) + 2x(i+1,j) - 2x(i-1,j) + x(i+1,j+1) - x(i-1,j+1)
    G_y = x(i-1,j+1) - x(i-1,j-1) + 2x(i,j+1) - 2x(i,j-1) + x(i+1,j+1) - x(i+1,j-1)

corresponding to the Sobel mask operators

    S_x = | -1  0  1 |        S_y = |  1  2  1 |
          | -2  0  2 |              |  0  0  0 |
          | -1  0  1 |              | -1 -2 -1 |

The magnitude and angular direction of the gradient at coordinate (i,j) are

    G = sqrt(G_x^2 + G_y^2),   θ = arctan(G_y / G_x)

The gradient measure is computed for every (i,j) coordinate in the neighborhood surrounding the missing macroblock. The gradient angle is rounded to the nearest multiple of 22.5°, corresponding to one of 8 directional categories.
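
A small sketch of this gradient computation, assuming a grayscale NumPy image and an interior pixel (i, j); the row/column convention of the masks is an assumption made for illustration.

```python
import numpy as np

# Sobel masks, written to match the formulas above.
SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SY = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]], dtype=float)

def sobel_gradient(img, i, j):
    """Gradient magnitude and angle (degrees) at pixel (i, j), computed
    by correlating the 3x3 neighborhood with the Sobel masks."""
    patch = img[i - 1:i + 2, j - 1:j + 2].astype(float)
    gx = float(np.sum(SX * patch))
    gy = float(np.sum(SY * patch))
    return np.hypot(gx, gy), np.degrees(np.arctan2(gy, gx))

def direction_category(angle_deg):
    """Round the angle to the nearest multiple of 22.5 degrees,
    giving one of 8 directional categories (0..7)."""
    return int(round(angle_deg / 22.5)) % 8
```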
29
Each direction has a counter. A voting scheme is used: the counter of the selected category is incremented by the magnitude of the gradient if a line drawn through the pixel at (i,j) with orientation θ passes through the missing block.
30
Pseudocode

    for all (i,j) pixel coordinates in neighbourhood N:
        compute G and θ from equation (7)
        k = round(θ / 22.5°) mod 8
        if the line drawn through (i,j) with angle θ intersects M:
            D_k = D_k + G                                          (8)
    k_max = argmax_k(D_k)                                          (9)
    if D_{k_max} < T:
        M is a monotone area
    else:
        M is an edge area with orientation given by index k_max    (10)

31
Projections onto Convex Sets
  • POCS has been applied to various image restoration problems where a priori information can be used to constrain the size of the feasible solution set.
  • Typical video images have some nice properties:
  • Smoothness
  • Edge continuity
  • Consistency with known values
  • Formulate these properties as convex constraints, and make use of the following constraint sets and projection operators.

32
The class of signals that takes on a prescribed set of known values
  • C1 = { x : x_i = k_i for every index i at which the value is known }, where x_i is the i-th component of the vector x and the k_i are known constants. The projection operator P1 onto the convex set C1 resets the known components and leaves the others unchanged:

    [P1 x]_i = k_i if the value at index i is known; [P1 x]_i = x_i otherwise.

33
Class of signals that takes on a prescribed set
of transform coefficients
  • T is a linear transform operator, TxI is the
    ith transform coefficient, Zi are known
    constants. Projection P2 onto convex set C2 is
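
A minimal sketch of these two projections, assuming a 2-D orthonormal DCT as the transform T and boolean masks marking the known pixels and received coefficients; the function names are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def project_known_values(x, known_mask, known_values):
    """P1: reset the components whose values are known; leave the rest."""
    y = x.copy()
    y[known_mask] = known_values[known_mask]
    return y

def project_received_coeffs(x, received_mask, received_coeffs):
    """P2: in the transform domain, restore the received coefficients and
    keep the others, then return to the pixel domain."""
    coeffs = dctn(x, norm='ortho')
    coeffs[received_mask] = received_coeffs[received_mask]
    return idctn(coeffs, norm='ortho')
```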

34
P2,smooth acts as a lowpass filter; P2,edge acts as a bandpass directional filter.

35
  • P2,smooth acts as a lowpass filter that sets the high-frequency coefficients located outside the bandwidth radius specified by R_th to zero and leaves the low-frequency coefficients unchanged. Fig. 5(a) illustrates the filter corresponding to this projection. The shaded regions denote the passband of the filter, with unity gain, and the unshaded regions denote the stopband of the filter, with zero gain.
  • In an edge area of the image, the spectrum has a bandpass characteristic in which the energy is localized in the transform coefficients that lie in a direction orthogonal to the edge; the other coefficients are very small. Projection P2 then becomes

    [T P2,edge x]_{m,n} = 0              if |m - n tan(θ)| > B_th,
    [T P2,edge x]_{m,n} = [T x]_{m,n}    otherwise.   (19)

36
The transform coefficients are filtered by the adaptive filter according to the type of the large block (monotone or edge area).
  • The filtered coefficients are used to reconstruct the image with the inverse transform, and the portion of the reconstructed image at the location of the damaged part is fed back to the input for the next iteration.
  • The signal to be restored, f, can be found through the iteration

    f_{i+1} = P1 P2 f_i
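
Putting the pieces together, the loop below is a sketch of the iterative restoration. It reuses the projection functions from the earlier sketch; the iteration count is an arbitrary illustrative choice, and applying P2 before P1 follows the update rule above.

```python
def pocs_restore(x0, known_mask, known_values, received_mask,
                 received_coeffs, iterations=20):
    """Iterate f_{i+1} = P1(P2(f_i)) starting from an initial guess x0
    (for example, the damaged block filled with the mean of its
    surroundings)."""
    f = x0.astype(float).copy()
    for _ in range(iterations):
        f = project_received_coeffs(f, received_mask, received_coeffs)
        f = project_known_values(f, known_mask, known_values)
    return f
```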

37
Temporal Estimation of Blocks with Missing Motion Vectors
  • Motion vectors are estimated over quite large macroblocks (16x16 for most video coding standards and algorithms), so the MV of an adjacent macroblock may not by itself produce a reliable estimate. Consider four 8x8 blocks u1, u2, l1, l2 in the macroblocks adjacent to the damaged macroblock.

38
  • First, for each of these blocks, a motion vector is estimated and its corresponding 8x8 block in the previous frame is determined. The motion vector is found by means of an exhaustive search in an area the size of a macroblock around the center of its motion-compensated block in the previous frame.
  • For each corresponding block, a surrounding macroblock is determined, labeled U1, U2, L1, L2. The final estimated macroblock is a weighted average of these macroblocks (see the sketch below):

    P = w_u1 U1 + w_u2 U2 + w_l1 L1 + w_l2 L2

  • where w_u1, w_u2, w_l1, w_l2 are chosen so that the sum of the squares of the interpixel differences along the left and upper boundaries of the macroblock is minimized.
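
The weight selection can be posed as a small linear least squares problem: for each candidate macroblock, stack the pixels that end up adjacent to the known left and upper boundary, and fit the weights to those known neighbors. The sketch below assumes unconstrained weights and 16x16 macroblocks; the cited method may normalize or constrain the weights differently.

```python
import numpy as np

def blend_weights(candidates, left_col, top_row):
    """Least-squares weights for blending candidate macroblocks so that the
    blend's left column and top row best match the known neighboring pixels
    left_col and top_row (each length 16)."""
    A = np.stack([np.concatenate([c[:, 0], c[0, :]]) for c in candidates],
                 axis=1).astype(float)
    b = np.concatenate([left_col, top_row]).astype(float)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def conceal_with_candidates(candidates, left_col, top_row):
    """Weighted average P = sum_k w_k * candidate_k of the macroblocks
    U1, U2, L1, L2 fetched from the previous frame."""
    w = blend_weights(candidates, left_col, top_row)
    return sum(wk * c.astype(float) for wk, c in zip(w, candidates))
```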

39
Using motion field interpolation for error
concealment
  • Motion field interpolation (MFI) uses a different motion vector for each pixel in order to model nontranslational motion such as rotation and scaling. The MV for each pixel is interpolated from the MVs at several control points, and motion compensation is applied to each pixel separately. MVs from adjacent blocks are used in error concealment. A bilinear MFI is performed as follows,

40
  • where V_L, V_R, V_A, and V_B are the MVs of the blocks to the left of, to the right of, above, and below the current block, and x_n and y_n are the normalized horizontal and vertical positions of the pixel within the block.
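
As an illustration of bilinear MFI, the sketch below interpolates a per-pixel motion vector from the four neighboring-block MVs, averaging a horizontal interpolation between V_L and V_R with a vertical interpolation between V_A and V_B. The exact weighting used in the cited scheme may differ; this is one plausible bilinear form.

```python
import numpy as np

def bilinear_mfi(v_left, v_right, v_above, v_below, block_size=16):
    """Per-pixel motion field for a block, bilinearly interpolated from the
    MVs of the blocks to the left, right, above, and below. Each v_* is a
    length-2 array (dy, dx); returns an array of shape (block, block, 2)."""
    field = np.zeros((block_size, block_size, 2))
    for y in range(block_size):
        for x in range(block_size):
            xn = (x + 0.5) / block_size          # normalized horizontal position
            yn = (y + 0.5) / block_size          # normalized vertical position
            horiz = (1.0 - xn) * np.asarray(v_left) + xn * np.asarray(v_right)
            vert = (1.0 - yn) * np.asarray(v_above) + yn * np.asarray(v_below)
            field[y, x] = 0.5 * (horiz + vert)   # average the two interpolations
    return field
```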

41
Multiframe-Based Error Concealment
  • The use of motion compensation causes errors produced during the concealment of a block to propagate into the following frames.
  • Error concealment can be improved by employing information from multiple frames.
  • An overlapping block transform can also be used, where 9x9 blocks are used for the DCT, with one pixel of overlap along the edges.

42
Recovery of Motion Vectors and Coding Modes
  • If the coding mode and MVs are damaged, they too can be interpolated from those in spatially and temporally adjacent blocks.
  • The coding mode can take 3 different values:
  • Intra
  • Skipped
  • Nonzero motion vector
  • Simplest way to estimate the coding mode: use intra mode and estimate the DCT coefficients of the damaged block.
  • A better way: interpolate the mode from adjacent blocks.

43
  • Estimating lost MVs
  • Simply setting the MVs to zero works well for video sequences with relatively small motion.
  • Using the MVs of the corresponding blocks in the previous frame.
  • Using the average of the MVs from spatially adjacent blocks.
  • Using the median of the MVs from spatially adjacent blocks (a sketch of the last two options follows).
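
A small sketch of the last two options, assuming the available neighboring MVs are given as (dy, dx) pairs; the component-wise median is one common interpretation of a "median MV".

```python
import numpy as np

def average_mv(neighbor_mvs):
    """Average of the spatially adjacent motion vectors."""
    return np.mean(np.asarray(neighbor_mvs, dtype=float), axis=0)

def median_mv(neighbor_mvs):
    """Component-wise median of the spatially adjacent motion vectors."""
    return np.median(np.asarray(neighbor_mvs, dtype=float), axis=0)

# Example: three neighboring MVs, one of which is an outlier.
print(median_mv([(0, 1), (0, 2), (8, -7)]))  # -> [0. 1.]
```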

44
Error recovery with out-of-order decoding
  • Decode video packets that arrive at the receiver out of order, and merge the newly decoded information with the previously decoded but corrupted video sequence, so that lossless recovery is achieved.
  • The idea is illustrated with a simple one-dimensional signal in Figure 6.14.

45
(No Transcript)
46
Conclusion
  • All error concealment techniques recover lost information by using a priori knowledge about image/video signals, primarily the temporal and spatial smoothness properties.
  • In maximally smooth recovery, an isotropic smoothness measure is used everywhere; this is not appropriate near image edges.
  • To overcome this, POCS methods are introduced, which are more computationally intensive because of the many iterations required.
  • The interpolation method can be considered a special case of the optimization method if only boundary pixels in adjacent blocks are used and if the interpolation coefficients are derived by maximizing the smoothness measure.
  • Because of the heavy use of motion-compensated prediction, the coding mode and motion vectors also play a very important role.
  • Simple interpolation and out-of-order decoding are applied to estimate them.

47
  • The end
  • Thanks for your time!