Title: Predictive Coding of Correlated Sources
1. Predictive Coding of Correlated Sources
- Ertem Tuncel
- University of California, Riverside
- September 29, 2004
2. Coding of Correlated Sources
- Sensors are preferably cheap, and therefore low-power.
- It is more efficient if they directly communicate with the center, rather than with each other.
- On the other hand, their measurements are not independent.
- Can we make use of that correlation?
3. Sample Waveforms
[Figure: sample waveforms of xn and yn plotted against n]
4. Underlying Model
[Block diagram: white noise drives an AR process with transfer function A(z); each sensor observes this process plus its own additive observation noise, producing xn and yn]
5. Example Used in This Work
- First-order Gauss-Markov process
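As a concrete reference for the setup on the previous slide, here is a minimal sketch of an AR(1) (first-order Gauss-Markov) process observed by two sensors through independent additive noise. The function name and parameter values (rho, obs_noise_std) are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def generate_observations(n_samples, rho=0.95, obs_noise_std=0.1, seed=0):
    """Generate a first-order Gauss-Markov process s and two noisy
    sensor observations x, y of it (illustrative parameter values)."""
    rng = np.random.default_rng(seed)
    # AR(1): s[n] = rho * s[n-1] + w[n], with white Gaussian driving noise
    w = np.sqrt(1 - rho**2) * rng.standard_normal(n_samples)
    s = np.zeros(n_samples)
    for n in range(1, n_samples):
        s[n] = rho * s[n - 1] + w[n]
    # Each sensor sees the same process plus independent observation noise
    x = s + obs_noise_std * rng.standard_normal(n_samples)
    y = s + obs_noise_std * rng.standard_normal(n_samples)
    return s, x, y
```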
6. The Separate Coding Regime
[Block diagram: xn and yn are compressed by separate encoders; a single decoder receives both encoded streams]
7. Distributed VQ
- Collect N samples and encode jointly.
[Figure: blocks of N consecutive samples of xn and yn]
8. Disadvantages
- Though optimal, distributed VQ is problematic in the following aspects:
  - Prevalence of local minima
    - Exacerbated by the required binning structure.
  - High design and codebook complexities
    - N · 2^(N(RX + RY)) scalars to be designed and stored.
  - Encoding complexity
    - Optimal encoders are not based on nearest-neighbor mapping.
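For a sense of scale (illustrative numbers, not from the talk): even with a modest block length N = 10 and rates RX = RY = 1 bit per sample, N · 2^(N(RX + RY)) = 10 · 2^20 ≈ 10^7 stored scalars, and at N = 20 the count already exceeds 2 · 10^13.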
9. Outline of the Rest of the Talk
- A family of scalar quantization schemes based on a simple binning technique.
- Predictive coding
  - How to optimally exploit inter- and intra-correlation.
- Experimental results
  - Suboptimality of the traditional prediction filter.
- Conclusion and future work
10. Scalar Quantization
11. A Simple Binning Technique
[Figure caption: Pair encoded as (0, 2)]
12. A Family of Codes
- Choose three parameters W, NX, and NY.
- Uniformly quantize X and Y using W·NX·NY intervals each. Enumerate the intervals from 0 to W·NX·NY − 1.
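The exact bin assignment is not recoverable from the extracted slides, but the following sketch shows one plausible reading of such a modulo-binning scheme: each source is quantized into W·NX·NY cells, X reports its cell index modulo W·NX, Y reports its cell index modulo W·NY, and the decoder resolves the remaining ambiguity by picking the pair of candidate cells that lie closest to each other, which works when X and Y are strongly correlated. All function names and parameter choices here are illustrative.

```python
import numpy as np

def encode(value, lo, hi, num_cells, modulus):
    """Uniformly quantize `value` into `num_cells` cells over [lo, hi]
    and transmit only the bin index (cell index modulo `modulus`)."""
    width = (hi - lo) / num_cells
    cell = int(np.clip((value - lo) // width, 0, num_cells - 1))
    return cell % modulus

def decode(bin_x, bin_y, lo, hi, num_cells, mod_x, mod_y):
    """Search the two cosets for the pair of cells whose reconstruction
    points are closest, and return those reconstruction points."""
    width = (hi - lo) / num_cells
    centers = lo + (np.arange(num_cells) + 0.5) * width
    cand_x = centers[np.arange(num_cells) % mod_x == bin_x]
    cand_y = centers[np.arange(num_cells) % mod_y == bin_y]
    dx, dy = np.meshgrid(cand_x, cand_y, indexing="ij")
    i, j = np.unravel_index(np.argmin(np.abs(dx - dy)), dx.shape)
    return cand_x[i], cand_y[j]
```

For example, with W = 4 and NX = NY = 2, each source uses 16 cells but transmits only log2(8) = 3 bits; the decoder can resolve the two candidates per coset whenever |x − y| stays well below half the coset spacing.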
13. Meaning of the Parameters
14. Choosing the Parameters
Fix NX and NY. W controls the rate-distortion point.
15. Choosing the Parameters
16. High-resolution R-D Performance
FACT: To minimize DX + DY for a fixed total bit budget, one must spend additional bits to split each cell further so that DX = DY.
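Assuming the usual high-resolution model in which each source's distortion decays exponentially in its rate (my notation; the constants c_X, c_Y are not from the slide), the FACT follows from a one-line Lagrangian argument:

```latex
% High-resolution model: distortion decays exponentially in rate.
% Minimizing D_X + D_Y under a total-rate constraint equalizes the distortions.
\begin{align*}
  D_X = c_X\, 2^{-2R_X}, \qquad D_Y = c_Y\, 2^{-2R_Y}, \qquad R_X + R_Y = R, \\
  \min_{R_X + R_Y = R}\; (D_X + D_Y)
  \;\Longrightarrow\;
  \frac{\partial D_X}{\partial R_X} = \frac{\partial D_Y}{\partial R_Y}
  \;\Longrightarrow\;
  D_X = D_Y .
\end{align*}
```

Since the derivative of each term is proportional to the term itself (∂D_i/∂R_i = −2 ln 2 · D_i), equating the two derivatives forces the distortions to be equal.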
17. An Optimality Statement
- Consider the hypothetical scenario where xn and yn are both available at any encoder.
- The encoder can then apply the KLT and de-correlate the two sources, followed by optimal bit allocation under the uniform quantization regime.
- Gives the same asymptotic performance!!!
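As a rough sketch of that hypothetical joint encoder (the covariance values and the function name are assumptions for illustration): the KLT is the eigendecomposition of the 2×2 covariance, and high-resolution bit allocation then splits the budget so that the per-coefficient distortions are equal.

```python
import numpy as np

def klt_bit_allocation(cov, total_rate):
    """Hypothetical joint encoder: KLT (eigendecomposition of the 2x2
    covariance) followed by high-resolution optimal bit allocation.
    Returns the transform matrix and the per-coefficient rates."""
    eigvals, eigvecs = np.linalg.eigh(cov)           # KLT basis
    # High-resolution allocation (equal distortion per coefficient):
    # R_i = R/2 + 0.5 * log2(lambda_i / geometric_mean(lambda))
    geo_mean = np.exp(np.mean(np.log(eigvals)))
    rates = total_rate / 2 + 0.5 * np.log2(eigvals / geo_mean)
    return eigvecs, rates

# Example: strongly correlated sources, 8 bits in total
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
T, rates = klt_bit_allocation(cov, total_rate=8.0)
```

Note that the closed-form allocation can go negative when an eigenvalue is very small; the high-resolution formula ignores that corner case.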
18. Predictive Coding
19. The Prediction Filter
- Two extremes:
  - No prediction. It can be shown that in our example process, this maximizes the correlation, and hence the efficiency of binning.
  - Traditional prediction, i.e., minimizing the prediction error variances for both xn and yn.
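To make the role of the prediction filter concrete, here is a generic closed-loop (DPCM-style) predictive quantizer sketch for a single sensor: an empty coefficient list gives the "no prediction" extreme, while one or two taps give first- or second-order prediction. The binning quantizer from the earlier slides would replace the plain uniform quantizer used here; the function name and step-size handling are illustrative, not the talk's exact scheme.

```python
import numpy as np

def predictive_quantize(signal, coeffs, step):
    """Closed-loop predictive quantizer sketch.
    `coeffs` are the prediction filter taps (most recent sample first),
    `step` is the uniform quantizer step size.
    Returns the reconstructed sequence seen by the decoder."""
    order = len(coeffs)
    recon = np.zeros(len(signal))
    for n in range(len(signal)):
        past = recon[max(0, n - order):n][::-1]         # most recent first
        pred = float(np.dot(coeffs[:len(past)], past))  # predict from reconstructions
        err = signal[n] - pred                          # prediction error
        q_err = step * np.round(err / step)             # uniform quantizer (binning omitted)
        recon[n] = pred + q_err                         # decoder forms the same value
    return recon
```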
20. Prediction Filter Design
- Instead of fixing the prediction filters to minimize […], which relies on the accuracy of […],
- we more accurately (and safely) choose for each W and r the maximum NX·NY so that […],
- and minimize […] over the filter coefficients.
21. Experimental Results
22. Experimental Results
23. Experimental Results
24. Discussion
- For […], i.e., when the SNR of the observed signals is 30 dB, the observed sequences are themselves almost first-order Gauss-Markov.
- Yet, the optimal prediction is not first-order. More specifically, we observe up to 1.15 dB improvement over the traditional prediction filter when we apply second-order prediction.
25. Effect of an Out-of-synch Codec
26. Conclusions and Future Work
- Preliminary experiments show the advantage of non-traditional prediction.
- Non-uniform quantizer design and a corresponding high-resolution analysis are the future direction.
  - How do we optimally compand the scalars separately?
  - Is uniform quantization optimal under entropy coding in this regime?