Mitigating Near-field Interference in Laptop Embedded Wireless Transceivers
1
Mitigating Near-field Interference in Laptop
Embedded Wireless Transceivers
Marcel Nassar (1), Kapil Gulati (1), Arvind K. Sujeeth (1), Navid Aghasadeghi (1), Brian L. Evans (1), Keith R. Tinsley (2)
(1) The University of Texas at Austin, Austin, Texas, USA
(2) System Technology Lab, Intel, Hillsboro, Oregon, USA
2008 IEEE International Conference on Acoustics, Speech, and Signal Processing, 3 April 2008
2
  • Problem Definition
  • Within computing platforms, wireless transceivers experience radio frequency interference (RFI) from clocks/buses
  • PCI Express buses
  • LCD clock harmonics
  • Approach
  • Statistical modelling of RFI
  • Filtering/detection based on estimation of model parameters
  • Past Research
  • Potential reduction in bit error rates by a factor of 10 or more [Spaulding & Middleton, 1977]

Note: we will be using noise and interference interchangeably.
3
  • Computer Platform Noise Modelling
  • RFI is a combination of independent radiation events
  • Has predominantly non-Gaussian statistics
  • Statistical-Physical Models (Middleton Class A, B, C)
  • Independent of physical conditions (universal)
  • Sum of independent Gaussian and Poisson interference
  • Models electromagnetic interference
  • Alpha-Stable Processes
  • Models statistical properties of impulsive noise
  • Approximation for Middleton Class B (broadband) noise

4
Proposed Contributions
5
Middleton Class A Model
Probability density function for A = 0.15, Γ = 0.8 (plot omitted)
Power spectral density for A = 0.15, Γ = 0.8 (plot omitted)
6
Symmetric Alpha Stable Model
Power spectral density for α = 1.5, δ = 0, γ = 10 (plot omitted)
Probability density function for α = 1.5, δ = 0, γ = 10 (plot omitted)
Parameter | Description | Range
α | Characteristic exponent: amount of impulsiveness | 0 < α ≤ 2
δ | Localization: analogous to the mean | real-valued
γ | Dispersion: analogous to the variance | γ > 0
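As an illustration of the model, the following minimal Python sketch generates symmetric alpha stable noise samples with these three parameters via the Chambers-Mallows-Stuck method; it assumes the common convention that the characteristic function is exp(jδω - γ|ω|^α), so the standard variate is scaled by γ^(1/α).

import numpy as np

def sas_samples(alpha, delta, gamma, n, seed=None):
    # Symmetric alpha stable (beta = 0) samples via the Chambers-Mallows-Stuck method.
    # Convention assumed: dispersion gamma enters as exp(j*delta*w - gamma*|w|**alpha).
    rng = np.random.default_rng(seed)
    u = rng.uniform(-np.pi / 2, np.pi / 2, n)   # U ~ Uniform(-pi/2, pi/2)
    w = rng.exponential(1.0, n)                 # W ~ Exp(1)
    if np.isclose(alpha, 1.0):
        x = np.tan(u)                           # alpha = 1 reduces to Cauchy
    else:
        x = (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
             * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))
    return delta + gamma ** (1.0 / alpha) * x

# e.g., noise = sas_samples(alpha=1.5, delta=0.0, gamma=10.0, n=10000)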
7
Estimation of Noise Model Parameters
  • For Middleton Class A Model
  • Expectation maximization (EM) [Zabin & Poor, 1991]
  • Finds roots of second- and fourth-order polynomials at each iteration
  • Advantage: small sample size required (~1,000 samples)
  • Disadvantage: iterative algorithm, computationally intensive
  • For Symmetric Alpha Stable Model
  • Based on extreme order statistics [Tsihrintzis & Nikias, 1996]
  • Parameter estimators require computations similar to mean and standard deviation
  • Advantage: fast, computationally efficient (non-iterative)
  • Disadvantage: requires a large set of data samples (~10,000 samples)

8
Results of Measured RFI Data for Broadband Noise
Data set of 80,000 samples collected using a 20 GSPS scope
Estimated parameters:
Symmetric alpha stable model: localization δ = 0.0043, characteristic exponent α = 1.2105, dispersion γ = 0.2413
Middleton Class A model: overlap index A = 0.1036, Gaussian factor Γ = 0.7763
Gaussian model: mean µ = 0, variance σ² = 1
9
Filtering and Detection System Model
(Block diagram omitted: alternate adaptive model, impulsive noise)
  • Signal Model
  • Multiple samples/copies of the received signal are available
  • N-path diversity [Miller, 1972]
  • Oversampling by N [Middleton, 1977]
  • Using multiple samples increases gains vs. the Gaussian case because impulses are isolated events over the symbol period

Decision rule: N samples per symbol (figure omitted)
10
Filtering and Detection
We assume perfect estimation of noise model
parameters
  • Class A Noise
  • Correlation receiver (linear)
  • Wiener filtering (linear)
  • Coherent detection using MAP (maximum a posteriori probability) detector [Spaulding & Middleton, 1977]
  • Small signal approximation to MAP detector [Spaulding & Middleton, 1977]
  • Alpha Stable Noise
  • Correlation receiver (linear)
  • Myriad filtering [Gonzalez & Arce, 2001]
  • MAP approximation
  • Hole puncher (blanking)

11
Class A Detection - Results
Pulse shape: raised cosine, 10 samples per symbol, 10 symbols per pulse
Channel: A = 0.35, Γ = 0.5 × 10^-3, memoryless
Method | Complexity | Detection performance
Correlation receiver | Low | Low
Wiener filtering | Medium | Low
MAP approximation | Medium | High
MAP | High | High
12
Alpha Stable Results
Method | Complexity | Detection performance
Hole punching | Low | Medium
Selection myriad | Low | Medium
MAP approximation | Medium | High
Optimal myriad | High | Medium
13
Conclusion
Class A noise:
MAP | High performance | High complexity
MAP approximation | High performance | Medium complexity
Correlation receiver | Low performance | Low complexity
Wiener filtering | Low performance | Medium complexity
Alpha stable noise:
MAP approximation | High performance | Medium complexity
Optimal myriad | Medium performance | High complexity
Selection myriad | Medium performance | Low complexity
Hole puncher | Medium performance | Low complexity
14
Thank you, Questions?
15
References
[1] D. Middleton, "Non-Gaussian noise models in signal processing for telecommunications: New methods and results for Class A and Class B noise models," IEEE Trans. Info. Theory, vol. 45, no. 4, pp. 1129-1149, May 1999.
[2] S. M. Zabin and H. V. Poor, "Efficient estimation of Class A noise parameters via the EM (expectation-maximization) algorithm," IEEE Trans. Info. Theory, vol. 37, no. 1, pp. 60-72, Jan. 1991.
[3] G. A. Tsihrintzis and C. L. Nikias, "Fast estimation of the parameters of alpha-stable impulsive interference," IEEE Trans. Signal Proc., vol. 44, no. 6, pp. 1492-1503, Jun. 1996.
[4] A. Spaulding and D. Middleton, "Optimum reception in an impulsive interference environment - Part I: Coherent detection," IEEE Trans. Comm., vol. 25, no. 9, Sep. 1977.
[5] A. Spaulding and D. Middleton, "Optimum reception in an impulsive interference environment - Part II: Incoherent detection," IEEE Trans. Comm., vol. 25, no. 9, Sep. 1977.
[6] B. Widrow et al., "Adaptive noise cancelling: Principles and applications," Proc. IEEE, vol. 63, no. 12, Dec. 1975.
[7] J. G. Gonzalez and G. R. Arce, "Optimality of the myriad filter in practical impulsive-noise environments," IEEE Trans. Signal Proc., vol. 49, no. 2, Feb. 2001.
16
References (cont.)
[8] S. Ambike, J. Ilow, and D. Hatzinakos, "Detection for binary transmission in a mixture of Gaussian noise and impulsive noise modeled as an alpha-stable process," IEEE Signal Processing Letters, vol. 1, pp. 55-57, Mar. 1994.
[9] J. G. Gonzalez and G. R. Arce, "Optimality of the myriad filter in practical impulsive-noise environments," IEEE Trans. Signal Proc., vol. 49, no. 2, pp. 438-441, Feb. 2001.
[10] E. Kuruoglu, "Signal Processing in Alpha Stable Environments: A Least Lp Approach," Ph.D. dissertation, University of Cambridge, 1998.
[11] J. Haring and A. J. Han Vinck, "Iterative decoding of codes over complex numbers for impulsive noise channels," IEEE Trans. Info. Theory, vol. 49, no. 5, May 2003.
[12] G. Beenker, T. Claasen, and P. van Gerwen, "Design of smearing filters for data transmission systems," IEEE Trans. Comm., vol. 33, Sep. 1985.
[13] G. R. Lang, "Rotational transformation of signals," IEEE Trans. Info. Theory, vol. IT-9, pp. 191-198, Jul. 1963.
[14] P. Gao and C. Tepedelenlioglu, "Space-time coding over MIMO channels with impulsive noise," IEEE Trans. Wireless Comm., vol. 6, no. 1, pp. 220-229, Jan. 2007.
[15] K. F. McDonald and R. S. Blum, "A physically-based impulsive noise model for array observations," Proc. IEEE Asilomar Conference on Signals, Systems and Computers, vol. 1, Nov. 2-5, 1997.
17
BACKUP SLIDES
18
Common Spectral Occupancy
19
Potential Impact
  • Improve communication performance for wireless
    data communication subsystems embedded in PCs and
    laptops
  • Achieve higher bit rates for the same bit error
    rate and range, and lower bit error rates for the
    same bit rate and range
  • Extend range from wireless data communication
    subsystems to wireless access point
  • Extend results to multiple RF sources on a single chip

20
Accuracy of Middleton Noise Models
Plots (omitted) of exceedance probability P(ε > ε0) vs. ε0 (dB above ε_rms), with magnetic field strength H in dB relative to 1 µA/m rms, showing the percentage of time the ordinate is exceeded, for:
Soviet high-power over-the-horizon radar interference [Middleton, 1999]
Fluorescent lights in mine shop office interference [Middleton, 1999]
21
Middleton Class A, B, C Models
Class A: narrowband interference (coherent reception); uniquely represented by two parameters
Class B: broadband interference (incoherent reception); uniquely represented by six parameters
Class C: sum of Class A and Class B (approximated as Class B)
22
Middleton Class A Model
Probability density function (pdf)
Parameter | Description | Range
A | Overlap index: product of the average number of emissions per second and the mean duration of a typical emission | A ∈ [10^-2, 1]
Γ | Gaussian factor: ratio of the second-order moment of the Gaussian component to that of the non-Gaussian component | Γ ∈ [10^-6, 1]
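For reference, the Class A pdf in its standard canonical form is the Poisson-weighted Gaussian mixture

p_Z(z) = e^{-A} \sum_{m=0}^{\infty} \frac{A^m}{m!} \, \frac{1}{\sqrt{2\pi\sigma_m^2}} \exp\!\left(-\frac{z^2}{2\sigma_m^2}\right), \qquad \sigma_m^2 = \sigma^2 \, \frac{m/A + \Gamma}{1 + \Gamma},

where σ² is the total noise power.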
23
Symmetric Alpha Stable Model
Characteristic function (see below)
Parameters:
Characteristic exponent α: indicative of the thickness of the tail (impulsiveness)
Localization δ (analogous to the mean)
Dispersion γ (analogous to the variance)
No closed-form expression for the pdf except for α = 1 (Cauchy), α = 2 (Gaussian), α = 1/2 (Lévy), and α = 0 (not very useful)
Could approximate the pdf using the inverse transform of a power series expansion of the characteristic function
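For reference, the symmetric alpha stable characteristic function is commonly written as

\phi(\omega) = \exp\!\left( j\delta\omega - \gamma\,|\omega|^{\alpha} \right),

with characteristic exponent α, localization δ, and dispersion γ.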
24
Results of Measured RFI Data for Broadband Noise
Data set of 80,000 samples collected using a 20 GSPS scope
Estimated parameters and Kullback-Leibler (KL) divergence of each fitted model:
Symmetric alpha stable model (KL divergence 0.0514): localization δ = 0.0043, characteristic exponent α = 1.2105, dispersion γ = 0.2413
Middleton Class A model (KL divergence 0.0825): overlap index A = 0.1036, Gaussian factor Γ = 0.7763
Gaussian model (KL divergence 0.2217): mean µ = 0, variance σ² = 1
25
Coherent Detection Small Signal Approximation
Expand the noise pdf p_Z(z) in a Taylor series about S_j = 0 (j = 1, 2)
The optimal decision rule for the approximation is a threshold detector
The optimal detector for the approximation is a logarithmic nonlinearity followed by a correlation receiver
We use 100 terms of the series expansion for d/dx_i ln p_Z(x_i) in simulations
26
Hole Punching (Blanking) Filter
  • Sets the sample to 0 when it exceeds a threshold [Ambike, 1994] (see the sketch after this list)
  • Intuition
  • Large values are impulses and the true value cannot be recovered
  • Replacing large values with zero will not bias the (correlation) receiver
  • If the additive noise were purely Gaussian, then the larger the threshold, the lower the detrimental effect on bit error rate
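A minimal Python sketch of the blanking operation described above; the threshold is application-dependent and left as a parameter:

import numpy as np

def hole_punch(received, threshold):
    # Blanking filter: zero out any sample whose magnitude exceeds the threshold.
    out = np.array(received, dtype=float, copy=True)
    out[np.abs(out) > threshold] = 0.0
    return out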

27
Filtering and Detection Alpha Stable Model
MAP detection removes the nonlinear filter
The decision rule is given in terms of p(·), the SαS distribution (equation omitted)
Approximations for the SαS distribution are discussed on the next slide
28
MAP Detector PDF Approximation
  • An SαS random variable Z with parameters α, δ, γ can be written Z = X·Y^(1/2) [Kuruoglu, 1998]
  • X is zero-mean Gaussian with variance 2γ
  • Y is a positive stable random variable with parameters depending on α
  • The pdf of Z can be written as a mixture model of N Gaussians [Kuruoglu, 1998]
  • The mean δ can be added back in
  • Obtain f_Y(·) by taking the inverse FFT of the characteristic function and normalizing
  • Number of mixtures (N) and values of the sampling points (v_i) are tunable parameters

29
Bit Error Rate (BER) Performance in Alpha Stable
Noise
30
Symmetric Alpha Stable Process PDF
A closed-form expression does not exist in general
Power series expansions can be derived in some cases
Standard symmetric alpha stable model: localization parameter δ = 0
31
Estimation of Middleton Class A Model Parameters
  • Expectation maximization
  • E step: calculate the log-likelihood function with the current parameter values
  • M step: find the parameter set that maximizes the log-likelihood function
  • EM estimator for Class A parameters [Zabin & Poor, 1991] (see the sketch after this list)
  • Expresses envelope statistics as a sum of weighted pdfs
  • Maximization step is iterative
  • Given A, maximize over K (with K = A·Γ): root a 2nd-order polynomial
  • Given K, maximize over A: root a 4th-order polynomial (after approximation)
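Zabin & Poor's maximization step roots the low-order polynomials listed above; as a minimal illustration of the expectation step only, the Python sketch below computes per-sample responsibilities over a truncated Class A mixture (the parameter values A, Γ, σ², the truncation length M, and the helper name are assumptions of the example):

import numpy as np
from scipy.stats import norm, poisson

def class_a_responsibilities(x, A, Gamma, sigma2=1.0, M=11):
    # Posterior probability that each sample x[i] was generated by the m-th
    # term of the (truncated) Poisson-weighted Gaussian mixture defining the
    # Middleton Class A pdf.
    m = np.arange(M)
    weights = poisson.pmf(m, A)                         # e^{-A} A^m / m!
    sigma_m2 = sigma2 * (m / A + Gamma) / (1 + Gamma)   # per-term variances
    lik = weights * norm.pdf(np.asarray(x)[:, None], loc=0.0,
                             scale=np.sqrt(sigma_m2))
    return lik / lik.sum(axis=1, keepdims=True)         # E-step responsibilities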

32
Estimation of Symmetric Alpha Stable Parameters
  • Based on extreme order statistics [Tsihrintzis & Nikias, 1996]
  • PDFs of the maximum and minimum of a sequence of independently and identically distributed (IID) data samples follow
  • PDF of the maximum (equation omitted)
  • PDF of the minimum (equation omitted)
  • Extreme order statistics of the symmetric alpha stable pdf approach a Fréchet distribution as N goes to infinity
  • Parameter estimators are then based on simple order statistics
  • Advantage: fast, computationally efficient (non-iterative)
  • Disadvantage: requires a large set of data samples (N ~ 10,000)

33
Class A Parameter Estimation Based on APD
(Exceedance Probability Density) Plot
34
Class A Parameter Estimation Based on Moments
Moments (as derived from the characteristic equation) and parameter estimates (equations omitted)
Odd-order moments are zero [Middleton, 1999]
35
  • Middleton Class B Model
  • Envelope Statistics
  • Envelope exceedance probability density (APD), which is 1 minus the cumulative distribution function

36
Class B Envelope Statistics
37
Parameters for Middleton Class B Noise
38
Class B Exceedance Probability Density Plot
39
Expectation Maximization Overview
40
Maximum Likelihood for Sum of Densities
41
EM Estimator for Class A Parameters Using 1000
Samples
Iterations for parameter A to converge (plot omitted)
PDFs with 11 summation terms; 50 simulation runs per setting
Convergence criterion and example learning curve (figure omitted)
42
Results of EM Estimator for Class A Parameters
43
Extreme Order Statistics
44
Estimator for Alpha-Stable
0 < p < α (equations omitted)
45
Results for Symmetric Alpha Stable Parameter
Estimator
Data length (N) was 10,000 samples
Results averaged over 100 simulation runs
Estimate α and the mean δ directly from the data; estimate the dispersion γ from the α and δ estimates
Continued on next slide
Plot (omitted): mean squared error in the estimate of the characteristic exponent α
46
Results for Symmetric Alpha Stable Parameter
Estimator
47
Wiener Filtering Linear Filter
Optimal in the mean squared error sense when the noise is Gaussian
Model and design (block diagram omitted)
Minimize the mean squared error E[|e(n)|²]
48
Wiener Filtering Finite Impulse Response (FIR)
Case
Wiener-Hopf equations for the FIR Wiener filter of order p-1:
\sum_{l=0}^{p-1} w(l)\, r_x(k-l) = r_{dx}(k), \quad k = 0, 1, \ldots, p-1
General solution in the frequency domain:
W(e^{j\omega}) = \Phi_{dx}(e^{j\omega}) / \Phi_x(e^{j\omega})
Notation: desired signal d(n), corrupted signal x(n), noise z(n), Wiener FIR filter w(n), power spectrum \Phi(e^{j\omega}), cross-correlation of d and x r_{dx}(n), autocorrelation of x r_x(n)
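A minimal Python sketch of solving these Wiener-Hopf equations from sample correlation estimates; the filter length p and the availability of the desired signal d(n) for computing r_dx are assumptions of the example:

import numpy as np
from scipy.linalg import toeplitz

def fir_wiener(x, d, p):
    # Estimate the autocorrelation of x and the cross-correlation of d and x,
    # then solve R_x w = r_dx for the length-p FIR Wiener filter.
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    n = len(x)
    rx = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p)])
    rdx = np.array([np.dot(d[k:], x[: n - k]) / n for k in range(p)])
    return np.linalg.solve(toeplitz(rx), rdx)

# Usage sketch: w = fir_wiener(x, d, p=100); d_hat = np.convolve(x, w)[: len(x)]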
49
Wiener Filtering 100-tap FIR Filter
Pulse shape: 10 samples per symbol, 10 symbols per pulse
Channel: A = 0.35, Γ = 0.5 × 10^-3, SNR = -10 dB, memoryless
(Plot omitted)
50
Incoherent detection: Bayes formulation [Spaulding & Middleton, 1977, pt. II]
Small signal approximation (equations omitted)
51
Incoherent Detection Optimal Structure
Incoherent Correlation Detector
The optimal detector for the small signal
approximation is basically the correlation
receiver preceded by the logarithmic nonlinearity.
52
Coherent Detection Class A Noise
Comparison of the performance of the correlation receiver (Gaussian optimal receiver) and the nonlinear detector [Spaulding & Middleton, 1977, pt. II] (plots omitted)
53
Coherent Detection Small Signal Approximation
Near-optimal for small-amplitude signals; suboptimal for higher-amplitude signals
Antipodal signaling, A = 0.35, Γ = 0.5 × 10^-3
Communication performance of the approximation vs. the upper bound [Spaulding & Middleton, 1977, pt. I] (plot omitted)
54
Volterra Filters
Non-linear (in the signal) polynomial filter
By the Stone-Weierstrass theorem, the Volterra signal expansion can model many non-linear systems to an arbitrary degree of accuracy (similar to a Taylor expansion with memory)
Has a symmetry structure that simplifies computational complexity: C(N+p-1, p) coefficients instead of N^p
Thus for N = 8 and p = 8, N^p = 16,777,216 while C(N+p-1, p) = C(15, 8) = 6,435
55
Adaptive Noise Cancellation
The computing platform contains multiple antennas that can provide additional information regarding the noise
Adaptive noise cancelling methods use an additional reference signal that is correlated with the corrupting noise
Notation: s = signal, s + n0 = corrupted signal, n0 = noise, n1 = reference input, z = system output
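A minimal Python sketch of an LMS-based canceller in this configuration [Widrow et al., 1975]; the filter length p and step size mu are illustrative parameters:

import numpy as np

def lms_canceller(primary, reference, p=32, mu=0.01):
    # primary   = s + n0 (signal plus corrupting noise)
    # reference = n1     (noise-only input correlated with n0)
    # The adaptive filter predicts n0 from n1; the error e = primary - y is the
    # system output z, an estimate of the signal s.
    w = np.zeros(p)          # adaptive FIR weights
    buf = np.zeros(p)        # most recent reference samples
    z = np.zeros(len(primary))
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        y = np.dot(w, buf)              # estimate of the corrupting noise n0
        e = primary[n] - y              # output sample z(n)
        w = w + 2 * mu * e * buf        # LMS weight update
        z[n] = e
    return z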
56
Haring's Receiver Simulation Results
57
Coherent Detection in Class A Noise with Γ = 10^-4, A = 0.1
Correlation receiver performance vs. SNR (dB) (plots omitted)
58
Myriad Filtering
Myriad filters exhibit high statistical efficiency in bell-shaped impulsive distributions like the SαS distributions
They have been used as both edge enhancers and smoothers in image processing applications
In the communication domain, they have been used to estimate a transmitted value over a channel using a known pulse corrupted by additive noise [Gonzalez, 1996]
In this work, we used a sliding-window version of the myriad filter to mitigate the impulsiveness of the additive noise [Nassar et al., 2007]
59
MAP Detection
Decision rule Λ(X): hard decision, Bayesian formulation [Spaulding & Middleton, 1977]
Input: corrupted signal; output: H1 or H2; equally probable source (block diagram omitted)
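A minimal Python sketch of this hard-decision rule for two equally probable hypotheses, assuming i.i.d. noise samples and a generic per-sample noise log-pdf (for example, a Class A or Gaussian-mixture approximation of the SαS pdf):

import numpy as np

def map_detect(x, s1, s2, noise_logpdf):
    # x: N received samples for one symbol; s1, s2: candidate transmitted
    # waveforms (same length); noise_logpdf: log-pdf of a single noise sample.
    ll1 = np.sum(noise_logpdf(np.asarray(x) - np.asarray(s1)))  # log-likelihood under H1
    ll2 = np.sum(noise_logpdf(np.asarray(x) - np.asarray(s2)))  # log-likelihood under H2
    return "H1" if ll1 >= ll2 else "H2"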
60
Results
61
MAP Detector PDF Approximation
  • An SαS random variable Z with parameters α, δ, γ can be written Z = X·Y^(1/2) [Kuruoglu, 1998]
  • X is zero-mean Gaussian with variance 2γ
  • Y is a positive stable random variable with parameters depending on α
  • The pdf of Z can be written as a mixture model of N Gaussians [Kuruoglu, 1998]
  • The mean δ can be added back in
  • Obtain f_Y(·) by taking the inverse FFT of the characteristic function and normalizing
  • Number of mixtures (N) and values of the sampling points (v_i) are tunable parameters

62
Myriad Filtering
  • Sliding window algorithm
  • Outputs the myriad of the sample window
  • Myriad of order k for samples x1, x2, ..., xN [Gonzalez & Arce, 2001] (equation omitted; see the sketch after the next slide)
  • As k decreases, less impulsive noise gets through the myriad filter
  • As k → 0, the filter tends to a mode filter (outputs the value with the highest frequency)
  • Empirical choice of k [Gonzalez & Arce, 2001]

63
Myriad Filtering Implementation
  • Given a window of samples x1, ..., xN, find β ∈ [xmin, xmax]
  • Optimal myriad algorithm
  • Differentiate the objective function polynomial p(β) with respect to β
  • Find the roots and retain the real roots
  • Evaluate p(β) at the real roots and the extrema
  • Output the β that gives the smallest value of p(β)
  • Selection myriad (reduced complexity)
  • Use x1, ..., xN as the possible values of β
  • Pick the value that minimizes the objective function p(β) (see the sketch below)
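The sample myriad referred to above minimizes the objective p(β) = Σ_i log[k² + (x_i - β)²] over β [Gonzalez & Arce, 2001]. A minimal Python sketch of the sliding-window selection myriad (window length and linearity parameter k are illustrative inputs):

import numpy as np

def selection_myriad(window, k):
    # Reduced-complexity myriad: restrict beta to the samples themselves and
    # pick the one that minimizes p(beta) = sum_i log(k^2 + (x_i - beta)^2).
    x = np.asarray(window, dtype=float)
    cost = [np.sum(np.log(k ** 2 + (x - beta) ** 2)) for beta in x]
    return x[int(np.argmin(cost))]

def myriad_filter(signal, window_len, k):
    # Sliding-window selection myriad filter (edge-padded).
    x = np.asarray(signal, dtype=float)
    half = window_len // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([selection_myriad(padded[i:i + window_len], k)
                     for i in range(len(x))])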

64
Hole Punching (Blanking) Filter
  • Sets the sample to 0 when it exceeds a threshold [Ambike, 1994]
  • Intuition
  • Large values are impulses and the true value cannot be recovered
  • Replacing large values with zero will not bias the (correlation) receiver
  • If the additive noise were purely Gaussian, then the larger the threshold, the lower the detrimental effect on bit error rate

65
Complexity Analysis
Method | Complexity per symbol | Analysis
Hole puncher + correlation receiver | O(NS) | A decision needs to be made about each sample
Optimal myriad + correlation receiver | O(NW³S) | Due to polynomial rooting, which is equivalent to eigenvalue decomposition
Selection myriad + correlation receiver | O(NW²S) | Evaluation of the myriad function and comparison
MAP approximation | O(MNS) | Evaluating the approximate pdf (M is the number of Gaussians in the mixture)
N is the oversampling factor, S is the constellation size, W is the window size