1
Natural gradient learning for nonstationary
source separation
  • Heeyoul Choi, Seungjin Choi,
  • Andrzej Cichocki, Shun-ichi Amari
  • {hychoi, seungjin}@postech.ac.kr
  • {cia, amari}@brain.riken.go.jp

2
Contents
  • Introduction
  • Problem Formulation
  • Model Assumptions
  • Cost Function
  • Learning Algorithms
  • Local Stability Analysis
  • Simulation Results
  • Conclusion

3
Introduction (Source Separation)
  • With Higher-Order Statistics (HOS)
  • Stationary sources
  • With Second-Order Statistics (SOS)
  • Spatially uncorrelated but temporally correlated sources
  • Nonstationary sources
  • Linear feedback neural network
  • Simple learning algorithm
  • Fully-connected recurrent network & feedforward network
  • New algorithm using the natural gradient method

4
Problem Formulation
  • x(t) = A s(t)
  • x: observed data
  • A: mixing matrix
  • s: source data
  • y(t) = W x(t) = W A s(t) = P D s(t)
  • y: demixed output data
  • P: permutation matrix
  • D: scaling matrix (diagonal matrix)
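
To make the model concrete, here is a minimal NumPy sketch of the mixing stage (the 2x2 matrix A and the Gaussian placeholder sources are illustrative assumptions, not the paper's simulation setup):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 10_000                       # number of samples
    A = np.array([[1.0, 0.6],        # hypothetical mixing matrix
                  [0.4, 1.0]])       # (full column rank, see AS1 below)

    s = rng.standard_normal((2, T))  # placeholder sources s(t)
    x = A @ s                        # observations: x(t) = A s(t)

    # Goal: learn W so that y(t) = W x(t) = P D s(t), i.e. W A equals
    # a permutation matrix times a diagonal scaling matrix.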

5
Model Assumptions
  • AS1: The mixing matrix A has full column rank.
  • AS2: Source signals si(t) are statistically independent with zero mean.
  • AS3: The variance ratios ri(t)/rj(t) (i ≠ j), where ri(t) = E[si²(t)],
    are not constant over time.
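
As an illustration of AS2 and AS3, nonstationary sources can be synthesized by modulating independent zero-mean signals with slowly varying envelopes, so that the variance ratio r1(t)/r2(t) changes over time (this particular construction is an assumption for illustration only):

    import numpy as np

    rng = np.random.default_rng(1)
    T = 10_000
    t = np.arange(T)

    # Slowly varying standard deviations give nonstationary variances r_i(t).
    env1 = 1.0 + 0.8 * np.sin(2 * np.pi * t / 2000)
    env2 = 1.0 + 0.8 * np.cos(2 * np.pi * t / 3000)

    s = np.vstack([env1 * rng.standard_normal(T),   # zero mean, independent (AS2)
                   env2 * rng.standard_normal(T)])  # r1(t)/r2(t) varies (AS3)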

6
Cost Function
  • For nonstationary sources, separation is achieved if the
    cross-correlations E[yi(t) yj(t)] (i ≠ j) are zero.
  • To eliminate the cross-correlations, minimize
    J(W) = 1/2 { Σi log E[yi²(t)] − log det E[y(t) yᵀ(t)] }
  • J(W) ≥ 0, with equality iff E[y(t) yᵀ(t)] is diagonal, is a direct
    consequence of Hadamard's inequality.
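
A short sketch of this cost evaluated on sample statistics, using the sample covariance of y = W x as an estimate of E[y(t) yᵀ(t)] (function and variable names are illustrative):

    import numpy as np

    def cost_J(W, x):
        """Decorrelation cost from Hadamard's inequality.

        x is an (n, T) array of observations. Returns a scalar >= 0 that
        vanishes iff the sample covariance of y = W x is diagonal.
        """
        y = W @ x
        R = (y @ y.T) / y.shape[1]           # sample estimate of E[y y^T]
        sign, logdet = np.linalg.slogdet(R)  # numerically stable log det
        return 0.5 * (np.sum(np.log(np.diag(R))) - logdet)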

7
Learning Algorithms
  • Matsuoka-Ohya-Kawamoto Algorithm
  • Natural Gradient Based Algorithms
  • In a Euclidean space, the conventional gradient gives the steepest
    descent direction.
  • In a Riemannian space, the natural gradient does.
  • Fully-connected Recurrent Network
  • Fully-connected Feedforward Network
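
For a nonsingular demixing matrix W, the natural gradient is obtained from the conventional gradient by right-multiplying with WᵀW, a standard result of Amari's that this framework relies on (stated here as background, since the slide formulas are not in the transcript):

    natural gradient of J(W) = (conventional gradient of J(W)) WᵀW

The update ΔW = −η × (natural gradient) is then the steepest descent step in the Riemannian geometry of the matrix space, and its multiplicative form is what makes the resulting algorithms equivariant.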

8
Learning Algorithms (1)
  • Matsuoka-Ohya-Kawamoto Algorithm

  [Diagram: linear feedback network with inputs x1(t), x2(t), outputs
  y1(t), y2(t), and cross-coupling weights w12, w21]
9
Learning Algorithms (2)
  • Fully-connected Recurrent Network

  [Diagram: fully-connected recurrent network with inputs x1(t), x2(t),
  outputs y1(t), y2(t), and weights w11, w12, w21, w22]
10
Learning Algorithms (3)
  • Fully-connected Feedforward Network

  [Diagram: fully-connected feedforward network with inputs x1(t), x2(t),
  outputs y1(t), y2(t), and weights w11, w12, w21, w22]
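
The update equations themselves were on the slide images and are not reproduced in this transcript. As a sketch, natural gradient descent on the cost J(W) above leads to a multiplicative update of the form ΔW = η (I − Λ⁻¹ y yᵀ) W with Λ = diag(y₁², …, y_N²); the step size, initialization, and regularization below are illustrative assumptions:

    import numpy as np

    def separate(x, eta=0.01, eps=1e-6):
        """Online natural-gradient decorrelation, feedforward network y = W x.

        x is an (n, T) array of observations. Returns the demixing matrix W.
        """
        n, T = x.shape
        W = np.eye(n)                  # start from the identity
        for t in range(T):
            y = W @ x[:, t]            # current output y(t) = W x(t)
            lam = y * y + eps          # Lambda(t) = diag(y_i^2), regularized
            W += eta * (W - np.outer(y / lam, y) @ W)
        return W

Applied to the mixture sketched earlier, W @ A should approach a permutation of a diagonal matrix, and the off-diagonal output correlations should shrink toward zero. Because every term of the update right-multiplies W, the learning dynamics depend only on the global system W A, which is the equivariant property claimed in the conclusions.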
11
Local Stability Analysis
  • In the nonstationarity-based algorithms, the stationary points satisfy
    E[yi(t) yj(t)] = 0 for i ≠ j.
  • A stationary point of an algorithm is stable if the Hessian of the cost
    is positive definite there.

12
Local Stability Analysis
  • The second term is always positive.
  • The first term can be rewritten in a form that is also positive.
  • Hence the Hessian is always positive definite.
  • Our algorithms are therefore locally stable, regardless of the
    probability distributions of the sources.

13
Simulation Results
  • N = 2, well-conditioned mixing:
  • All three algorithms were successful.
  • N = 3, well-conditioned mixing:
  • Our two proposed algorithms outperformed the first algorithm.
  • N = 3, ill-conditioned mixing:
  • The first algorithm failed to separate the mixtures, whereas the other
    two algorithms were still successful.
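
The performance index (PI) reported on the next slide is a standard measure of how far the global system G = W A is from a permutation of a diagonal matrix. The slide's exact formula is not in the transcript; one common definition, used here as an assumption, is:

    import numpy as np

    def performance_index(G):
        """PI of the global system G = W A.

        Returns 0 when G is exactly a permutation times a diagonal matrix;
        larger values indicate residual mixing.
        """
        G = np.abs(G)
        rows = (G / G.max(axis=1, keepdims=True)).sum(axis=1) - 1
        cols = (G / G.max(axis=0, keepdims=True)).sum(axis=0) - 1
        n = G.shape[0]
        return (rows.sum() + cols.sum()) / (2 * n * (n - 1))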

14
Simulation Result (PI)
  • Simulation 2
  [Plot: performance index (PI) for Simulation 2]

15
Simulation Result (SIRI)
  • Simulation 2
  [Plot: SIRI for Simulation 2]

16
Conclusions
  • The proposed algorithms were derived in the
    framework of the natural gradient.
  • They are efficient and possess the equivariant property.
  • They are locally stable.
  • They outperformed the existing algorithm.
  • The extension of the proposed method to convolutive mixtures is under
    investigation.