Title: Introduction to Radial Basis Function Networks
1. Introduction to Radial Basis Function Networks
2. Content
- Overview
- The Model of the Function Approximator
- The Radial Basis Function Networks
- RBFNs for Function Approximation
- The Projection Matrix
- Learning the Kernels
- Bias-Variance Dilemma
- Model Selection
3. Introduction to Radial Basis Function Networks
4. Typical Applications of NN
- Pattern Classification
- Function Approximation
- Time-Series Forecasting
5. Function Approximation
[Diagram: an unknown function and its approximator, both mapping the same inputs to outputs.]
6. Supervised Learning
[Diagram: input-output examples of the unknown function serve as training data for a neural network.]
7. Neural Networks as Universal Approximators
- Feedforward neural networks with a single hidden layer of sigmoidal units are capable of approximating uniformly any continuous multivariate function, to any desired degree of accuracy.
  - Hornik, K., Stinchcombe, M., and White, H. (1989). "Multilayer Feedforward Networks are Universal Approximators," Neural Networks, 2(5), 359-366.
- Like feedforward neural networks with a single hidden layer of sigmoidal units, RBF networks can be shown to be universal approximators.
  - Park, J. and Sandberg, I. W. (1991). "Universal Approximation Using Radial-Basis-Function Networks," Neural Computation, 3(2), 246-257.
  - Park, J. and Sandberg, I. W. (1993). "Approximation and Radial-Basis-Function Networks," Neural Computation, 5(2), 305-316.
8. Statistics vs. Neural Networks
Statistics                Neural Networks
model                     network
estimation                learning
regression                supervised learning
interpolation             generalization
observations              training set
parameters                (synaptic) weights
independent variables     inputs
dependent variables       outputs
ridge regression          weight decay
9. Introduction to Radial Basis Function Networks
- The Model of the Function Approximator
10. Linear Models
f(x) = Σ_{j=1..m} w_j φ_j(x), with weights w_j and fixed basis functions φ_j.
11. Linear Models
The inputs (feature vectors) feed hidden units that perform decomposition, feature extraction, and transformation; the output unit forms the linearly weighted output.
12. Linear Models
Can you name some bases?
With inputs x = (x_1, ..., x_n), hidden units φ_1, ..., φ_m, and weights w_1, ..., w_m, the linearly weighted output is y = w_1 φ_1(x) + w_2 φ_2(x) + ... + w_m φ_m(x).
13. Example Linear Models
Are they orthogonal bases?
- Polynomial
- Fourier Series
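Typical closed forms for these two bases, written here as a sketch of the usual conventions (the exact normalization on the original slide is not shown):

```latex
% Polynomial basis on a 1-D input
\phi_j(x) = x^{\,j}, \qquad j = 0, 1, \ldots, m
% Fourier basis on [0, T), with \omega_0 = 2\pi / T
\phi_{2j-1}(x) = \cos(j\omega_0 x), \qquad \phi_{2j}(x) = \sin(j\omega_0 x)
```

The monomials are not orthogonal on an interval, whereas the sine and cosine terms of the Fourier series are.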
14. Single-Layer Perceptrons as Universal Approximators
With a sufficient number of sigmoidal hidden units, such a network can be a universal approximator.
15. Radial Basis Function Networks as Universal Approximators
With a sufficient number of radial-basis-function hidden units, such a network can also be a universal approximator.
16. Non-Linear Models
f(x) = Σ_{j=1..m} w_j φ_j(x), where the weights, and here also the basis functions themselves, are adjusted by the learning process.
17. Introduction to Radial Basis Function Networks
- The Radial Basis Function Networks
18. Radial Basis Functions
Three parameters characterize a radial function φ_i(x) = φ(‖x − x_i‖):
- Center: x_i
- Distance measure: r = ‖x − x_i‖
- Shape: φ
19. Typical Radial Functions
- Gaussian
- Hardy Multiquadratic
- Inverse Multiquadratic
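The usual closed forms, written for r = ‖x − x_i‖ with shape parameters σ, c > 0 (the exact parameterization used on the original slides is an assumption here):

```latex
\text{Gaussian:}\quad \phi(r) = \exp\!\left(-\frac{r^2}{2\sigma^2}\right)
\qquad
\text{Hardy multiquadratic:}\quad \phi(r) = \sqrt{r^2 + c^2}
\qquad
\text{Inverse multiquadratic:}\quad \phi(r) = \frac{1}{\sqrt{r^2 + c^2}}
```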
20. Gaussian Basis Function (σ = 0.5, 1.0, 1.5)
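A minimal NumPy/Matplotlib sketch that reproduces this kind of plot; the center at 0 and the plotting range are illustrative choices, not taken from the slide:

```python
import numpy as np
import matplotlib.pyplot as plt

def gaussian_rbf(x, center=0.0, sigma=1.0):
    """Gaussian radial basis function centered at `center` with width `sigma`."""
    return np.exp(-(x - center) ** 2 / (2.0 * sigma ** 2))

x = np.linspace(-4, 4, 401)
for sigma in (0.5, 1.0, 1.5):
    plt.plot(x, gaussian_rbf(x, sigma=sigma), label=f"sigma = {sigma}")
plt.legend()
plt.title("Gaussian basis function for three widths")
plt.show()
```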
21. Inverse Multiquadratic
[Plot: inverse multiquadratic basis functions for five values c1, ..., c5 of the shape parameter.]
22. Most General RBF
The basis {φ_i : i = 1, 2, ...} is nearly orthogonal.
23. Properties of RBFs
- On-center, off-surround response.
- Analogies with localized receptive fields found in several biological structures, e.g.,
  - visual cortex
  - ganglion cells
24. The Topology of RBF
As a function approximator: the inputs (feature vectors) are projected onto the hidden units, and the output units interpolate among the hidden-unit activations.
25. The Topology of RBF
As a pattern classifier: the hidden units represent subclasses of the input feature vectors, and the output units represent classes.
26. Introduction to Radial Basis Function Networks
- RBFNs for Function Approximation
27-31. The Idea
[Figure sequence in the (x, y) plane: sample points are covered by localized radial bumps whose weighted sum gradually builds up an approximation of the underlying curve.]
32. Radial Basis Function Networks as Universal Approximators
Training set: T = {(x_k, y_k) : k = 1, ..., p}.
Goal: f(x_k) ≈ y_k for all k.
33. Learn the Optimal Weight Vector
Training set: T = {(x_k, y_k) : k = 1, ..., p}.
Goal: choose the weights so that f(x_k) ≈ y_k for all k.
34. Regularization
Training set: T = {(x_k, y_k) : k = 1, ..., p}.
Goal: minimize the regularized sum of squared errors over all k,
E(w) = Σ_{k=1..p} (y_k − f(x_k))² + λ Σ_{j=1..m} w_j².
If regularization is unneeded, set λ = 0.
35. Learn the Optimal Weight Vector
Minimize E(w) = Σ_{k=1..p} (y_k − f(x_k))² + λ Σ_{j=1..m} w_j², where f(x) = Σ_{j=1..m} w_j φ_j(x).
36. Learn the Optimal Weight Vector
Define the design matrix Φ with entries Φ_kj = φ_j(x_k) and the target vector y = (y_1, ..., y_p)ᵀ.
37. Learn the Optimal Weight Vector
Define the error vector e = y − Φw, so that E(w) = eᵀe + λ wᵀw.
38. Learn the Optimal Weight Vector
Setting ∂E/∂w = 0 gives the normal equations (ΦᵀΦ + λI) w* = Φᵀ y.
39. Learn the Optimal Weight Vector
Design matrix: Φ. Variance matrix: A⁻¹ = (ΦᵀΦ + λI)⁻¹. The optimal weight vector is w* = A⁻¹ Φᵀ y.
40. Summary
Training set {(x_k, y_k)}: build Φ from the fixed kernels, then solve (ΦᵀΦ + λI) w* = Φᵀ y for the weights.
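A minimal NumPy sketch of this batch solution, assuming Gaussian kernels with given centers and a shared width; the toy data, centers, and λ are illustrative, not taken from the slides:

```python
import numpy as np

def design_matrix(X, centers, sigma):
    """Phi[k, j] = exp(-||x_k - mu_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def optimal_weights(Phi, y, lam=0.0):
    """Solve the normal equations (Phi^T Phi + lam I) w = Phi^T y."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

# Toy problem: fit y = sin(x) from noisy samples with 10 fixed kernels.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
centers = np.linspace(-3, 3, 10).reshape(-1, 1)
Phi = design_matrix(X, centers, sigma=1.0)
w = optimal_weights(Phi, y, lam=1e-3)
print("training MSE:", np.mean((y - Phi @ w) ** 2))
```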
41. Introduction to Radial Basis Function Networks
42. The Empirical-Error Vector
43. The Empirical-Error Vector
Error vector: e = y − ŷ = y − Φw*.
44. Sum-Squared-Error
SSE = eᵀe. If λ = 0, the RBFN learning algorithm is to minimize the SSE (MSE).
45. The Projection Matrix
With the projection matrix P = I − Φ(ΦᵀΦ + λI)⁻¹Φᵀ, the error vector is e = P y.
46. Introduction to Radial Basis Function Networks
47. RBFNs as Universal Approximators
Training set: T = {(x_k, y_k) : k = 1, ..., p}.
Kernels: φ_j(x) = exp(−‖x − μ_j‖² / (2σ_j²)), j = 1, ..., m.
48. What to Learn?
- Weights w_ij's
- Centers μ_j's of the φ_j's
- Widths σ_j's of the φ_j's
- Number of φ_j's → Model Selection
49. One-Stage Learning
50. One-Stage Learning
The simultaneous updates of all three sets of parameters may be suitable for non-stationary environments or on-line settings.
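A minimal sketch of such simultaneous updates for a 1-D Gaussian RBFN trained by stochastic gradient descent on the squared error; the learning rate, initialization, and toy data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, 200)             # 1-D training inputs
Y = np.sin(X)                           # targets
m, eta = 8, 0.05                        # number of kernels, learning rate
mu = rng.uniform(-3, 3, size=m)         # centers
sigma = np.ones(m)                      # widths
w = np.zeros(m)                         # output weights

for epoch in range(100):
    for x, y in zip(X, Y):
        h = np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))   # kernel activations
        e = y - w @ h                                      # prediction error
        # Negative gradients of 0.5*e^2 w.r.t. centers and widths
        # (computed before w changes).
        step_mu = e * w * h * (x - mu) / sigma ** 2
        step_sigma = e * w * h * (x - mu) ** 2 / sigma ** 3
        # Simultaneous updates of all three parameter sets.
        w += eta * e * h
        mu += eta * step_mu
        sigma = np.maximum(sigma + eta * step_sigma, 1e-2)  # keep widths positive

print("final training MSE:",
      np.mean((Y - np.exp(-(X[:, None] - mu) ** 2 / (2 * sigma ** 2)) @ w) ** 2))
```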
51. Two-Stage Training
Step 1: Determine
- the centers μ_j's of the φ_j's,
- the widths σ_j's of the φ_j's,
- the number of φ_j's.
Step 2: Determine the weights w_ij's, e.g., using batch learning.
52. Train the Kernels
53. Unsupervised Training
54. Methods
- Subset Selection
  - Random Subset Selection
  - Forward Selection
  - Backward Elimination
- Clustering Algorithms
  - K-means
  - LVQ
- Mixture Models
  - GMM
55. Subset Selection
56. Random Subset Selection
- Randomly choose a subset of points from the training set as centers.
- Sensitive to the initially chosen points.
- Use adaptive techniques to tune
  - centers,
  - widths,
  - points.
57. Clustering Algorithms
Partition the data points into K clusters.
58. Clustering Algorithms
Is such a partition satisfactory?
59. Clustering Algorithms
How about this?
60. Clustering Algorithms
[Figure: the data partitioned into four clusters with centers μ1, μ2, μ3, μ4.]
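A minimal NumPy sketch of the clustering route to kernel placement: plain Lloyd's (k-means) iterations choose the centers, and each width is set from the spread of its cluster. The width heuristic and toy data are assumptions for illustration, not taken from the slides:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns cluster centers and point labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Place one Gaussian kernel per cluster; width = RMS distance to the center.
X = np.random.default_rng(2).normal(size=(200, 2))
centers, labels = kmeans(X, k=4)
widths = np.array([np.sqrt(((X[labels == j] - centers[j]) ** 2).sum(axis=1).mean())
                   for j in range(4)])
print(centers, widths)
```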
61. Introduction to Radial Basis Function Networks
62. Goal Revisited
- Ultimate goal → generalization: minimize the prediction error.
- Goal of our learning procedure: minimize the empirical error.
63. Badness of Fit
- Underfitting
  - A model (e.g., a network) that is not sufficiently complex can fail to fully detect the signal in a complicated data set, leading to underfitting.
  - Produces excessive bias in the outputs.
- Overfitting
  - A model (e.g., a network) that is too complex may fit the noise, not just the signal, leading to overfitting.
  - Produces excessive variance in the outputs.
64. Underfitting/Overfitting Avoidance
- Model selection
- Jittering
- Early stopping
- Weight decay
- Regularization
- Ridge Regression
- Bayesian learning
- Combining networks
65. Best Way to Avoid Overfitting
- Use lots of training data, e.g.,
  - 30 times as many training cases as there are weights in the network;
  - for noise-free data, 5 times as many training cases as weights may be sufficient.
- Don't arbitrarily reduce the number of weights, for fear of underfitting.
66-67. Badness of Fit
[Plots: an underfit model and an overfit model on the same sample data.]
68. Bias-Variance Dilemma
However, it is not really a dilemma.
- Underfit: large bias, small variance.
- Overfit: small bias, large variance.
69. Bias-Variance Dilemma
- More on overfitting
  - Easily leads to predictions that are far beyond the range of the training data.
  - Produces wild predictions in multilayer perceptrons, even with noise-free data.
70. Bias-Variance Dilemma
However, it is not really a dilemma.
71. Bias-Variance Dilemma
[Figure: several fitted functions scattered around the true model; each deviation from the true model is a bias, and the fits depend, e.g., on the hidden nodes used. What is the mean of the bias? What is the variance of the bias?]
72. Bias-Variance Dilemma
[Figure: the spread of the fitted functions around their mean is the variance; the fits depend, e.g., on the hidden nodes used.]
73. Model Selection
Reduce the effective number of parameters, e.g., reduce the number of hidden nodes, so that the fitted functions show less variance around the true model.
74. Bias-Variance Dilemma
Goal: minimize the expected squared prediction error around the true model g(x) = E[y | x]; the fitted function depends, e.g., on the hidden nodes used.
75. Bias-Variance Dilemma
Goal: E[(y − f(x; D))²]. Expanding the square, the cross-term has expectation 0 and the noise term is a constant that does not depend on the model.
76. Bias-Variance Dilemma
The remaining model-dependent term splits again, with another cross-term of expectation 0.
77. Bias-Variance Dilemma
Goal: E[(y − f(x; D))²] = bias² + variance + noise, where
- bias² = (E_D[f(x; D)] − g(x))²,
- variance = E_D[(f(x; D) − E_D[f(x; D)])²],
- noise = E[(y − g(x))²].
Minimize both bias² and variance; the noise term cannot be minimized.
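Spelled out (a standard derivation, under the usual assumption that the noise in the test target y is independent of the training set D), with g(x) = E[y | x]:

```latex
E\big[(y - f(\mathbf{x};D))^2\big]
  = \underbrace{E\big[(y - g(\mathbf{x}))^2\big]}_{\text{noise}}
  + E_D\big[(g(\mathbf{x}) - f(\mathbf{x};D))^2\big]
  \qquad \text{(the cross-term has expectation 0)}

E_D\big[(g(\mathbf{x}) - f(\mathbf{x};D))^2\big]
  = \underbrace{\big(g(\mathbf{x}) - E_D[f(\mathbf{x};D)]\big)^2}_{\text{bias}^2}
  + \underbrace{E_D\big[\big(f(\mathbf{x};D) - E_D[f(\mathbf{x};D)]\big)^2\big]}_{\text{variance}}
```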
78. Model Complexity vs. Bias-Variance
[Plot: as model complexity (capacity) grows, bias² falls, variance rises, and the noise floor stays constant; the goal (total error) is smallest at an intermediate complexity.]
79. Bias-Variance Dilemma
Goal: trade bias² off against variance; the noise term is fixed.
80. Example (Polynomial Fits)
81. Example (Polynomial Fits)
82. Example (Polynomial Fits)
[Plots: polynomial fits of degree 1, 5, 10, and 15 to the same data.]
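A minimal NumPy sketch of this kind of experiment (the data-generating function, noise level, and sample sizes are illustrative assumptions): low degrees underfit, high degrees overfit.

```python
import numpy as np

rng = np.random.default_rng(3)
x_train = np.sort(rng.uniform(-1, 1, 20))
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.standard_normal(20)
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(2 * np.pi * x_test)                 # noise-free test targets

for degree in (1, 5, 10, 15):
    coeffs = np.polyfit(x_train, y_train, degree)   # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}  test MSE {test_mse:.4f}")
```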
83. Introduction to Radial Basis Function Networks
84. Model Selection
- Goal
  - Choose the fittest model.
- Criterion
  - Least prediction error.
- Main tools (to estimate model fitness)
  - Cross-validation
  - Projection matrix
85. Empirical Error vs. Model Fitness
- Ultimate goal → generalization: minimize the prediction error.
- Goal of our learning procedure: minimize the empirical error (MSE).
- Minimizing the empirical error does not by itself minimize the prediction error.
86. Estimating Prediction Error
- When you have plenty of data, use independent test sets.
  - E.g., use the same training set to train different models, and choose the best model by comparing them on the test set.
- When data is scarce, use
  - cross-validation,
  - bootstrap.
87. Cross-Validation
- The simplest and most widely used method for estimating prediction error.
- Partition the original set in several different ways and compute an average score over the different partitions, e.g.,
  - K-fold cross-validation,
  - leave-one-out cross-validation.
88. K-Fold CV
- Split the set D of available input-output patterns into k mutually exclusive subsets, say D1, D2, ..., Dk.
- Train and test the learning algorithm k times; each time it is trained on D \ Di and tested on Di.
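A minimal sketch of k-fold CV; `fit` and `predict` stand for any learner with that interface, and the straight-line example below is purely illustrative:

```python
import numpy as np

def k_fold_cv(X, y, fit, predict, k=5, seed=0):
    """Average test MSE over k mutually exclusive folds."""
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])            # train on D \ D_i
        err = y[test] - predict(model, X[test])    # test on D_i
        scores.append(np.mean(err ** 2))
    return float(np.mean(scores))

# Example: score a straight-line least-squares fit.
X = np.linspace(0, 1, 50).reshape(-1, 1)
y = 2 * X[:, 0] + 0.1 * np.random.default_rng(1).standard_normal(50)
fit = lambda A, b: np.linalg.lstsq(np.c_[A, np.ones(len(A))], b, rcond=None)[0]
predict = lambda w, A: np.c_[A, np.ones(len(A))] @ w
print(k_fold_cv(X, y, fit, predict, k=5))
```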
89. Leave-One-Out CV
A special case of k-fold CV with k = p.
- Split the p available input-output patterns into a training set of size p − 1 and a test set of size 1.
- Average the squared error on the left-out pattern over the p possible ways of partitioning.
90. Error Variance Predicted by LOO
A special case of k-fold CV. The LOO estimate of the variance of the prediction error is
σ̂²_LOO = (1/p) Σ_{k=1..p} (y_k − f_k(x_k))²,
where f_k is trained with the k-th pattern left out and each term is the error-square for the left-out element.
91. Error Variance Predicted by LOO
A special case of k-fold CV. Here f_k is, for the given model, the function with least empirical error on the training set with the k-th pattern removed. σ̂²_LOO serves as an index of the model's fitness; we want to find a model that also minimizes it.
92. Error Variance Predicted by LOO
A special case of k-fold CV. How can the error-square for each left-out element be estimated? Are there any efficient ways that avoid refitting the model p times?
93. Error Variance Predicted by LOO
For the regularized linear RBFN of the previous sections, the error-square for the left-out element can be obtained from a single fit: with the projection matrix P = I − Φ(ΦᵀΦ + λI)⁻¹Φᵀ,
y_k − f_k(x_k) = (P y)_k / P_kk, so σ̂²_LOO = (1/p) Σ_{k=1..p} ((P y)_k / P_kk)².
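A sketch checking that shortcut numerically on a ridge-regularized Gaussian design matrix; the toy data, centers, σ, and λ are assumptions for illustration. The fast LOO residuals (P y)_k / P_kk match the residuals obtained by actually refitting p times:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
centers = np.linspace(-3, 3, 8).reshape(-1, 1)
lam, sigma = 1e-2, 1.0

d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-d2 / (2 * sigma ** 2))                      # design matrix
A_inv = np.linalg.inv(Phi.T @ Phi + lam * np.eye(Phi.shape[1]))
P = np.eye(len(y)) - Phi @ A_inv @ Phi.T                  # projection matrix

loo_fast = (P @ y) / np.diag(P)                           # LOO residuals, no refitting
sigma2_loo = np.mean(loo_fast ** 2)                       # LOO error-variance estimate

loo_slow = np.empty(len(y))                               # brute force: refit p times
for k in range(len(y)):
    keep = np.arange(len(y)) != k
    w = np.linalg.solve(Phi[keep].T @ Phi[keep] + lam * np.eye(Phi.shape[1]),
                        Phi[keep].T @ y[keep])
    loo_slow[k] = y[k] - Phi[k] @ w
print(np.allclose(loo_fast, loo_slow), sigma2_loo)
```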
94. References
Kohavi, R. (1995). "A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection," International Joint Conference on Artificial Intelligence (IJCAI).