Transcript and Presenter's Notes

Title: Neural networks


1
Neural networks
  • Eric Postma
  • IKAT
  • Universiteit Maastricht

2
Overview
  • Introduction: the biology of neural networks
  • the biological computer
  • brain-inspired models
  • basic notions
  • Interactive neural-network demonstrations
  • Perceptron
  • Multilayer perceptron
  • Kohonen's self-organising feature map
  • Examples of applications

3
A typical AI agent
4
Two types of learning
  • Supervised learning
  • curve fitting, surface fitting, ...
  • Unsupervised learning
  • clustering, visualisation...

5
An input-output function
6
Fitting a surface to four points
7
(Artificial) neural networks
  • The digital computer versus
  • the neural computer

8
The Von Neumann architecture
9
The biological architecture
10
Digital versus biological computers
  • 5 distinguishing properties
  • speed
  • robustness
  • flexibility
  • adaptivity
  • context-sensitivity

11
Speed: the hundred time steps argument
  • "The critical resource that is most obvious is
    time. Neurons whose basic computational speed is
    a few milliseconds must be made to account for
    complex behaviors which are carried out in a few
    hundred milliseconds (Posner, 1978). This means
    that entire complex behaviors are carried out in
    less than a hundred time steps."
  • Feldman and Ballard (1982)

12
Graceful Degradation
(Plot: performance as a function of damage)
13
Flexibility: the Necker cube
14
Vision as constraint satisfaction
15
Adaptivity
Processing implies learning in biological
computers, whereas processing does not imply
learning in digital computers
16
Context-sensitivity: patterns as emergent properties
17
Robustness and context-sensitivity: coping with
noise
18
The neural computer
  • Is it possible to develop a model after the
    natural example?
  • Brain-inspired models
  • models based on a restricted set of structural and
    functional properties of the (human) brain

19
The Neural Computer (structure)
20
Neurons, the building blocks of the brain
21
Neural activity
(Plot: output activity as a function of input)
22
Synapses, the basis of learning and memory
23
Learning: Hebb's rule
(Diagram: neuron 1 connected to neuron 2 via a synapse)
24
Connectivity
  • An example
  • The visual system is a feedforward hierarchy of
    neural modules
  • Every module is (to a certain extent)
    responsible for a certain function

25
(Artificial) Neural Networks
  • Neurons
  • activity
  • nonlinear input-output function
  • Connections
  • weight
  • Learning
  • supervised
  • unsupervised

26
Artificial Neurons
  • input (vectors)
  • summation (excitation)
  • output (activation)

(Diagram: inputs i1, i2, i3 are summed into excitation e; activation a = f(e))
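A minimal sketch of such a neuron in Python (using the sigmoid as the nonlinear function f is an assumption here; the deck introduces it on a later slide):

```python
import numpy as np

def sigmoid(e):
    """Nonlinear input-output function f(e) = 1 / (1 + exp(-e))."""
    return 1.0 / (1.0 + np.exp(-e))

def neuron(inputs, weights):
    """Artificial neuron: summation (excitation e), then activation a = f(e)."""
    e = np.dot(weights, inputs)   # weighted sum of the input vector
    return sigmoid(e)             # activation a

a = neuron(np.array([0.5, -1.0, 2.0]),   # i1, i2, i3
           np.array([0.1,  0.4, 0.3]))   # one weight per input
```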
27
Input-output function
  • nonlinear function

(Plot: activation a = f(e); a approaches 0 for low excitation e and saturates for high e)
28
Artificial Connections (Synapses)
  • w_AB
  • the weight of the connection from neuron A to
    neuron B

29
The Perceptron
30
Learning in the Perceptron
  • Delta learning rule
  • the difference between the desired output t and
    the actual output o, given input x
  • Global error E
  • is a function of the differences between the
    desired and actual outputs
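A sketch of delta-rule training for a single perceptron; the thresholded output, the learning rate eta, and the toy OR data are illustrative assumptions:

```python
import numpy as np

def train_perceptron(X, t, eta=0.1, epochs=50):
    """Delta rule: w <- w + eta * (t - o) * x, applied per pattern."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):
            o = 1.0 if np.dot(w, x) > 0.0 else 0.0   # actual output o
            w += eta * (target - o) * x              # delta learning rule
    return w

# Toy example: logical OR; the third input column is a constant bias
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
t = np.array([0, 1, 1, 1], dtype=float)
w = train_perceptron(X, t)
```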

31
Gradient Descent
32
Linear decision boundaries
33
The history of the Perceptron
  • Rosenblatt (1959)
  • Minsky and Papert (1961)
  • Rumelhart and McClelland (1986)

34
The multilayer perceptron
(Diagram: input layer, hidden layer, and output layer)
35
Training the MLP
  • supervised learning
  • each training pattern: an input and a desired output
  • in each epoch: present all patterns
  • at each presentation: adapt the weights
  • after many epochs: convergence to a local minimum
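The loop structure this list describes, as a sketch; `adapt_weights` is a hypothetical stand-in for the forward and backward passes given later in the deck:

```python
def train_mlp(patterns, epochs, adapt_weights):
    """patterns: list of (input, desired_output) pairs."""
    for _ in range(epochs):       # in each epoch ...
        for x, t in patterns:     # ... present all patterns
            adapt_weights(x, t)   # adapt weights at each presentation
```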

36
Phoneme recognition with an MLP
(Diagram: input, frequencies; output, pronunciation)
37
Non-linear decision boundaries
38
Compression with an MLP: the autoencoder
39
hidden representation
40
Learning in the MLP
41
Preventing Overfitting
  • GENERALISATION: performance on the test set
  • Early stopping
  • Training, test, and validation sets
  • k-fold cross-validation
  • leave-one-out procedure
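A minimal sketch of k-fold cross-validation for estimating generalisation; `train_and_eval` is a hypothetical callback that trains on the training folds and returns performance on the held-out fold:

```python
import numpy as np

def k_fold_cv(X, t, k, train_and_eval):
    """Split the data into k folds; each fold serves once as the test set."""
    idx = np.random.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(train_and_eval(X[train], t[train], X[test], t[test]))
    return np.mean(scores)   # average test performance over the k folds
```

With k equal to the number of patterns, this reduces to the leave-one-out procedure.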

42
Image Recognition with the MLP
43
(No Transcript)
44
Hidden Representations
45
Other Applications
  • Practical
  • OCR
  • financial time series
  • fraud detection
  • process control
  • marketing
  • speech recognition
  • Theoretical
  • cognitive modeling
  • biological modeling

46
Some mathematics
47
Perceptron
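The slide itself is an image; the perceptron's computation in standard notation (the threshold θ is an assumption of this sketch, the slide may use a bias term instead):

$$o = f\!\Big(\sum_i w_i\,x_i - \theta\Big), \qquad
f(e) = \begin{cases} 1 & \text{if } e > 0 \\ 0 & \text{otherwise} \end{cases}$$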
48
Derivation of the delta learning rule

$$o = \sum_i w_i\,x_i, \qquad E = \tfrac{1}{2}\,(t - o)^2$$
$$\Delta w_i = -\eta\,\frac{\partial E}{\partial w_i} = \eta\,(t - o)\,x_i$$

with target output t, actual output o, and learning rate η.
49
MLP
50
Sigmoid function
  • May also be the tanh function
  • (range (-1, 1) instead of (0, 1))
  • Derivative: f'(x) = f(x)(1 - f(x))
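Written out for the logistic sigmoid, the derivative claim checks out:

$$f(x) = \frac{1}{1+e^{-x}}, \qquad
f'(x) = \frac{e^{-x}}{(1+e^{-x})^2} = f(x)\,\bigl(1 - f(x)\bigr)$$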

51
Derivation of the generalised delta rule
52
Error function (LMS)
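The slide content is an image; the least-mean-squares error it presumably shows, summed over patterns p and output neurons k:

$$E = \tfrac{1}{2}\sum_p \sum_k \bigl(t_k^{\,p} - o_k^{\,p}\bigr)^2$$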
53
Adaptation hidden-output weights
54
Adaptation input-hidden weights
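The derivations on these two slides are images; as a hedged reconstruction in standard notation (learning rate η, activations a, excitations e): the hidden-output update uses a local error term δ_k, while the input-hidden update back-propagates it through the weights:

$$\Delta w_{jk} = \eta\,\delta_k\,a_j, \qquad \delta_k = (t_k - o_k)\,f'(e_k)$$
$$\Delta w_{ij} = \eta\,\delta_j\,a_i, \qquad \delta_j = f'(e_j)\,\sum_k \delta_k\,w_{jk}$$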
55
Forward and Backward Propagation
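A compact sketch of one forward and one backward pass for a one-hidden-layer MLP (sigmoid units throughout; the shapes and learning rate are illustrative assumptions):

```python
import numpy as np

def sigmoid(e):
    return 1.0 / (1.0 + np.exp(-e))

def train_step(x, t, W1, W2, eta=0.5):
    """One pattern: forward pass, then backward propagation of the error.
    W1: (n_hidden, n_inputs), W2: (n_outputs, n_hidden)."""
    # Forward: input -> hidden -> output
    h = sigmoid(W1 @ x)                        # hidden activations
    o = sigmoid(W2 @ h)                        # output activations
    # Backward: generalised delta rule, using f'(e) = f(e)(1 - f(e))
    delta_o = (t - o) * o * (1 - o)            # output error terms
    delta_h = (W2.T @ delta_o) * h * (1 - h)   # back-propagated to hidden
    W2 += eta * np.outer(delta_o, h)           # hidden-to-output update
    W1 += eta * np.outer(delta_h, x)           # input-to-hidden update
    return W1, W2
```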
56
Decision boundaries of Perceptrons
Straight lines (surfaces); only linearly separable problems
57
Decision boundaries of MLPs
Convex areas (open or closed)
58
Decision boundaries of MLPs
Combinations of convex areas
59
Learning and representing similarity
60
Alternative conception of neurons
  • Neurons do not take the weighted sum of their
    inputs (as in the perceptron), but measure the
    similarity of the weight vector to the input
    vector
  • The activation of the neuron is a measure of
    similarity. The more similar the weight is to the
    input, the higher the activation
  • Neurons represent prototypes
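A sketch of this alternative neuron: activation as the similarity between the weight vector (a prototype) and the input. The Gaussian form is an assumption; any monotonically decreasing function of distance would fit the description:

```python
import numpy as np

def prototype_activation(w, x, width=1.0):
    """Similarity-based neuron: the closer the input x is to the
    prototype w, the higher the activation (1.0 at x == w)."""
    d2 = np.sum((w - x) ** 2)                 # squared distance to the prototype
    return np.exp(-d2 / (2.0 * width ** 2))   # Gaussian similarity measure
```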

61
Coarse Coding
62
2nd order isomorphism
63
Prototypes for preprocessing
64
Kohonen's SOFM (Self-Organising Feature Map)
  • Unsupervised learning
  • Competitive learning

(Diagram: an n-dimensional input layer fully connected to an output map; the winner is marked)
65
Competitive learning
  • Determine the winner (the neuron of which the
    weight vector has the smallest distance to the
    input vector)
  • Move the weight vector w of the winning neuron
    towards the input i
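The two steps as a sketch; the rows of W hold the neurons' weight vectors, and the learning rate eta is an assumption:

```python
import numpy as np

def competitive_step(W, i, eta=0.1):
    """W: (n_neurons, n_inputs) weight matrix; i: input vector."""
    dists = np.linalg.norm(W - i, axis=1)   # distance of each weight vector to i
    winner = np.argmin(dists)               # neuron with the smallest distance
    W[winner] += eta * (i - W[winner])      # move w towards the input i
    return winner
```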

66
Kohonen's idea
  • Impose a topological order onto the competitive
    neurons (e.g., rectangular map)
  • Let neighbours of the winner share the prize
    (The postcode lottery principle.)
  • After learning, neurons with similar weights tend
    to cluster on the map
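A sketch of the full SOFM update: the winner's neighbours on the map share the prize, weighted by a Gaussian over grid distance, with learning rate and neighbourhood size decaying over time (the linear decay schedule is an illustrative assumption):

```python
import numpy as np

def som_step(W, grid, i, t, T, eta0=0.5, sigma0=3.0):
    """One update. W: (n, d) weights; grid: (n, 2) map coordinates;
    i: input vector; t: current step; T: total number of steps."""
    eta = eta0 * (1.0 - t / T)                 # decaying learning rate
    sigma = sigma0 * (1.0 - t / T) + 0.01      # decaying neighbourhood size
    winner = np.argmin(np.linalg.norm(W - i, axis=1))
    d2 = np.sum((grid - grid[winner]) ** 2, axis=1)  # squared map distances
    h = np.exp(-d2 / (2.0 * sigma ** 2))       # neighbourhood function
    W += eta * h[:, None] * (i - W)            # neighbours share the update
    return winner
```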

67
  • Topological order: neighbourhoods
  • Square grid: winner (red) and its nearest
    neighbours
  • Hexagonal grid: winner (red) and its nearest
    neighbours

68
A simple example
  • A topological map of 2 x 3 neurons and two inputs

(Diagram: two inputs connected by weights to the map; the weight vectors are visualised in input space)
69
Weights before training
70
Input patterns (note the 2D distribution)
71
Weights after training
72
Another example
  • Input: uniformly randomly distributed points
  • Output: a map of 20 × 20 neurons
  • Training: starting with a large learning rate and
    neighbourhood size, both are gradually decreased
    to facilitate convergence

73
(No Transcript)
74
Dimension reduction
75
Adaptive resolution
76
Application of SOFM
Examples (input)
SOFM after training (output)
77
Visual features (biologically plausible)
78
Relation with statistical methods 1
  • Principal Components Analysis (PCA)

(Plot: projections of the data onto the first two principal components, pca1 and pca2)
79
Relation with statistical methods 2
  • Multi-Dimensional Scaling (MDS)
  • Sammon Mapping

80
Image Mining: the right feature
81
Fractal dimension in art
Jackson Pollock (Jack the Dripper)
82
Taylor, Micolich, and Jonas (1999). Fractal
analysis of Pollock's drip paintings. Nature,
399, 422 (3 June).
(Plot: fractal dimensions; the range for natural images is marked)
83
Our Van Gogh research
  • Two painters
  • Vincent Van Gogh paints Van Gogh
  • Claude-Emile Schuffenecker paints Van Gogh

84
Sunflowers
  • Was it painted by
  • Van Gogh?
  • Schuffenecker?

85
Approach
  • Select appropriate features (skipped here, but
    very important!)
  • Apply neural networks

86
(No Transcript)
87
Training Data
  • Van Gogh (5000 textures)
  • Schuffenecker (5000 textures)
88
Results
  • Generalisation performance
  • 96% correct classification on untrained data

89
Results, cont.
  • Trained art-expert network applied to the Yasuda
    sunflowers
  • 89% of the textures are classified as a genuine
    Van Gogh

90
A major caveat
  • Not only are the painters different,
  • but also the material, and maybe many other
    things

91
(No Transcript)