Title: An Introduction to Neural Networks
1. An Introduction to Neural Networks
- Presented by Greg Eustace
- For MUMT 611
2. Overview
- Introduction to artificial neural networks
- Selected history
- Biological neural networks
- Structure of the neuron
- Transfer functions
- Characterising neural networks
- The learning process
- Applications
3. Introduction to Neural Networks
- Artificial neural network (ANN): a series of interlinked processing elements that function analogously to a biological neural network.
- Algorithms: rule-based vs. machine-learning methods
- The training stage
4. Selected history
- 1943: The first artificial neuron was produced by McCulloch and Pitts.
- 1958: Rosenblatt developed the perceptron.
- 1969: Minsky and Papert publish Perceptrons, discussing the limitations of single-layer perceptrons. Drastic funding cuts result.
- 1980s: Resurgence of interest in ANNs.
5. Biological Neural Networks
- The three basic components of the biological neuron are the cell body, the axon, and the dendrites.
- Axons carry electrical impulses received by dendrites.
- The gap between an axon and a dendrite is called the synapse.
- Two types of synapses: excitatory and inhibitory.
- If excitatory energy > inhibitory energy, the neuron fires.
- The neuron's output is its firing frequency.
Fig. 1. Biological neuron (Mehrotra, Mohan and Ranka 1997)
6. Structure of Neural Networks
- A node (or neuron) consists of any number of inputs and a single output, where the output is some function of the inputs.
- Inputs: x1, x2, x3, ..., xn
- Weights: w1, w2, w3, ..., wn
- Output: f(w1x1, w2x2, ..., wnxn)
- Weights represent synaptic efficiency (i.e., the effect of a given input on the output).
- Nodes are linked to form networks. Links represent synaptic connections.
Fig. 2. General Neuron Model (Mehrotra, Mohan and Ranka 1997)
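The node model above can be sketched in a few lines of Python. This is a minimal illustration (the function and variable names are mine, and it assumes the common special case where f is applied to the weighted sum of the inputs):

```python
def node_output(weights, inputs, f):
    """Output of a single node: f applied to the weighted sum of the inputs."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return f(net)

# With an identity transfer function, the node just computes the weighted sum.
print(node_output([0.5, -1.0, 2.0], [1.0, 1.0, 1.0], lambda net: net))  # 1.5
```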
7. Transfer Functions
- The output of a node (or network) is determined by a transfer function.
- Common types: step, ramp, and sigmoid functions.
- The step function:
- f(net) = a, if net <= c
- f(net) = b, if net > c
- where net = w1x1 + w2x2 + ... + wnxn
Fig. 3. Step function (Mehrotra, Mohan and Ranka 1997)
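A quick sketch of the step function in Python (the parameter values a = 0, b = 1, c = 0 are arbitrary example choices of mine):

```python
def step(net, a=0.0, b=1.0, c=0.0):
    """Step transfer function: a at or below the threshold c, b above it."""
    return b if net > c else a

print(step(-0.3))  # 0.0
print(step(0.7))   # 1.0
```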
8. Transfer Functions
- Ramp functions:
- f(net) = a, if net <= c
- f(net) = b, if net >= d
- f(net) = a + (net - c)(b - a)/(d - c), otherwise
Fig. 4. Ramp function (Mehrotra, Mohan and Ranka 1997)
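The ramp can be sketched the same way (again with arbitrary example parameters):

```python
def ramp(net, a=0.0, b=1.0, c=0.0, d=1.0):
    """Ramp transfer function: a below c, b above d, linear in between."""
    if net <= c:
        return a
    if net >= d:
        return b
    return a + (net - c) * (b - a) / (d - c)

print(ramp(-2.0))  # 0.0
print(ramp(0.25))  # 0.25
print(ramp(3.0))   # 1.0
```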
9. Transfer Functions
- Sigmoid functions: continuous, everywhere differentiable, rotationally symmetric, and asymptotic.
Fig. 5. Sigmoid function (Mehrotra, Mohan and Ranka 1997)
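The logistic function is one common sigmoid; a minimal sketch:

```python
import math

def sigmoid(net):
    """Logistic sigmoid: continuous, everywhere differentiable, asymptotic
    to 0 and 1, and rotationally symmetric about the point (0, 0.5)."""
    return 1.0 / (1.0 + math.exp(-net))

print(sigmoid(0.0))  # 0.5
# Rotational symmetry about (0, 0.5) means sigmoid(x) + sigmoid(-x) = 1.
```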
10. Characterising ANNs
- Single- vs. multi-layer networks
- Types of layers: input, output, and hidden
Fig. 6. Multi-layer network (Mehrotra, Mohan and Ranka 1997)
11. Characterising ANNs
- Fully connected networks
- Acyclic networks
- Feedforward networks (i.e., multi-layer perceptrons)
- Feedback networks
Fig. 7. Feedforward neural network (Mehrotra, Mohan and Ranka 1997)
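A feedforward network computes its output layer by layer, each layer feeding the next. A minimal two-layer sketch (the layer sizes and weight values here are arbitrary choices of mine, not from the presentation):

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def layer(inputs, weights, f):
    """One fully connected layer: each row of weights drives one node."""
    return [f(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# 2 inputs -> 2 hidden nodes -> 1 output node.
hidden = layer([0.5, -0.5], [[1.0, 1.0], [1.0, -1.0]], sigmoid)
output = layer(hidden, [[1.0, -1.0]], sigmoid)
print(output)  # a single value strictly between 0 and 1
```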
12. The learning process
- Learning is the process of adjusting the weights between nodes to obtain a desired output.
- Supervised learning
- Perceptron: a machine that classifies inputs according to a linear function.
- Unsupervised learning
- Correlation (or Hebbian) learning
- Competitive learning
- Learning algorithms
- ADALINE (uses least-squared error)
- Backpropagation
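For the supervised case, the classic perceptron learning rule adjusts the weights by the classification error on each training example. A sketch (not taken from the slides; the AND task and parameter values are my own illustration):

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Perceptron rule: nudge weights and bias toward each (inputs, target)
    pair, where targets are 0 or 1 and the node uses a step function."""
    w = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            bias += lr * err
    return w, bias

# Logical AND is linearly separable, so the perceptron can learn it.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
       for x, _ in data])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the rule is guaranteed to converge; XOR, by contrast, would not converge, which is exactly the single-layer limitation Minsky and Papert pointed out.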
13. Applications
- Classification
- Knowledge of the class structure is pre-existing.
- Genre, melody, rhythm, timbre, and gesture classification
- Clustering
- Pattern recognition
- Two types: auto-association and hetero-association
- Auto-association: the input pattern is assumed to be a corrupted form of the desired output.
- Hetero-association: the input pattern is associated with an arbitrary output pattern.
- Audio restoration (e.g., detecting clicks and scratches in vinyl)
- Biofeedback (e.g., gesture-to-speech translation as in Glove-TalkII)
- OCR (e.g., recognition of note heads and stems in printed scores)
- Note onset detection
14. Applications
- Function approximation
- Developing perceptual models (?)
- Forecasting
- Algorithmic composition (success stories?)
- Optimisation
15. Reference
- Mehrotra, K., C. Mohan, and S. Ranka. 1997. Elements of Artificial Neural Networks. Cambridge, Massachusetts: The MIT Press.