1
An Introduction to Neural Networks
  • Presented by Greg Eustace
  • For MUMT 611

2
Overview
  • Introduction to artificial neural networks
  • Selected history
  • Biological neural networks
  • Structure of neuron
  • Transfer functions
  • Characterising neural networks
  • The learning process
  • Applications

3
Introduction to Neural Networks
  • Artificial neural network (ANN): a system of
    interlinked processing elements that functions
    analogously to a biological neural network.
  • Algorithms: rule-based vs. machine learning
    methods
  • The training stage

4
Selected history
  • 1943: The first artificial neuron model was
    proposed by McCulloch and Pitts.
  • 1958: Rosenblatt developed the perceptron.
  • 1969: Minsky and Papert publish Perceptrons,
    discussing the limitations of single-layer
    perceptrons. Drastic funding cuts result.
  • 1980s: Resurgence of interest in ANNs.

5
Biological Neural Networks
  • Three basic components of the biological neuron
    are the cell body, the axon and dendrites.
  • Axons carry electrical impulses received by
    dendrites.
  • The gap between an axon and a dendrite is called
    the synapse.
  • Two types of synapses are excitatory and
    inhibitory.
  • If excitatory energy > inhibitory energy, the
    neuron fires.
  • The neuron's output is its firing frequency.

Fig. 1. Biological neuron (Mehrotra, Mohan and
Ranka 1997)
6
Structure of Neural Networks
  • A node (or neuron) consists of any number of
    inputs and a single output, where the output is
    some function of the inputs.
  • Inputs: x1, x2, x3, ..., xn
  • Weights: w1, w2, w3, ..., wn
  • Output: f (w1x1 + w2x2 + ... + wnxn)
  • Weights represent synaptic efficiency (i.e., the
    effect of a given input on the output).
  • Nodes are linked to form networks. Links
    represent synaptic connections.

Fig. 2. General Neuron Model (Mehrotra, Mohan and
Ranka 1997)
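The weighted-sum node described above can be sketched in a few lines of Python (the function names and example values are illustrative, not from the slides):

```python
# Minimal sketch of the general neuron model: the output is a transfer
# function f applied to the weighted sum of the inputs.
def neuron_output(inputs, weights, transfer):
    """Compute f(w1*x1 + w2*x2 + ... + wn*xn)."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return transfer(net)

# Example: a step transfer with threshold 0.5; the neuron "fires" (1)
# because net = 0.4*1.0 + 0.9*0.0 + 0.3*1.0 = 0.7 > 0.5.
fired = neuron_output([1.0, 0.0, 1.0], [0.4, 0.9, 0.3],
                      lambda net: 1 if net > 0.5 else 0)
```

The transfer function is passed in as a parameter, mirroring how the following slides treat step, ramp and sigmoid functions interchangeably.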
7
Transfer Functions
  • The output of a node (or network) is determined
    by a transfer function.
  • Common types: step, ramp and sigmoid functions.
  • The step function:
  • f(net) = a, if net < c
  •          b, if net > c
  • where net = w1x1 + w2x2 + ... + wnxn

Fig. 3. Step function (Mehrotra, Mohan and Ranka
1997)
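As a sketch, the step function is a one-line threshold (parameter names follow the slide; the defaults are my own):

```python
def step(net, a=0, b=1, c=0.0):
    """Step transfer function: output a below threshold c, b above it."""
    return a if net < c else b
```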
8
Transfer Functions
  • Ramp functions:
  • f(net) = a, if net < c
  •          b, if net > d
  •          a + (net - c) (b - a)/(d - c), otherwise

Fig. 4. Ramp function (Mehrotra, Mohan and Ranka
1997)
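A sketch of the ramp: constant outside [c, d], linear interpolation between a and b inside it (defaults are illustrative):

```python
def ramp(net, a=0.0, b=1.0, c=0.0, d=1.0):
    """Ramp transfer: a below c, b above d, linear in between."""
    if net < c:
        return a
    if net > d:
        return b
    return a + (net - c) * (b - a) / (d - c)
```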
9
Transfer Functions
  • Sigmoid functions: continuous, everywhere
    differentiable, rotationally symmetric and
    asymptotic.

Fig. 5. Sigmoid function (Mehrotra, Mohan and
Ranka 1997)
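The logistic function is one common sigmoid with these properties (asymptotic to 0 and 1, symmetric about net = 0); a minimal sketch:

```python
import math

def sigmoid(net):
    """Logistic sigmoid: smooth, everywhere differentiable,
    asymptotic to 0 as net -> -inf and to 1 as net -> +inf."""
    return 1.0 / (1.0 + math.exp(-net))
```

Its differentiability is what later makes gradient-based learning such as backpropagation possible, unlike the discontinuous step function.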
10
Characterising ANNs
  • Single vs. multi-layer networks
  • Types of layers: input, output and hidden layers

Fig. 6. Multi-layer network (Mehrotra, Mohan and
Ranka 1997)
11
Characterising ANNs
  • Fully connected networks
  • Acyclic networks
  • Feedforward networks (i.e., multi-layer
    perceptrons).
  • Feedback networks

Fig. 7. Feedforward Neural Network (Mehrotra,
Mohan and Ranka 1997)
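A feedforward pass can be sketched as repeated application of the node computation, layer by layer; the layout (one weight matrix per layer, one row per node) and the sigmoid choice are my own illustration, not from the slides:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def forward(layers, inputs):
    """Forward pass through a fully connected feedforward network.
    layers: one weight matrix per layer, one row of weights per node."""
    activations = inputs
    for weights in layers:
        activations = [sigmoid(sum(w * x for w, x in zip(row, activations)))
                       for row in weights]
    return activations

# A 2-input, 2-hidden-node, 1-output network with all weights set to 1.0
out = forward([[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0]]], [0.0, 0.0])
```

Signals flow strictly forward here; a feedback network would additionally route outputs back as later inputs.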
12
The learning process
  • Learning is the process of adjusting the weights
    between nodes to obtain a desired output.
  • Supervised learning
  • Perceptron: a machine that classifies inputs
    according to a linear function.
  • Unsupervised learning
  • Correlation (or Hebbian) learning
  • Competitive learning
  • Learning algorithms
  • ADALINE (uses least-squared error)
  • Backpropagation
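Supervised weight adjustment can be illustrated with the classic perceptron learning rule: nudge each weight toward the desired output whenever the classification is wrong. This is a hedged sketch; the function names, learning rate and the AND-gate data are my own, not from the slides:

```python
def train_perceptron(samples, n_inputs, rate=0.1, epochs=50):
    """Adjust weights toward the desired output on each misclassification."""
    w = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
            error = target - out  # 0 when correct; +/-1 when wrong
            w = [wi + rate * error * xi for wi, xi in zip(w, x)]
            bias += rate * error
    return w, bias

# Learn the AND function, which is linearly separable
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data, 2)
```

Because a perceptron computes a single linear function, it converges only on linearly separable problems; training multi-layer networks requires an algorithm such as backpropagation instead.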

13
Applications
  • Classification
  • knowledge of the class structure is pre-existing
  • Genre, melody, rhythm, timbre and gesture
    classification.
  • Clustering
  • Pattern recognition
  • Two types: auto-association and hetero-association
  • Auto-association: the input pattern is assumed to
    be a corrupted form of the desired output.
  • Hetero-association: the input pattern is
    associated with an arbitrary output pattern.
  • Audio restoration (e.g., detecting clicks and
    scratches in vinyl).
  • Biofeedback (e.g., gesture-to-speech translation
    as in Glove-TalkII)
  • OCR (e.g., recognition of note heads and stems in
    printed scores).
  • Note onset detection

14
Applications
  • Function approximation
  • Developing perceptual models (?)
  • Forecasting
  • Algorithmic composition (success stories?)
  • Optimisation

15
Reference
  • Mehrotra, K., C. Mohan, and S. Ranka. 1997.
    Elements of Artificial Neural Networks.
    Cambridge, MA: The MIT Press.