Title: Feed-Forward Neural Networks

Transcript and Presenter's Notes

1
Feed-Forward Neural Networks

2
Content
  • Introduction
  • Multilayer Perceptron
  • Back Propagation Learning Algorithm

3
Feed-Forward Neural Networks
  • Introduction

4
Artificial Neural Networks
  • Simulate the behavior of the human brain.
  • A new generation of information-processing systems.

5
Applications
  • Pattern Matching
  • Pattern Recognition
  • Associative Memory (Content-Addressable Memory)
  • Function Approximation
  • Learning
  • Optimization
  • Vector Quantization
  • Data Clustering

6
Applications
Traditional computers are inefficient at these
tasks, despite their fast raw computation speed.
  • Pattern Matching
  • Pattern Recognition
  • Associative Memory (Content-Addressable Memory)
  • Function Approximation
  • Learning
  • Optimization
  • Vector Quantization
  • Data Clustering

7
Processing Units of an ANN
  • The Configuration of ANNs
  • An ANN consists of a large number of interconnected
    processing elements called neurons.
  • A human brain consists of about 10^11 neurons of many
    different types.
  • How does an ANN work?
  • Through collective behavior.

8
The Biological Neurons
9
The Artificial Neurons
10
The Artificial Neurons
wij > 0: excitatory; wij < 0: inhibitory; wij = 0: no connection; θi: bias.
Proposed by McCulloch and Pitts (1943); called M-P neurons.
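In the usual notation (reconstructed here, since the slide's formula did not survive the transcript), an M-P neuron computes a hard-limited weighted sum:

```latex
y_i = f\Big(\sum_j w_{ij}\, x_j - \theta_i\Big),
\qquad
f(net) = \begin{cases} 1, & net \ge 0 \\ 0, & net < 0 \end{cases}
```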
11
What Can a Neuron Do?
  • A hard limiter.
  • A binary threshold unit.
  • Hyperspace separation.

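As a minimal sketch of the idea in Python (the AND weights below are illustrative, not from the slides): a single threshold unit hard-limits a weighted sum, and the set where that sum crosses the threshold is a hyperplane separating the input space.

```python
import numpy as np

def mp_neuron(x, w, theta):
    """M-P unit: hard limiter (binary threshold) on the weighted sum."""
    return 1 if np.dot(w, x) - theta >= 0 else 0

# Illustrative choice: w = (1, 1), theta = 1.5 computes logical AND;
# the line x1 + x2 = 1.5 splits the plane into the 0 and 1 half-planes.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mp_neuron(np.array(x, dtype=float), np.array([1.0, 1.0]), 1.5))
```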
12
What Can an ANN Do?
  • A neurally inspired mathematical model.
  • Consists of a large number of highly interconnected
    PEs.
  • Its connections (weights) hold the knowledge.
  • The response of each PE depends only on local
    information.
  • Its collective behavior demonstrates its
    computational power.
  • It has learning, recall, and generalization
    capabilities.

13
Basic Entities of an ANN
  • Models of neurons or PEs.
  • Models of synaptic interconnections and
    structures.
  • Training or learning rules.

14
Feed-Forward Neural Networks
  • Multilayer Perceptron

15
Single-Layer Perceptron
Training Set
16
Single-Layer Perceptron
Training Set
What can it do?
What can it not do?
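A sketch of the standard perceptron learning rule over the training set (notation assumed, not read off the slides): for each pair (x^(k), d^(k)),

```latex
y^{(k)} = f\big(\mathbf{w}^\top \mathbf{x}^{(k)} - \theta\big),
\qquad
\mathbf{w} \leftarrow \mathbf{w} + \eta\,\big(d^{(k)} - y^{(k)}\big)\,\mathbf{x}^{(k)}
```

This answers both questions: the rule converges whenever the training set is linearly separable (what it can do), and no single-layer perceptron can represent a non-linearly-separable mapping such as XOR (what it cannot).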
17
Multilayer Perceptron
Output Layer
Hidden Layer
Input Layer
18
Multilayer Perceptron
Where does the knowledge come from?
[Figure: input flows through learning into the network; the trained network produces output for classification and analysis.]
19
How Does an MLP Work?
Example: the XOR problem
  • XOR is not linearly separable.
  • Is a single-layer perceptron workable?
20
How Does an MLP Work?
Example
[Figure: two hidden-unit hyperplanes split the inputs into regions coded 00, 01, 11.]
21
How Does an MLP Work?
Example
[Figure: the four XOR inputs fall into the three regions 00, 01, 11.]
22
How Does an MLP Work?
Example
[Figure: in hidden-unit space, region 01 (output 1) is linearly separable from regions 00 and 11 (output 0).]
23
How Does an MLP Work?
Example
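A hand-wired 2-2-1 network realizing this construction, as a sketch in Python (these particular weights and thresholds are one standard choice, assumed rather than taken from the slides):

```python
import numpy as np

def step(net):
    """Hard-limiter activation."""
    return (net >= 0).astype(int)

def xor_mlp(x1, x2):
    x = np.array([x1, x2], dtype=float)
    # Hidden layer: two parallel hyperplanes cut the square into three regions.
    # Unit 1 fires when x1 + x2 >= 0.5; unit 2 fires when x1 + x2 >= 1.5,
    # giving region codes 00, 10, 11 (the slides write them as 00, 01, 11).
    h = step(np.array([[1.0, 1.0], [1.0, 1.0]]) @ x - np.array([0.5, 1.5]))
    # Output layer: fire only in the middle region (unit 1 on, unit 2 off).
    return int(step(np.array([1.0, -1.0]) @ h - 0.5))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"XOR({a},{b}) = {xor_mlp(a, b)}")
```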
24
Parity Problem
Is the problem linearly separable?
25
Parity Problem
[Figure: hyperplanes P1, P2, P3 in the (x1, x2, x3) input cube.]
26
Parity Problem
[Figure: the hidden layer maps inputs with 0, 1, 2, or 3 ones to the codes 000, 001, 011, 111.]
27
Parity Problem
[Figure: the four codes lie along a line in hidden-unit space.]
28
Parity Problem
[Figure: one further hyperplane, P4, separates the odd-parity codes 001 and 111 from the even-parity codes 000 and 011.]
29
Parity Problem
[Figure: the complete network, with P4 as the output unit.]
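The construction these slides suggest, in Python (thresholds and output weights follow the standard counting argument; they are assumed, not read off the figures):

```python
import numpy as np

def step(net):
    return (net >= 0).astype(int)

def parity3(bits):
    """3-bit parity with a 3-3-1 threshold network.

    Hidden units P1-P3 count the ones: unit j fires when the count
    reaches j, so inputs with 0, 1, 2, 3 ones get codes 000, 100, 110, 111.
    """
    x = np.asarray(bits, dtype=float)
    h = step(np.ones((3, 3)) @ x - np.array([0.5, 1.5, 2.5]))
    # Output unit P4: alternating weights fire exactly on the odd counts.
    return int(step(np.array([1.0, -1.0, 1.0]) @ h - 0.5))

for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    print(bits, "->", parity3(bits))
```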
30
General Problem
31
General Problem
32
Hyperspace Partition
33
Region Encoding
[Figure: hyperplanes L1, L2, L3 partition the space into regions coded 000, 001, 010, 100, 101, 110, 111.]
34
Hyperspace Partition & Region Encoding Layer
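A sketch of what this layer computes: each hidden unit is a hyperplane, and the vector of hard-limiter outputs is the binary code of the region containing the input. The three hyperplanes below are invented for illustration.

```python
import numpy as np

# Illustrative hyperplanes L1, L2, L3 of the form w.x - theta = 0.
W = np.array([[1.0, 0.0],    # L1: vertical line   x1 = 0.3
              [0.0, 1.0],    # L2: horizontal line x2 = 0.5
              [1.0, 1.0]])   # L3: diagonal line   x1 + x2 = 1.2
theta = np.array([0.3, 0.5, 1.2])

def region_code(p):
    """One bit per hyperplane: which side of L1, L2, L3 the point p lies on."""
    return ''.join(str(b) for b in ((W @ p - theta) >= 0).astype(int))

print(region_code(np.array([0.9, 0.9])))  # '111'
print(region_code(np.array([0.1, 0.1])))  # '000'
```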
35–41
Region Identification Layer
[Slides 35–41 step through a figure sequence building this layer; the figures are not preserved in the transcript.]
42
Classification
[Figure: the output layer maps each identified region to its class label, 0 or 1.]
43
Feed-Forward Neural Networks
  • Back Propagation Learning Algorithm

44
Activation Function: Sigmoid
Remember this activation; the derivative identity sketched below is used throughout the derivation.
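The formula itself did not survive the transcript; the usual unipolar sigmoid and the derivative identity worth remembering are:

```latex
a(net) = \frac{1}{1 + e^{-\lambda\, net}},
\qquad
a'(net) = \lambda\, a(net)\,\big(1 - a(net)\big)
```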
45
Supervised Learning
Training Set
Output Layer
Hidden Layer
Input Layer
46
Supervised Learning
Training Set
Sum of Squared Errors (SSE)
Goal: minimize the SSE over the training set.
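Written out in standard notation (the slide's equation is not preserved), the cost over training pairs (x^(k), d^(k)) is

```latex
E = \frac{1}{2} \sum_{k} \sum_{o\,\in\,\text{outputs}}
    \Big( d_o^{(k)} - y_o^{(k)} \Big)^{2}
```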
47
Back Propagation Learning Algorithm
  • Learning on Output Neurons
  • Learning on Hidden Neurons

48
Learning on Output Neurons
49
Learning on Output Neurons
How the error term is computed depends on the activation function.
50
Learning on Output Neurons
Using the sigmoid,
51
Learning on Output Neurons
Using the sigmoid,
52
Learning on Output Neurons
53
Learning on Output Neurons
How do we train the weights connecting to the output
neurons? (See the summary sketch below.)
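Reconstructing the lost equations in standard notation (taking λ = 1): for the weight w_jo from hidden unit j to output o, the chain rule factors the gradient through net_o, and the sigmoid derivative from slide 44 gives the error signal in closed form:

```latex
\frac{\partial E}{\partial w_{jo}}
  = \frac{\partial E}{\partial net_o}\,
    \frac{\partial net_o}{\partial w_{jo}}
  = -\,\delta_o\, y_j,
\qquad
\delta_o = (d_o - y_o)\, y_o\,(1 - y_o),
\qquad
\Delta w_{jo} = \eta\, \delta_o\, y_j
```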
54
Learning on Hidden Neurons
55
Learning on Hidden Neurons
56
Learning on Hidden Neurons
57
Learning on Hidden Neurons
58
Learning on Hidden Neurons
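Again in standard notation, since the slide equations were lost: a hidden unit has no target of its own, so its error signal is back-propagated through the output weights:

```latex
\delta_j = y_j\,(1 - y_j) \sum_{o} \delta_o\, w_{jo},
\qquad
\Delta w_{ij} = \eta\, \delta_j\, x_i
```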
59
Back Propagation
60
Back Propagation
61
Back Propagation
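Putting the two rules together, a minimal NumPy sketch of back-propagation trained on XOR (network size, learning rate, and epoch count are illustrative choices, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

# 2-2-1 network, small random initial weights, biases at zero.
W1, b1 = rng.normal(0.0, 1.0, (2, 2)), np.zeros(2)
W2, b2 = rng.normal(0.0, 1.0, (2, 1)), np.zeros(1)
eta = 0.5

for epoch in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)               # hidden outputs y_j
    y = sigmoid(h @ W2 + b2)               # network outputs y_o
    # Error signals: delta_o = (d - y) y (1 - y) on the output layer,
    # delta_j = y_j (1 - y_j) sum_o delta_o w_jo on the hidden layer.
    delta_o = (D - y) * y * (1.0 - y)
    delta_h = h * (1.0 - h) * (delta_o @ W2.T)
    # Gradient-descent updates: Delta w = eta * delta * input.
    W2 += eta * h.T @ delta_o;  b2 += eta * delta_o.sum(axis=0)
    W1 += eta * X.T @ delta_h;  b1 += eta * delta_h.sum(axis=0)

print(np.round(y.ravel(), 2))  # should approach [0, 1, 1, 0]
```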
62
Learning Factors
  • Initial Weights
  • Learning Constant (η)
  • Cost Functions
  • Momentum (see the sketch after this list)
  • Update Rules
  • Training Data and Generalization
  • Number of Layers
  • Number of Hidden Nodes
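Of these, momentum has a one-line sketch in its usual form (assumed here): the current step blends in a fraction of the previous one,

```latex
\Delta w(t) = \eta\, \delta\, y \;+\; \alpha\, \Delta w(t-1),
\qquad 0 \le \alpha < 1
```

which damps oscillation across steep ravines of the cost surface and accelerates progress along flat ones.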

63
Reading Assignments
  • Shi Zhong and Vladimir Cherkassky, "Factors
    Controlling Generalization Ability of MLP
    Networks," in Proc. IEEE Int. Joint Conf. on
    Neural Networks, vol. 1, pp. 625-630, Washington,
    DC, July 1999. (http://www.cse.fau.edu/zhong/pubs.htm)
  • Rumelhart, D. E., Hinton, G. E., and Williams, R. J.
    (1986), "Learning Internal Representations by Error
    Propagation," in Parallel Distributed Processing:
    Explorations in the Microstructure of Cognition,
    vol. 1, D. E. Rumelhart, J. L. McClelland, and the
    PDP Research Group, Eds. MIT Press, Cambridge, MA.
    (http://www.cnbc.cmu.edu/plaut/85-419/papers/RumelhartETAL86.backprop.pdf)