Transcript and Presenter's Notes

Title: Lecture 6. Perceptron: The Simplest Neural Network


1
Lecture 6. Perceptron: The Simplest Neural Network

Saint-Petersburg State Polytechnic University, Distributed Intelligent Systems Dept.
ARTIFICIAL NEURAL NETWORKS
  • Prof. Dr. Viacheslav P. Shkodyrev
  • E-mail: shkodyrev@imop.spbstu.ru

The perceptron is one of the first and simplest
artificial neural networks, presented in the
mid-1950s. It was the first mathematical model
to demonstrate the new paradigm of a machine-learning
computational environment, and a threshold-logic-unit
model for classification tasks.
2
The perceptron was proposed by Rosenblatt (1958)
as the first model for learning with a teacher,
i.e. supervised learning. The model is an
enhancement of threshold logic units (TLUs),
which are used for the classification of patterns
said to be linearly separable. How can this model
be formalized and interpreted?
  • Objective of Lecture 6
  • This lecture introduces the simplest class
    of neural networks, the perceptron, and its
    application to pattern classification. The
    functionality of the perceptron and the
    threshold-logic-unit model can be interpreted
    geometrically via a separating hyperplane. In
    this lecture we define the perceptron learning
    rule, explain perceptron networks and their
    training algorithms, and discuss the limitations
    of perceptron networks. You will learn:
  •      What a single-layer perceptron is, via the
    threshold-logic-unit model
  •      Perceptrons as linear and non-linear
    classifiers, via threshold logic theory
  •      Multi-layer perceptron networks
  •      Perceptron learning rules

3
It was the perceptron that first demonstrated a
new paradigm of trainable computational algorithms.
We solve a classification task when we assign an
image, represented by a feature vector, to one of
two classes, which we shall denote A and B, so
that class A corresponds to the character a and
class B corresponds to the character b. Using the
perceptron training algorithm, we can classify two
linearly separable classes, as sketched below.
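A minimal sketch of this training procedure in Python; the data, learning rate, and epoch count are illustrative assumptions, not values from the slides:

```python
import numpy as np

def train_perceptron(X, d, eta=0.1, epochs=100):
    """Perceptron training: adjust w whenever a pattern is misclassified."""
    w = np.zeros(X.shape[1] + 1)                   # weights plus bias, start at zero
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a constant bias input of 1
    for _ in range(epochs):
        errors = 0
        for x, target in zip(Xb, d):
            y = 1 if w @ x >= 0 else -1            # threshold (signum) activation
            if y != target:                        # quantized error is nonzero
                w += eta * (target - y) * x
                errors += 1
        if errors == 0:                            # converged: all patterns correct
            break
    return w

# Two linearly separable clusters in the plane (illustrative data)
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
d = np.array([1, 1, -1, -1])                       # desired responses for A and B
w = train_perceptron(X, d)
print(w)  # defines a separating line w1*x1 + w2*x2 + b = 0
```

For linearly separable data this loop terminates with a separating hyperplane (the perceptron convergence theorem).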
4
Architecture of SLP
The single-layer perceptron was the first and
simplest model to generate great interest, due to
its ability to generalize from its training
vectors and to learn from initially randomly
distributed connections.
(Figure: input pattern x fed to a layer of threshold units.)
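As a sketch of this architecture, a single layer of threshold logic units mapping an input pattern x to binary outputs; the weights and inputs below are illustrative assumptions, not values from the slide:

```python
import numpy as np

def slp_forward(x, W, b):
    """One layer of threshold logic units: y_j = step(sum_i w_ji * x_i + b_j)."""
    u = W @ x + b                 # post-synaptic potentials
    return (u >= 0).astype(int)   # hard-limit (step) activation

W = np.array([[0.5, -0.3],        # weights of neuron 1 (illustrative)
              [0.2,  0.8]])       # weights of neuron 2 (illustrative)
b = np.array([-0.1, 0.05])
print(slp_forward(np.array([1.0, 0.5]), W, b))  # e.g. [1 1]
```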
5
Geometric interpretation of Threshold Logic Units
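The slide's figure is not reproduced in this transcript; in symbols, the standard geometric picture is the following (notation assumed, not taken verbatim from the slide):

```latex
% A threshold logic unit compares a weighted sum against a threshold:
\[
  y = H\!\left(\mathbf{w}^{\mathsf{T}}\mathbf{x} + b\right),
  \qquad
  H(u) = \begin{cases} 1, & u \ge 0,\\ 0, & u < 0. \end{cases}
\]
% The decision boundary w^T x + b = 0 is a hyperplane: patterns on one
% side of it are assigned to one class, patterns on the other side to
% the other class.
```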
6
Exclusive-OR (XOR) Gate Problem (Minsky,
Papert, 1969)
Solution of the Exclusive-OR (XOR) gate problem
A linearly separable surface cannot solve the
exclusive-OR gate classification task.
XOR logic:
  x1 x2 | y
  0  0  | 0
  0  1  | 1
  1  0  | 1
  1  1  | 0
The problem is overcome with a multi-layer
network, as the sketch below shows.
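A minimal sketch of such a two-layer solution, with hand-set illustrative weights (an OR unit and an AND unit in the hidden layer; these are not the weights from the slide):

```python
import numpy as np

def step(u):
    return (np.asarray(u) >= 0).astype(int)

def two_layer_xor(x):
    # Hidden layer: h1 = OR(x1, x2), h2 = AND(x1, x2) via hand-set weights
    h = step(np.array([[1.0, 1.0],
                       [1.0, 1.0]]) @ x + np.array([-0.5, -1.5]))
    # Output layer: XOR = OR AND (NOT AND), thresholded
    return step(np.array([1.0, -1.0]) @ h - 0.5)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, int(two_layer_xor(np.array(x, dtype=float))))
# prints 0, 1, 1, 0 -- the XOR truth table
```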
7
With a one-input, two-layer perceptron we obtain a
closed separable region by cutting out a boundary.
(Figure: two first-layer units, u1(1) = w1(1) x + b
and u2(1) = w2(1) x + b, are thresholded and
combined through second-layer weights w1(2), w2(2)
into the potential u(2) and the output y.)
This yields a closed separable boundary in the
1-dimensional space of x; a sketch follows.
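A sketch of this construction with illustrative weights: two first-layer units with opposite slopes, ANDed by the second layer, cut a closed interval out of the x axis:

```python
import numpy as np

def step(u):
    return (np.asarray(u) >= 0).astype(int)

def closed_interval(x, a=-1.0, b=2.0):
    """y = 1 exactly for a <= x <= b (bounds a, b are illustrative)."""
    u1 = step( 1.0 * x - a)     # fires for x >= a   (first-layer weight +1)
    u2 = step(-1.0 * x + b)     # fires for x <= b   (first-layer weight -1)
    return step(u1 + u2 - 1.5)  # second layer ANDs the two conditions

for x in [-2.0, 0.0, 1.5, 3.0]:
    print(x, int(closed_interval(x)))  # 0 1 1 0 -- a closed region in 1-D
```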
8
With a two-input, two-layer perceptron net we can
realize a convex separable surface.
(Figure: layer 1 units define half-planes; layer 2
combines them.)
Convex separable boundary in 2-dimensional space;
a sketch follows.
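A sketch with illustrative weights: each first-layer unit tests one half-plane, and the second layer ANDs them, yielding a convex (here triangular) region:

```python
import numpy as np

def step(u):
    return (np.asarray(u) >= 0).astype(int)

# Layer 1: three half-plane tests (illustrative weights and offsets)
W1 = np.array([[ 1.0,  0.0],   # x1 >= 0
               [ 0.0,  1.0],   # x2 >= 0
               [-1.0, -1.0]])  # x1 + x2 <= 3
b1 = np.array([0.0, 0.0, 3.0])

def convex_region(x):
    h = step(W1 @ x + b1)       # which side of each hyperplane?
    return step(h.sum() - 2.5)  # layer 2: fires only if all three units fire

print(int(convex_region(np.array([1.0, 1.0]))))  # 1: inside the triangle
print(int(convex_region(np.array([4.0, 4.0]))))  # 0: outside
```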
9
With a three-layer, two-input perceptron net we can
realize a non-convex separable surface: a complex,
concave separable region built by combining convex
pieces, as sketched below.
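A sketch with illustrative weights: layers 1 and 2 build convex boxes as above, and layer 3 ORs them into an L-shaped, non-convex region:

```python
import numpy as np

def step(u):
    return (np.asarray(u) >= 0).astype(int)

def box(x, lo, hi):
    """Layers 1-2: an axis-aligned box (convex) via AND of 4 half-planes."""
    h = np.concatenate([step(x - lo), step(hi - x)])
    return step(h.sum() - 3.5)      # all four half-plane tests must pass

def non_convex(x):
    """Layer 3: OR of two overlapping boxes -> an L-shaped (concave) region."""
    c1 = box(x, np.array([0.0, 0.0]), np.array([3.0, 1.0]))
    c2 = box(x, np.array([0.0, 0.0]), np.array([1.0, 3.0]))
    return step(c1 + c2 - 0.5)      # fires if either convex part fires

print(int(non_convex(np.array([2.0, 0.5]))))  # 1: in the horizontal arm
print(int(non_convex(np.array([0.5, 2.0]))))  # 1: in the vertical arm
print(int(non_convex(np.array([2.0, 2.0]))))  # 0: in the concave notch
```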
10
The aim of learning is to minimize the
instantaneous squared error of the output signal,
E(k) = (1/2) e^2(k),
where e(k) = d(k) - y(k) is the difference between
the desired response d(k) and the output y(k).
Solution of the task: three classical rules,
sketched in code after this list:
  • Widrow-Hoff learning rule (delta rule), based
    on state-error minimization
  • Rosenblatt's learning rule, based on
    quantized-error minimization
  • Modified Rosenblatt's learning rule, based on
    non-quantized-error minimization
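A minimal sketch of the Widrow-Hoff (delta) rule, assuming a linear output and an illustrative learning rate:

```python
import numpy as np

def delta_rule_step(w, x, d, eta=0.05):
    """One Widrow-Hoff update: minimize E = (1/2) e^2 with e = d - w.x."""
    e = d - w @ x            # non-quantized (linear) output error
    return w + eta * e * x   # gradient descent step, since dE/dw = -e * x

w = np.zeros(3)
x = np.array([1.0, -0.5, 1.0])   # last component acts as a bias input
for _ in range(50):
    w = delta_rule_step(w, x, d=1.0)
print(w @ x)  # approaches the desired response d = 1.0
```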
11
The first original perceptron learning rule for
adjusting the weights was developed by Rosenblatt.
We define the cost function via the quantized
error e:
E(k) = (1/2) sum_j e_j^2(k),
where e_j = d_j - y_j and y_j is the quantized
(thresholded) output. The weight change is then
delta w_ji(k) = eta * e_j(k) * x_i(k),
or, in vector form, delta W(k) = eta * e(k) x^T(k),
where e is the vector of quantized errors with
elements e_j.
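A minimal sketch of one such update for a layer of TLUs; the layer sizes, inputs, and learning rate are illustrative assumptions:

```python
import numpy as np

def rosenblatt_update(W, b, x, d, eta=0.1):
    """One Rosenblatt step driven by the quantized error vector e."""
    y = (W @ x + b >= 0).astype(float)  # quantized (0/1) outputs
    e = d - y                           # quantized error vector, elements e_j
    W = W + eta * np.outer(e, x)        # delta w_ji = eta * e_j * x_i
    b = b + eta * e
    return W, b

W = np.zeros((2, 3)); b = np.zeros(2)   # illustrative 3-input, 2-output layer
x = np.array([1.0, 0.0, -1.0]); d = np.array([1.0, 0.0])
W, b = rosenblatt_update(W, b, x, d)
print(W, b)
```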
12
Applying an algebraic transformation, we obtain the
final form of the update equation. For the modified
rule, the cost function is defined via the
non-quantized error e, i.e. the error of the
activation before thresholding.
13
Learning in an SLP illustrates a supervised
learning rule, which aims to assign the input
patterns x1, x2, ..., xp to one of the
prespecified classes or categories; the desired
response of the perceptron for every class is
known in advance.
14
Rosenblatt's learning rule realizes the weight
change as
w(k+1) = w(k) + eta * e(k) * x(k),
where e(k) is the quantized error defined above.
15
  • Minsky M. L., Papert S. A. Perceptrons,
    Expanded Edition: An Introduction to
    Computational Geometry. MIT Press, 1988.
  • Haykin S. Neural Networks: A Comprehensive
    Foundation. Macmillan, New York, 1994.
  • Fausett L. Fundamentals of Neural Networks:
    Architectures, Algorithms, and Applications.
    Prentice Hall, 1994.
  • Cichocki A., Unbehauen R. Neural Networks for
    Optimization and Signal Processing. Wiley, 1993.