Single-Layer Perceptrons - PowerPoint PPT Presentation

Transcript and Presenter's Notes

1
  • Single-Layer Perceptrons
  • (Sections 3.7 - 3.9)
  • CS679 Lecture Note
  • Compiled by Youngshin Kim
  • AILAB., CSD., KAIST
  • 99.3.29.

2
Learning-Rate Annealing Schedules (I)
  • When the learning rate is large, the trajectory may follow a zigzagging path
  • When it is small, the procedure may be slow
  • Simplest choice: keep the learning-rate parameter constant, η(n) = η0
  • Stochastic approximation: a time-varying rate, η(n) = c/n
  • when c is large, there is a danger of parameter blowup for small n
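
A minimal Python sketch of these two schedules; the symbols eta0 and c and the specific values below are illustrative assumptions, not taken from the slides:

```python
def constant_rate(n, eta0=0.1):
    """Simplest choice: the learning-rate parameter stays fixed at eta0."""
    return eta0

def stochastic_approximation_rate(n, c=1.0):
    """Time-varying schedule eta(n) = c / n; a large c risks blowup for small n."""
    return c / n

for n in (1, 10, 100, 1000):
    print(n, constant_rate(n), stochastic_approximation_rate(n))
```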

3
Learning-Rate Annealing Schedules (II)
  • Search-then-converge schedule: η(n) = η0 / (1 + n/τ)
  • in the early stages (n small compared to the search time constant τ), the
    learning-rate parameter is approximately equal to η0
  • for a number of iterations n large compared to the search time constant τ,
    the learning-rate parameter approximates c/n (with c = η0 τ)
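
A similar sketch of the search-then-converge schedule; the values of eta0 and the search time constant tau below are illustrative:

```python
def search_then_converge_rate(n, eta0=0.1, tau=100.0):
    """eta(n) = eta0 / (1 + n/tau): close to eta0 while n << tau (search phase),
    roughly (eta0 * tau) / n, i.e. the c/n form, once n >> tau (converge phase)."""
    return eta0 / (1.0 + n / tau)

for n in (1, 10, 100, 1000, 10000):
    print(n, search_then_converge_rate(n))
```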

4
Perceptron (I)
  • Goal
  • classify the applied input x1, x2, ..., xm into one of two classes, C1 or C2
  • Procedure
  • if the output of the hard limiter is +1, assign the input to class C1;
    if it is -1, assign it to class C2
  • the input of the hard limiter is the weighted sum of the inputs plus the
    bias: v = w1x1 + w2x2 + ... + wmxm + b
  • the effect of the bias b is merely to shift the decision boundary away
    from the origin
  • the synaptic weights are adapted on an iteration-by-iteration basis
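
A minimal sketch of the procedure described above; the function names and the +1/-1 class encoding are illustrative:

```python
def hard_limiter(v):
    """Signum activation: +1 maps the input to class C1, -1 to class C2."""
    return 1 if v >= 0 else -1

def perceptron_output(w, b, x):
    """Hard limiter applied to the weighted sum of the inputs plus the bias."""
    v = sum(wi * xi for wi, xi in zip(w, x)) + b
    return hard_limiter(v)

print(perceptron_output([0.5, -0.5], 0.1, [1.0, 2.0]))  # v = -0.4 -> -1 (class C2)
```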

5
Perceptron (II)
  • Decision regions are separated by a hyperplane; in the two-input case the
    boundary is the line w1x1 + w2x2 + b = 0
  • a point (x1, x2) above the boundary line is assigned to class C1
  • a point (y1, y2) below the boundary line is assigned to class C2
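
A concrete two-dimensional illustration; the boundary coefficients below are made up for the example, not taken from the slides:

```python
# Illustrative boundary x1 + x2 - 1 = 0, i.e. w = [1, 1], b = -1.
w, b = [1.0, 1.0], -1.0

def side(point):
    """'C1' for points on or above the boundary line, 'C2' for points below it."""
    x1, x2 = point
    return "C1" if w[0] * x1 + w[1] * x2 + b >= 0 else "C2"

print(side((2.0, 2.0)))  # 2 + 2 - 1 > 0  -> C1
print(side((0.0, 0.0)))  # 0 + 0 - 1 < 0  -> C2
```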

6
Perceptron Convergence Theorem (I)
  • Linearly separable
  • if the two classes are linearly separable, there exists a decision surface
    consisting of a hyperplane
  • if so, there exists a weight vector w such that w^T x > 0 for every input
    vector x belonging to class C1, and w^T x ≤ 0 for every x belonging to
    class C2
  • the perceptron works well only for linearly separable classes
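
A small sketch of what such a weight vector certifies, assuming a candidate w and bias b are given explicitly; the helper names and sample points are illustrative:

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def separates(w, b, class1, class2):
    """True if the hyperplane w.x + b = 0 puts every class1 vector on the
    positive side and every class2 vector on the non-positive side."""
    return all(dot(w, x) + b > 0 for x in class1) and \
           all(dot(w, x) + b <= 0 for x in class2)

# Illustrative clusters; the witness weight vector [1, 1] with b = -1 separates them.
print(separates([1.0, 1.0], -1.0, [(2, 2), (3, 1)], [(0, 0), (-1, 0)]))  # True
```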

7
Perceptron Convergence Theorem (II)
  • Using the modified signal-flow graph
  • the bias b(n) is treated as a synaptic weight driven by a fixed input +1,
    so x(n) = [+1, x1(n), ..., xm(n)]^T and w(n) = [b(n), w1(n), ..., wm(n)]^T
  • v(n) = w^T(n) x(n) is the linear combiner output
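
A minimal sketch of this reformulation; the helper names are illustrative:

```python
def augment(x):
    """Prepend the fixed input +1 so the bias b(n) can be carried as weight w[0]."""
    return [1.0] + list(x)

def linear_combiner(w, x_aug):
    """Linear combiner output v(n) = w(n)^T x(n) on the augmented vectors."""
    return sum(wi * xi for wi, xi in zip(w, x_aug))

# Example: w = [b, w1, w2] = [0.1, 0.5, -0.5], x = [1.0, 2.0]
print(linear_combiner([0.1, 0.5, -0.5], augment([1.0, 2.0])))  # 0.1 + 0.5 - 1.0 = -0.4
```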

8
Perceptron Convergence Theorem (III)
  • Weight adjustment
  • if x(n) is correctly classified: w(n+1) = w(n)
  • otherwise:
    w(n+1) = w(n) - η(n) x(n)  if w^T(n) x(n) > 0 and x(n) belongs to class C2
    w(n+1) = w(n) + η(n) x(n)  if w^T(n) x(n) ≤ 0 and x(n) belongs to class C1
  • the learning-rate parameter η(n) controls the adjustment applied to the
    weight vector
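
A minimal sketch of this adjustment rule on the augmented vectors; the function name and the class encoding are illustrative:

```python
def perceptron_update(w, x_aug, cls, eta=1.0):
    """One error-correction step on augmented vectors; cls is 'C1' or 'C2'.

    No change if x is correctly classified; otherwise subtract eta*x when a
    C2 vector falls on the positive side, and add eta*x when a C1 vector
    falls on the non-positive side.
    """
    v = sum(wi * xi for wi, xi in zip(w, x_aug))
    if v > 0 and cls == "C2":
        return [wi - eta * xi for wi, xi in zip(w, x_aug)]
    if v <= 0 and cls == "C1":
        return [wi + eta * xi for wi, xi in zip(w, x_aug)]
    return w  # correctly classified: leave the weights unchanged
```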

9
Perceptron Convergence Theorem (IV)
  • Fixed-increment convergence theorem
  • for linearly separable subsets of training vectors X1 and X2, the
    perceptron converges after some n0 iterations
  • w(n0) = w(n0 + 1) = w(n0 + 2) = ... is a solution vector
  • proof for the case η(n) = 1 and w(0) = 0
  • see the textbook for details
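
For orientation only, the classical bound that underlies fixed-increment convergence can be stated as follows; the notation here is the standard Novikoff-style form and may differ from the textbook's:

```latex
% Assume \|x(n)\| \le R for all training vectors and that some unit-norm
% solution vector w_o achieves margin \alpha = \min_n w_o^{\mathsf{T}} x(n) > 0
% (class-C2 vectors taken with flipped sign).  Then the number of weight
% updates before convergence is bounded by
n_{\max} \le \frac{R^2}{\alpha^2}
```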

10
Summary
  • Initialization
  • set w(0) = 0
  • Activation
  • at time step n, activate the perceptron by applying the continuous-valued
    input vector x(n) and the desired response d(n)
  • Computation of actual response
  • y(n) = sgn[w^T(n) x(n)]
  • Adaptation of Weight Vector
  • w(n + 1) = w(n) + η [d(n) - y(n)] x(n), where d(n) = +1 if x(n) belongs to
    class C1 and d(n) = -1 if x(n) belongs to class C2
  • Continuation
  • increment time step n and go back to step 2
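
Putting the five steps together, a minimal end-to-end sketch; the function names, the AND-gate toy data, and the epoch-based stopping criterion are illustrative assumptions:

```python
def sgn(v):
    return 1 if v >= 0 else -1

def train_perceptron(samples, eta=1.0, max_epochs=100):
    """Perceptron convergence algorithm following the five steps above.

    samples: list of (x, d) pairs with d = +1 for class C1, -1 for class C2.
    """
    dim = len(samples[0][0]) + 1              # +1 for the fixed bias input
    w = [0.0] * dim                           # step 1: w(0) = 0
    for _ in range(max_epochs):
        mistakes = 0
        for x, d in samples:                  # step 2: apply x(n) and d(n)
            x_aug = [1.0] + list(x)
            y = sgn(sum(wi * xi for wi, xi in zip(w, x_aug)))              # step 3
            if y != d:
                w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x_aug)]  # step 4
                mistakes += 1
        if mistakes == 0:                     # stop once every sample is classified correctly
            break                             # otherwise step 5: continue with the next time step
    return w

# Illustrative linearly separable toy data: an AND gate with -1/+1 targets.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), +1)]
print(train_perceptron(data))
```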