Assignment 2B: Neural network p. 49-50 notes

1
Assignment 2B Neural network p. 49-50 notes
  • Objective: to develop a Multi-layer perceptron
    (MLP) to classify census respondents into annual
    income >50K or <50K
  • Due date: Friday June 1 (5 pm)
  • Use the training and testing files prepared in part A.
  • Conduct experiments where you investigate
    differing architectures and parameter settings
    (e.g. learning rate, momentum, epoch size, number
    of passes, weight initialisation range, etc.);
    see the sketch after this slide.
  • Always run multiple experiments for each network
    configuration, e.g. 3 or 5.
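
A minimal sketch of how such an experiment grid might be scripted, assuming the census data from part A is available as numeric arrays. scikit-learn's MLPClassifier is used purely for illustration (the assignment's own BP/WINBP tools are the intended vehicle), and the placeholder data, layer sizes and parameter values are assumptions, not recommendations:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    # Placeholder data; in the assignment these would come from the part A files.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(200, 14)), rng.integers(0, 2, 200)
    X_test, y_test = rng.normal(size=(50, 14)), rng.integers(0, 2, 50)

    configs = [
        {"hidden_layer_sizes": (10,), "learning_rate_init": 0.1, "momentum": 0.5},
        {"hidden_layer_sizes": (20,), "learning_rate_init": 0.2, "momentum": 0.9},
    ]

    for cfg in configs:
        for seed in (1, 2, 3):          # several runs per configuration
            net = MLPClassifier(solver="sgd", batch_size=32, max_iter=500,
                                random_state=seed, **cfg)
            net.fit(X_train, y_train)
            acc = accuracy_score(y_test, net.predict(X_test))
            print(cfg, "seed", seed, "test accuracy", round(acc, 3))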

2
Assignment 2B Neural network p. 49-50 notes
  • Discuss the performance measures that you used
    (e.g. percentage correct on testing, RMS error on
    testing); see the sketch after this slide.
  • In your report specify the best performing
    networks that you found (perhaps two or three of
    them), i.e. give architecture, learning rate,
    momentum, number of training passes, epoch size,
    weight initialisation range and random seed. Also
    state testing and training performance.
  • Remember: you are putting into practice the
    material covered in lectures and tutorials,
    particularly the 1st lecture from week 10 and NN
    tutorials 3, 4 and 5. Your report should reflect
    this.
  • Use BP to carry out most of your experiments
    (because you can set up batch runs).
  • However, to ensure that there are no problems with
    your testing and training files, use WINBP to do
    some preliminary experiments.
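
A minimal sketch of the two measures mentioned above, assuming the test-set targets and the network's outputs are available as NumPy arrays (thresholding a single output unit at 0.5 is an assumption):

    import numpy as np

    def percent_correct(targets, outputs):
        # percentage of test examples whose thresholded output matches the target
        predictions = (outputs >= 0.5).astype(int)
        return 100.0 * np.mean(predictions == targets)

    def rms_error(targets, outputs):
        # root-mean-square error over the test set
        return np.sqrt(np.mean((targets - outputs) ** 2))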

3
Self-organising maps (SOMs)
  • Heuristics and guidelines during training
  • initial weights?
  • initialise weights to normalised values.
  • optimise weight distribution in the network
  • see Beale & Jackson, Neural Computing, p. 116
  • neighbourhood size?
  • begin with a large neighbourhood radius and
    reduce as training progresses
  • i.e. coarse mapping followed by finer mapping
  • e.g. from Dayhoff, Neural Network
    Architectures: An Introduction
  • 1/3 to 1/2 of the width of the map initially
  • decrease according to
  • d = d0 (1 - t / T)
  • where d0 = initial radius,
    t = current training iteration number,
    T = total training iterations
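
A sketch of the radius schedule above; taking the initial radius as half the map width is an assumption drawn from the 1/3 to 1/2 guideline:

    def neighbourhood_radius(t, T, map_width):
        # d = d0 * (1 - t / T), with d0 set to half the map width
        d0 = map_width / 2.0
        return d0 * (1.0 - t / T)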

4
Self-organising maps (SOMs)
  • gain factor (learning rate)
  • should also decrease with time
  • initial value?
  • 0.2 - 0.5 (Dayhoff)
  • 0.7 (Wasserman, Neural Computing: Theory and
    Practice)
  • > 0.5 (Beale & Jackson)
  • could be defined using a linearly decreasing
    function (see the sketch after this slide)
  • i.e. gain factor = f(t)
  • where t = time (number of examples
    presented)
  • a similar function to that for the
    neighbourhood
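
A sketch of such a linearly decreasing gain factor; the initial value of 0.5 is picked arbitrarily from the ranges quoted above:

    def gain_factor(t, T, initial=0.5):
        # f(t) = initial * (1 - t / T): decreases linearly to zero over training
        return initial * (1.0 - t / T)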

5
Self-organising maps (SOMs)
  • Topology of the feature map (Kohonen) layer
  • The specification of the topology determines the
    shape of the Kohonen layer
  • This in turn determines the number of neighbours
    each unit has (see the sketch after this slide)
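
As a rough illustration of how topology fixes the neighbour count, the helper below (an illustrative sketch, not part of the assignment) lists the units within a given radius of a unit on a rectangular map; a hexagonal layout would give a different count:

    def grid_neighbours(i, j, rows, cols, radius):
        # units within Chebyshev distance `radius` of unit (i, j) on a rectangular grid
        return [(r, c)
                for r in range(max(0, i - radius), min(rows, i + radius + 1))
                for c in range(max(0, j - radius), min(cols, j + radius + 1))
                if (r, c) != (i, j)]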

6
(No Transcript)
7
(No Transcript)
8
Now for a demonstration!
9
Self-organising maps (SOMs)
  • A sample application: The Phonetic Typewriter
  • (see Beale & Jackson, p. 123ff and Dayhoff, pp.
    188-191)
  • a speech recognition problem
  • a system to classify phonemes - the basic units of
    speech
  • Kohonen was attempting to build a typewriter
    which could type from dictation

10
Self-organising maps (SOMs)
  • Network architecture
  • 15 input neurons, 96 neurons in the Kohonen
    layer
  • trained on continuous speech, sampled every 8
    ms
  • the auditory input was pre-processed into a
    vector representing the signal strength across 15
    frequency bands
  • training results in the feature layer being
    organised into clusters based on the phonemes.
  • the map is labelled by hand, i.e. individual
    phonemes are presented to the trained network and
    the closest cluster is assigned to that phoneme
    (see the sketch after this slide)
  • post-processing
  • the translation from the phonetic to the written
    word
  • errors due to co-articulation have to be
    corrected (i.e. the variation in pronunciation
    caused by the context in which the phoneme is
    used)
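
A sketch of the hand-labelling (calibration) step described above: each known phoneme example is presented to the trained map and the winning unit is tagged with that phoneme. The 96-unit map and 15-component inputs follow the slide; the array layout and function name are illustrative assumptions:

    import numpy as np

    def label_map(weights, calibration_examples):
        # weights: (96, 15) array of trained Kohonen-unit weight vectors
        # calibration_examples: iterable of (phoneme, 15-component input vector) pairs
        labels = {}
        for phoneme, x in calibration_examples:
            winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))  # closest unit
            labels[winner] = phoneme
        return labels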

11
Self-organising maps (SOMs)
  • a rule base (15,000 - 20,000 rules) is used to
    construct the grammar
  • results
  • the phonotopic map classifies 80-90% of phonemes
    correctly
  • correction by the rule base increases this to
    between 92% and 97%

12
Self-organising maps (SOMs)
13
Self-organising maps (SOMs)