Self Organizing Maps - PowerPoint PPT Presentation

1
Self Organizing Maps
2
Self Organizing Maps
  • This presentation is based on
    http://www.ai-junkie.com/ann/som/som1.html
  • SOMs were invented by Teuvo Kohonen.
  • They represent multidimensional data in much
    lower-dimensional spaces, usually two
    dimensions.
  • A common example is mapping colors from their
    three-dimensional components (red, green, and
    blue) into two dimensions.
  • In the example, eight colors presented as 3D
    vectors are learned by the system to be
    represented in the 2D space.
  • In addition to clustering the colors into
    distinct regions, regions with similar
    properties usually end up adjacent to each
    other.

3
Network Architecture
  • Data consists of vectors V of n dimensions:
    V = (V1, V2, V3, ..., Vn)
  • Each node contains a corresponding weight
    vector W of n dimensions: W = (W1, W2, W3,
    ..., Wn)
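The node/weight-vector structure above can be sketched as an array, assuming a NumPy layout where the lattice is a 2D grid of n-dimensional weight vectors (lattice size and n are illustrative):

```python
import numpy as np

# Illustrative sketch: a lattice of nodes, each holding an n-dimensional
# weight vector W = (W1, ..., Wn) matching the input dimensionality n.
rng = np.random.default_rng(0)
lattice_rows, lattice_cols, n = 10, 10, 3              # sizes are illustrative
weights = rng.random((lattice_rows, lattice_cols, n))  # each w in [0, 1)

print(weights.shape)  # -> (10, 10, 3)
```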

4
Network Example
  • Each node in the 40-by-40 lattice has three
    weights, one for each element of the input
    vector: red, green, and blue.
  • Each node is drawn as a rectangular cell on
    the display.
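A minimal sketch of this drawing scheme, assuming each node's three weights are interpreted directly as the RGB color of its cell (the rendering itself is omitted; the 40-by-40 size matches the example):

```python
import numpy as np

# Sketch: interpret each node's 3 weights as the RGB color of its cell.
rng = np.random.default_rng(0)
weights = rng.random((40, 40, 3))           # one RGB weight triple per node
image = (weights * 255).astype(np.uint8)    # weights in [0, 1) -> 8-bit color
print(image.shape, image.dtype)             # -> (40, 40, 3) uint8
```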

5
Overview of the Algorithm
  • Idea: Any new, previously unseen input vector
    presented to the network will stimulate nodes
    in the zone with similar weight vectors.
  1. Each node's weights are initialized.
  2. A vector is chosen at random from the set of
     training data and presented to the lattice.
  3. Every node is examined to determine which
     one's weights are most like the input vector.
     The winning node is commonly known as the
     Best Matching Unit (BMU).
  4. The radius of the neighborhood of the BMU is
     now calculated. This is a value that starts
     large, typically set to the 'radius' of the
     lattice, but diminishes each time-step. Any
     nodes found within this radius are deemed to
     be inside the BMU's neighborhood.
  5. Each neighboring node's (the nodes found in
     step 4) weights are adjusted to make them
     more like the input vector. The closer a node
     is to the BMU, the more its weights are
     altered.
  6. Repeat from step 2 for N iterations.
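The steps above can be sketched as a compact training loop. This is an illustrative sketch, not the presentation's own code: it assumes Euclidean distance for the BMU, exponential decay of both the radius and the learning rate, and a Gaussian fall-off inside the neighborhood (all sizes and constants are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(42)
rows, cols, n = 8, 8, 3
weights = rng.random((rows, cols, n))                 # step 1: initialize
coords = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols),
                               indexing="ij"))        # (row, col) of each node

data = rng.random((50, n))                            # training vectors
n_iters, radius0, lr0 = 100, max(rows, cols) / 2, 0.1
lam = n_iters / np.log(radius0)                       # decay time constant

for t in range(n_iters):
    v = data[rng.integers(len(data))]                 # step 2: random vector
    d = np.linalg.norm(weights - v, axis=2)
    bmu = np.unravel_index(np.argmin(d), d.shape)     # step 3: find the BMU
    radius = radius0 * np.exp(-t / lam)               # step 4: shrinking radius
    lr = lr0 * np.exp(-t / lam)
    grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
    theta = np.exp(-grid_d2 / (2 * radius ** 2))      # closer -> bigger change
    theta[grid_d2 > radius ** 2] = 0.0                # outside radius: no change
    weights += lr * theta[..., None] * (v - weights)  # step 5: adjust weights
```

Because each update is a convex step toward a training vector, the weights stay within the data's [0, 1) range.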

6
Details
  • Initializing the Weights
  • Set to small standardized random values:
    0 < w < 1
  • Calculating the Best Matching Unit
  • Use a distance measure, commonly the Euclidean
    distance between the input vector and each
    node's weight vector
  • Determining the Best Matching Unit's Local
    Neighborhood
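The BMU search can be sketched as follows, assuming Euclidean distance (a common choice; the helper name `find_bmu` and the toy lattice are illustrative):

```python
import numpy as np

def find_bmu(weights, v):
    """Return the (row, col) of the node whose weights best match v,
    using Euclidean distance between v and every node's weight vector."""
    dists = np.linalg.norm(weights - v, axis=2)
    return tuple(int(i) for i in np.unravel_index(np.argmin(dists), dists.shape))

weights = np.zeros((4, 4, 3))
weights[2, 3] = [1.0, 0.0, 0.0]                      # one node is pure red
print(find_bmu(weights, np.array([0.9, 0.1, 0.0])))  # -> (2, 3)
```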

7
Details
Over time the neighborhood will shrink to the
size of just one node... the BMU
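A sketch of the shrinking neighborhood, assuming the exponential decay σ(t) = σ0 · exp(−t / λ) that is commonly used with SOMs (the initial radius and iteration count are illustrative):

```python
import numpy as np

sigma0 = 20.0                    # e.g. the 'radius' of a 40-by-40 lattice
n_iters = 100
lam = n_iters / np.log(sigma0)   # time constant chosen so sigma ends near 1

radius = [sigma0 * np.exp(-t / lam) for t in range(n_iters)]
print(round(radius[0], 1), round(radius[-1], 2))  # -> 20.0 1.03
```

The radius decays monotonically from the lattice radius toward 1, i.e. toward a neighborhood containing just the BMU.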
8
Details
  • Adjusting the Weights
  • Every node within the BMU's neighborhood
    (including the BMU) has its weight
    vector adjusted according to the following
    equation:
  • W(t + 1) = W(t) + L(t) (V(t) - W(t))
  • where t represents the time-step and L is a
    small variable called the learning rate, which
    decreases with time.
  • The decay of the learning rate is calculated
    each iteration using the following equation:
  • L(t) = L0 exp(-t / λ)
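A minimal sketch of this update, assuming the common form W(t + 1) = W(t) + L(t)(V(t) − W(t)) with L(t) = L0 · exp(−t / λ); the constants L0 and the time constant are illustrative:

```python
import numpy as np

L0, lam = 0.1, 50.0

def update(w, v, t):
    """Move weight vector w a fraction L(t) of the way toward input v."""
    lr = L0 * np.exp(-t / lam)    # learning rate decays with time
    return w + lr * (v - w)

w = np.array([0.5, 0.5, 0.5])
v = np.array([1.0, 0.0, 0.0])
w1 = update(w, v, t=0)
print(w1)  # at t = 0, w moves 10% of the way toward v
```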

9
Details
  • Also, the effect of learning should decrease
    with the distance a node is from the BMU: the
    BMU's weights change the most, and nodes at
    the edge of the neighborhood change the least.
  • A Gaussian influence Θ(t) = exp(-dist² / (2σ²(t)))
    is commonly used, giving the full update
    W(t + 1) = W(t) + Θ(t) L(t) (V(t) - W(t)).
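This distance-based influence can be sketched as follows, assuming the Gaussian Θ = exp(−dist² / (2σ²)), where dist is a node's lattice distance from the BMU and σ is the current neighborhood radius (the value here is illustrative):

```python
import numpy as np

sigma = 3.0  # illustrative neighborhood radius

def influence(dist):
    """Gaussian influence: 1.0 at the BMU, falling off with distance."""
    return np.exp(-dist ** 2 / (2 * sigma ** 2))

print(influence(0.0), influence(sigma), influence(3 * sigma))
```

At the BMU the influence is exactly 1; at one radius out it has dropped to exp(−1/2) ≈ 0.61, and beyond the neighborhood it is negligible.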

10
Applications
  • SOMs are commonly used as visualization aids.
  • They can make it easy to see relationships in
    vast amounts of data.