1
Artificial Intelligence / Expert Systems:
Artificial Neural Networks (ANNs)
  • Justin Gaudry
  • June 22, 2007

2
Neural Networks
  • Layered network of artificial neurons
  • Applicable to large state spaces, where
  • table lookup is impossible
  • depth searches are not feasible due to high
    branching at each level
  • Alternative to database lookup combined with
    heuristic search
  • Allows for pattern recognition and the appearance
    of judgment

3
Neurons
  • Connectionist learning models store patterns of
    activity in networks of small, individual
    processing units
  • Modeled after animal brain activity
  • Artificial neurons are models of biological
    neurons

4
Biological Neurons
  • Receives input through its dendrites
  • Signal is fired through the soma via an action
    potential
  • Output travels through the axon to other
    neurons' dendrites
  • Dynamically changes the number and strength of
    its connections to other neurons (plasticity)
  • Strengthens connections that provide correct
    answers and weakens those that provide incorrect
    answers
  • A weakened connection makes it less likely that
    the action will be taken

5
Artificial Neurons
  • Nodes
  • As a model, need dendrites, soma, action
    potential, and axon
  • As a model, need input, activation function to
    provide activation level, and output
  • Input to a node causes activation to propagate
    to all connected outputs
  • In a network, each node receives a signal from
    input or other nodes and fires to all connected
    nodes

6
Domain
  • Formulation as a state-space search
  • Ability for occasional reinforcement
  • Large cardinality of states
  • Inexactness (allowing partial match)
  • Approximation (to get close enough)

7
Advantages
  • Computes a continuous, multivariable function
    H(w, x)
  • w is the weight vector, x is the input vector
  • Avoids the blemish effect
  • the lack of a smooth transition from one stage
    to the next
  • if several stages are possible at once, rules
    contradict

8
Non-Linearly Separable Problems
  • Linearly separable problems can divide potential
    outputs with a line
  • Can be represented with a linear function
  • AND, OR, and NOT are linearly separable
  • Non-linearly separable problems cannot cleanly
    separate outputs with a line
  • XOR is non-linearly separable (see the sketch
    below)
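The contrast above can be checked by brute force. The Python sketch below (illustrative, not part of the original deck) searches a small grid of weights and thresholds for a single linear threshold unit: some setting reproduces AND's truth table, but no setting reproduces XOR's.

```python
# Brute-force check: a single linear threshold unit can compute AND
# but not XOR. The weight/threshold grid is an illustrative assumption.
import itertools

def step_unit(w1, w2, t):
    """Return the truth table of y = step(w1*x1 + w2*x2 - t)."""
    return tuple(int(w1 * x1 + w2 * x2 > t)
                 for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)])

candidates = [v / 2 for v in range(-4, 5)]  # values in [-2, 2]
tables = {step_unit(w1, w2, t)
          for w1, w2, t in itertools.product(candidates, repeat=3)}

print((0, 0, 0, 1) in tables)  # AND: True, a line separates the outputs
print((0, 1, 1, 0) in tables)  # XOR: False, no single line can
```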

9
Activation Function
  • A continuous nonlinear sigmoidal activation
    function enables a neural network to map smooth
    nonlinear functions of its input
  • Allows for robust approximation capabilities
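As a concrete illustration, here is the logistic sigmoid, one common choice of continuous nonlinear activation; the slide does not name a specific function, so this particular choice is an assumption.

```python
# Logistic sigmoid: continuous, nonlinear, squashes any input into (0, 1).
import math

def sigmoid(x: float) -> float:
    """A common smooth activation function."""
    return 1.0 / (1.0 + math.exp(-x))

for x in (-5.0, -1.0, 0.0, 1.0, 5.0):
    print(f"sigmoid({x:+.1f}) = {sigmoid(x):.4f}")
```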

10
XOR (diagram)
11
Usually More Complex (diagram)
12
Types of Neural Networks
  • 2-layer (associative, Hopfield, Hebb)
  • Multi-layer feedforward (backpropagation)
  • Temporal (recurrent and time-delay)
  • Self-organizing (Kohonen)
  • Supervised and unsupervised

13
2-Layer Associative Network
  • Feedforward
  • Supervised learning
  • Multiplies an input data vector against a
    two-dimensional matrix of weights resulting in an
    output data vector
  • 1x20 · 20x50 -> 1x50, for example
  • Linear architecture
  • Cannot approximate nonlinear functions
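A minimal sketch of the shape arithmetic above, using NumPy; the weights are random placeholders, not a trained network.

```python
# A 1x20 input vector times a 20x50 weight matrix yields a 1x50 output.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 20))   # input data vector
W = rng.random((20, 50))  # two-dimensional weight matrix

y = x @ W                 # linear associative mapping
print(y.shape)            # (1, 50)
```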

14
2-Layer Network Procedure
  • X = Σ wᵢxᵢ
  • as i goes from 1 to N, where N is the number of
    input nodes
  • Y = 1 for X > t and 0 for X < t
  • where t is a threshold
  • Often written as Step(X), therefore
  • Y = Step(Σ wᵢxᵢ)
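A short Python transcription of this procedure; the weights, inputs, and threshold below are illustrative values, not from the slides.

```python
# X = sum of w_i * x_i over the inputs, then Y = Step(X) against
# a threshold t (here treating X == t as not firing).
def step(x: float, t: float) -> int:
    """Threshold activation: 1 if x > t, else 0."""
    return 1 if x > t else 0

def two_layer_output(weights, inputs, t=0.5):
    X = sum(w * x for w, x in zip(weights, inputs))  # X = sum w_i x_i
    return step(X, t)                                # Y = Step(X)

print(two_layer_output([0.4, 0.4], [1, 1]))  # 0.8 > 0.5 -> fires (1)
print(two_layer_output([0.4, 0.4], [1, 0]))  # 0.4 < 0.5 -> silent (0)
```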

15
Multi-Layer Perceptron
  • Incorporates a layer of hidden nodes
  • Feedforward
  • Usually supervised learning
  • Input recoded into an internal representation via
    multiplication through a weight matrix
  • Output generated by the internal representation
    rather than original input via multiplication
    through a secondary weight matrix
  • Multiple hidden layers can be incorporated
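A sketch of the two-matrix forward pass described above; the layer sizes, the sigmoid activation, and the random weights are all illustrative assumptions.

```python
# Input is recoded into a hidden representation through one weight
# matrix; output is computed from that hidden representation (not the
# original input) through a second weight matrix.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W_hidden = rng.standard_normal((4, 3))  # input (4) -> hidden (3)
W_output = rng.standard_normal((3, 2))  # hidden (3) -> output (2)

x = rng.random(4)              # original input
h = sigmoid(x @ W_hidden)      # internal representation
y = sigmoid(h @ W_output)      # output from the hidden layer
print(h.shape, y.shape)        # (3,) (2,)
```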

16
Multi-Layer Perceptron (diagram)
17
Temporal
  • Has connections that go from output back to
    input, serving as memory
  • Can also have arbitrary connections between any
    nodes
  • Applicable when output depends on all previous
    inputs
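A minimal sketch of such a recurrence, feeding the previous output back in alongside the current input so each output depends on the whole input history; the sizes, sigmoid activation, and random weights are illustrative assumptions.

```python
# Each step combines the current input with the previous output,
# which serves as the network's memory.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
W_in = rng.standard_normal((3, 3))    # input -> state
W_back = rng.standard_normal((3, 3))  # previous output -> state

output = np.zeros(3)
for x in rng.random((5, 3)):          # a sequence of five inputs
    output = sigmoid(x @ W_in + output @ W_back)
print(output)
```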

18
Self-Organizing Network
  • Unsupervised learning
  • Kohonen
  • Topographic map formation
  • Spatial location of an output neuron corresponds
    to a particular feature of the input pattern
  • Hebb's Law
  • If two neurons on either side of a connection are
    activated synchronously, increase weight
  • If two neurons on either side of a connection are
    activated asynchronously, decrease weight
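A minimal sketch of this update rule; the learning rate and the handling of the case where neither neuron fires are assumptions, since the slide states only the two cases above.

```python
# Hebb's Law as stated above: strengthen the weight when the neurons
# on either side of a connection fire together, weaken it when only
# one fires. Learning rate and activity values are illustrative.
def hebbian_update(w, pre, post, lr=0.1):
    if pre and post:      # activated synchronously: strengthen
        return w + lr
    if pre or post:       # activated asynchronously: weaken
        return w - lr
    return w              # neither fired: no change (assumption)

w = 0.5
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1)]:
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # 0.5 + 0.1 + 0.1 - 0.1 - 0.1 = 0.5
```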