1 Neural Networks in Automated Visual Inspection of Manufacturing Parts
Dr Panos Liatsis
Information and Biomedical Engineering Centre
Email: p.liatsis_at_city.ac.uk
Tel: 44-2070408126
Fax: 44-2070408568
www.staff.city.ac.uk/liatsis
2 Presentation Outline
- Automated Visual Inspection
- Problem Definition
- Higher-Order Neural Networks
- Geometric Invariance
- Coarse Coding
- System Overview
- Introduction to Genetic Algorithms
- Determining System Complexity using Genetic Algorithms
- Components Classification and Performance Evaluation
- Conclusions
3 Automated Visual Inspection (1/3)
- Automated visual inspection (AVI) aims to assist, rather than replace, a human operator, using non-contact, optical gauging techniques to extract information about the quality of a product.
- Some advantages are:
  - Reduced manual inspection in high-volume production lines
  - Improved product quality in low-volume lines
  - Inspection under unfavourable conditions
  - Freeing human operators from dull and routine labour
  - Statistics on test information and record keeping for management decisions
  - Provision for strict safety regulations.
4 Automated Visual Inspection (2/3)
- Basic components of an AVI system are:
  - Process Control: synchronisation of major timing functions, process operator tasks and control of the system database
  - Parts Handling: determination of position and orientation
  - Sensing System: illumination, optics, and sensor electronics
5 Automated Visual Inspection (3/3)
- Image Processing: extraction of relevant information/content from the image of the object/product under inspection
- Flaw Analysis: information interpretation for classification purposes.
6 Problem Definition (1/3)
- Flexible Manufacturing Systems (FMS) can produce any part of a selected family of parts on a random basis, without incurring system downtime for changeover.
- The basis of an FMS is automated (CNC) machining centres. These depend on the timely delivery of workpieces, cutting tools and work-holding tools from different areas of the FMS.
7 Problem Definition (2/3)
The machining centres expect the peripheral
areas to supply them with parts of a defined
standard in satisfactory condition. This
requirement implies that in each of the
peripheral areas, there is a need to identify and
reject damaged parts. The aim of the current
work is to demonstrate the development of a
reconfigurable AVI system for inspection of
manufacturing components of axisymmetric
geometry, for use in FMS.
8 Problem Definition (3/3)
[Example images of satisfactory and damaged components]
9 Higher-Order Neural Networks (1/3)
Higher-order neural networks (HONNs) exploit multi-linear interactions among the inputs to perform complex non-linear mappings. The output of a mixed first-order NN is given by
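A minimal sketch of the usual form, assuming conventional notation (inputs x_i, weights w_i, bias w_0, activation function f; the symbols are assumed here, not taken from the slide):

y = f\Big( w_0 + \sum_{i} w_i x_i \Big)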
10Higher-Order Neural Networks (2/3)
where
is the bias term
are the first-order terms
11 Higher-Order Neural Networks (3/3)
The output of a mixed higher-order NN is given by
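Under the same assumed notation, a sketch of the mixed higher-order output augments the linear sum with weighted products of inputs:

y = f\Big( w_0 + \sum_{i} w_i x_i + \sum_{i}\sum_{j} w_{ij} x_i x_j + \sum_{i}\sum_{j}\sum_{k} w_{ijk} x_i x_j x_k + \dots \Big)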
12 Geometric Invariance (1/3)
- Consider an object and any two distinct points A, B on the object. Next, an arbitrary translation and/or rotation of the object within the image is applied, and points A, B become A' and B'.
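As a one-line check, writing the transformation as a rotation R followed by a translation t (notation assumed for this sketch), the distance between the two points is preserved:

\lVert A' - B' \rVert = \lVert (R A + t) - (R B + t) \rVert = \lVert R(A - B) \rVert = \lVert A - B \rVert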
13 Geometric Invariance (2/3)
- Since the relative distance between any two points on the object is invariant under translation and/or rotation, the output of the HONN can be hand-crafted to be invariant to this set of transformations by considering only the second-order terms and by constraining the input-to-hidden weights so that they depend only on the relative distance between the paired inputs (a sketch of this constraint follows).
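A sketch of this standard construction, assuming pixel inputs x_j located at image positions r_j (the notation is assumed, not taken from the slide): the output keeps only the pairwise products, and pairs separated by the same distance share a weight,

y = f\Big( \sum_{j}\sum_{k} w_{jk} x_j x_k \Big), \qquad w_{jk} = w_{j'k'} \ \text{whenever} \ \lVert r_j - r_k \rVert = \lVert r_{j'} - r_{k'} \rVert

Since translations and rotations preserve pairwise distances, every product x_j x_k is assigned the same weight before and after the transformation, so the summed response is unchanged.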
14 Geometric Invariance (3/3)
- HONNs suffer from the so-called combinatorial explosion of the higher-order terms.
- In the case of an MxN image and n-th order combinations, the number of input terms grows as (MN)!/((MN-n)! n!), i.e. the binomial coefficient C(MN, n), which is not physically realisable (see the sketch below).
- Imposing constraints restricts the number of necessary input combinations; however, their number is still prohibitive.
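As a rough illustration of the scale (the 64x64 size matches the images used later in the deck; the helper name below is only for this sketch), the count C(MN, n) can be evaluated directly:

from math import comb

def num_higher_order_terms(M: int, N: int, n: int) -> int:
    """Number of n-th order input combinations for an MxN image: C(M*N, n)."""
    return comb(M * N, n)

print(num_higher_order_terms(64, 64, 2))  # 8,386,560 second-order pairs
print(num_higher_order_terms(64, 64, 3))  # roughly 1.1e10 third-order triples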
15 Coarse Coding (1/3)
In order to address the issue of combinatorial explosion, two strategies have been proposed:
- Reduced connectivity strategies: these allow input combinations with specific regional probability distributions.
- Coarse coding: this provides a means of representing the fine-level image information.
16 Coarse Coding (2/3)
- Consider an image of 8x8 pixels. This can be represented by a set of overlapping but offset coarse grids, each of size 4x4 pixels.
- The pixels in each coarse grid are twice as large as the pixels in the original fine image.
- This concept is analogous to scale-space representation in image analysis, hence allowing the detection of characteristics at varying levels of resolution (a minimal coding sketch follows).
17 Coarse Coding (3/3)
18 System Overview
19 Introduction to Genetic Algorithms (1/2)
Genetic Algorithms (GAs) are based on the Darwinian principle of survival of the fittest. The initial population is random; however, with the use of genetic operators such as reproduction, crossover and mutation, successive populations become fitter at solving the problem at hand. In the current work, we aim to identify a minimal-optimal HONN architecture that performs the classification task with invariance to translation and rotation.
20 Introduction to Genetic Algorithms (2/2)
- The GA procedure is as follows (a minimal sketch of the loop appears after this list):
- (a) Create an initial random population and evaluate the fitness/objective value of each individual.
- (b) Use the mating roulette (fitness-proportionate selection) to select a pair of individuals. Use crossover to reproduce two children, mutate them, and test their homogeneity.
- (c) Repeat step (b) for a pre-specified number of offspring.
- (d) Remove from the population a number of low-fitness members equal to the number of offspring.
- (e) Repeat steps (b)-(d) for a pre-specified number of epochs.
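A minimal Python sketch of this steady-state loop, with placeholder population size, offspring count, mutation rate and fitness function (none of these are the values used in the work, and the homogeneity test is omitted):

import random

def genetic_algorithm(fitness, chromo_len, pop_size=30, n_offspring=10,
                      n_epochs=300, p_mut=0.01):
    """Steady-state GA following steps (a)-(e): roulette selection,
    one-point crossover, bit-flip mutation, worst-member replacement."""
    # (a) random initial bit-string population
    pop = [[random.randint(0, 1) for _ in range(chromo_len)] for _ in range(pop_size)]
    for _ in range(n_epochs):                        # (e) repeat for n_epochs
        offspring = []
        while len(offspring) < n_offspring:          # (c) fixed number of offspring
            # (b) roulette-wheel (fitness-proportionate) selection of two parents
            p1, p2 = random.choices(pop, weights=[fitness(c) for c in pop], k=2)
            cut = random.randrange(1, chromo_len)    # one-point crossover
            c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):                   # bit-flip mutation
                child[:] = [b ^ 1 if random.random() < p_mut else b for b in child]
            offspring.extend([c1, c2])
        # (d) replace the same number of lowest-fitness members with the offspring
        pop.sort(key=fitness)
        pop = pop[n_offspring:] + offspring[:n_offspring]
    return max(pop, key=fitness)

# Example: maximise the number of 1-bits in a 16-bit chromosome.
best = genetic_algorithm(fitness=lambda c: sum(c) + 1, chromo_len=16)
print(best)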
21 Determining System Complexity using Genetic Algorithms
- The images (64x64 pixels) were decomposed into 5 coarse grids, each of 16x16. The training set consisted of 12 satisfactory and 16 defective modules, presented in 20 random translations and rotations. The GA ran for 300 generations and converged to an optimal NN topology with 5 hidden units.
22 Components Classification and Performance Evaluation (1/5)
- The system was tested using 15 unseen components
from each class, presented in 20 random
translations and orientations. The confusion
matrix is shown below
23 Components Classification and Performance Evaluation (2/5)
- Test data were corrupted with variable levels of salt-and-pepper noise. The system maintained very good performance up to a noise level of 20%, and then its performance started to decrease, reaching 85% for a noise level of 40% and around 63% at 45% noise.
24 Components Classification and Performance Evaluation (3/5)
- Next, the data were corrupted with artificial blurring. The system's performance was maintained for smoothing masks up to size 4x4, while for masks of size 7x7 its performance was random.
25 Components Classification and Performance Evaluation (4/5)
- The next test involved the addition of structured noise, specifically the presence of a line pattern. The system's performance degraded slowly with respect to the width of the line. Recognition rates were acceptable for lines of up to 20 pixels in width.
26 Components Classification and Performance Evaluation (5/5)
- Finally, the system was tested with occlusion. The system's performance was nearly unaffected for squares of 10x10 pixels, while it became random for squares of size 90x90.
27 Conclusions
- An AVI system with built-in invariance to rotation and translation has been tested.
- The neural network core of the system allows it to be reconfigured for the required inspection problem.
- The dynamic nature of the GAs permits automated determination of the minimal-optimal hidden layer configuration.
- The performance of the system has been tested with a variety of noise procedures and was found to be robust to erroneous and incomplete data samples.