Title: Overview of Minicon Project
1. Overview of Minicon Project
- Condition monitoring and diagnostics for elevators
- Dale Addison
- CENTRE FOR ADAPTIVE SYSTEMS
- University of Sunderland
- School of Computing Technology
2. Project overview
- MINICON: Minimum Cost, Maximum Benefit Condition Monitoring
- Framework 5, Competitive and Sustainable Growth (project value 3 million)
- Two aspects:
  - Condition monitoring of elevators
  - Condition monitoring of high-speed machine tools (>15,000 rpm)
3. Project partners
- Kone, 4th largest supplier of elevators in the world (Finland)
- VTT (Finland)
- Goratu (Bilbao, Spain)
- Tekniker (Bilbao, Spain)
- Rockwell Manufacturing (Belgium, Czechoslovakia)
- Technical University of Tallinn (Estonia)
- IB Krates (Estonia)
- Monitran Ltd (UK)
- University of Sunderland (UK)
- Entek (UK)
- Truth (Athens, Greece)
- ICCS/NTUA (Athens, Greece)
4. (Diagram: overall monitoring architecture)
- Plant or machine to be monitored, e.g. elevator or machine tool
- Application software and database with second-level intelligence (prior-knowledge intelligence system)
- Maintenance management system with the third level of intelligence, human, providing decision support
- Service engineer with hand-held PC, email or paging device, etc.
6. Neural networks
- An adaptive technology based on the neurons found in the human brain
- Neurons are connected together and send signals to each other, forming networks
- Incoming signals are summed, and when they exceed a certain limit the neuron fires (sends a signal on to other neurons)
- Networks can be trained using algorithms which adapt the network to the data.
7. A neural network
- Multi-layer perceptron
- Each neuron performs a biased weighted sum of its inputs.
- This activation level is passed through a transfer function (usually sigmoidal) to produce its output.
- Neurons are arranged in a layered feed-forward topology.
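As a concrete illustration of the biased weighted sum and sigmoidal transfer function just described, here is a minimal single-neuron sketch in Python/NumPy; the input values, weights and bias are invented for the example:

```python
import numpy as np

def neuron(x, w, b):
    """Biased weighted sum of the inputs, passed through a sigmoid."""
    activation = np.dot(w, x) + b               # activation level
    return 1.0 / (1.0 + np.exp(-activation))    # sigmoidal transfer function

# Invented example values: three inputs, one weight per input, one bias.
x = np.array([0.2, -0.5, 0.9])
w = np.array([0.4, 0.1, -0.3])
print(neuron(x, w, b=0.05))                     # output lies in (0, 1)
```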
8. Multi-layer perceptrons
- The network's weights and thresholds are adjusted by a training algorithm which alters the weights according to the training data.
- This minimises the difference between the network's outputs and the target outputs in the training data.
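The deck does not name the training algorithm, so the following is a minimal sketch assuming plain gradient descent with backpropagation of the squared error; the XOR data set, layer sizes and learning rate are invented stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])       # XOR targets

W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = sig(X @ W1 + b1)                          # hidden layer
    y = sig(h @ W2 + b2)                          # output layer
    # gradient of the squared error, propagated back through both layers
    d2 = (y - t) * y * (1 - y)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)

# should approach [0, 1, 1, 0]; may need more epochs or another seed
print(np.round(y.ravel(), 2))
```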
9. Dimensionality reduction techniques
- Principal components analysis
- Non-linear principal components analysis
- Weight regularisation techniques (Weigend method)
- Genetic algorithms
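For the linear case, a minimal sketch of principal components analysis via the singular value decomposition; the data matrix here is random stand-in data, not the project's sensor recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # assumed stand-in sensor matrix

# Linear PCA: centre the data, then project onto the top-k components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
X_reduced = Xc @ Vt[:k].T                  # (200, 3) reduced representation
explained = s[:k] ** 2 / np.sum(s ** 2)    # variance fraction per component
print(X_reduced.shape, explained.round(3))
```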
10. Non-linear principal components analysis (auto-associative network)
- A neural network trained to reproduce its inputs at its outputs
- Has at least one hidden layer with fewer neurons than the input and output layers, which themselves have the same number of neurons
- Data is effectively squeezed through a lower-dimensional representation
11. Non-linear principal components analysis
12. Auto-associative training
- Produce an auto-associative training set (inputs map to outputs)
- Create an auto-associative MLP:
  - 5 layers
  - The middle hidden layer has fewer units than the output layer
  - The other two hidden layers have a relatively large number of neurons; both should have the same number
- Train the network on the data set
- Delete the last two layers
- Collect the reduced-dimensionality input data; replace the original input data with it, retaining the original output variables
- Create a second neural network and train it on the reduced data set (see the sketch below).
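A minimal NumPy sketch of the recipe above: a 5-layer auto-associative MLP trained by plain gradient descent, after which the bottleneck activations serve as the reduced-dimensionality inputs for the second network. The 10-8-2-8-10 layer sizes, toy data, learning rate and epoch count are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 10-D inputs lying near a curved 2-D manifold (assumed stand-in).
t = rng.uniform(-1, 1, size=(500, 2))
X = np.tanh(t @ rng.normal(size=(2, 10)))
X -= X.mean(axis=0)

# 5-layer auto-associative MLP: 10 -> 8 -> 2 -> 8 -> 10
sizes = [10, 8, 2, 8, 10]
Ws = [rng.normal(scale=0.2, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def forward(X):
    acts = [X]
    for i, (W, b) in enumerate(zip(Ws, bs)):
        z = acts[-1] @ W + b
        # tanh on hidden layers, linear output layer
        acts.append(z if i == len(Ws) - 1 else np.tanh(z))
    return acts

lr = 0.05
for _ in range(2000):
    acts = forward(X)
    delta = (acts[-1] - X) / len(X)     # squared error; targets = inputs
    for i in reversed(range(len(Ws))):
        gW = acts[i].T @ delta
        gb = delta.sum(axis=0)
        if i > 0:
            delta = (delta @ Ws[i].T) * (1 - acts[i] ** 2)  # tanh derivative
        Ws[i] -= lr * gW
        bs[i] -= lr * gb

# "Delete the last two layers": the bottleneck activations are the
# reduced-dimensionality inputs for the second network.
X_reduced = forward(X)[2]               # 2-unit middle-layer activations
print(X_reduced.shape)                  # (500, 2)
```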
13. Use of genetic algorithms
- GAs are an optimisation technique which uses Darwin's concept of survival of the fittest to breed successively better strings according to an objective function
- In this problem, that function helps to determine subsets of inter-related bits (correlated or mutually required inputs)
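A minimal sketch of a GA for input-subset selection in the spirit of this slide: each individual is a bitstring marking which inputs to keep. The objective here scores a cheap linear fit on the selected inputs rather than a full network, and the data, population size and mutation rate are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y depends on 4 of 12 candidate inputs (assumed stand-in).
X = rng.normal(size=(300, 12))
y = X[:, [0, 3, 7, 9]] @ np.array([1.5, -2.0, 1.0, 0.5]) + 0.1 * rng.normal(size=300)

def fitness(mask):
    """Negative least-squares error on the selected inputs, lightly
    penalised by subset size (cheap proxy for network error)."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    err = np.mean((Xs @ coef - y) ** 2)
    return -(err + 0.01 * mask.sum())

pop = rng.random((40, 12)) < 0.5                 # population of bitstrings
for _ in range(60):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:20]] # truncation selection
    # uniform crossover followed by bit-flip mutation
    a = parents[rng.integers(0, 20, size=40)]
    b = parents[rng.integers(0, 20, size=40)]
    cross = rng.random((40, 12)) < 0.5
    pop = np.where(cross, a, b) ^ (rng.random((40, 12)) < 0.02)

best = pop[np.argmax([fitness(m) for m in pop])]
print(np.flatnonzero(best))                      # indices of selected inputs
```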
14. Sensitivity analysis
- The sensitivity of a particular variable is determined by running the network on a set of test cases and accumulating the network error.
- The network is then run on the same cases, but with a specific input variable removed (replaced by a substitute value), and the network error accumulated again.
- The sensitivity is the ratio of the error with missing-value substitution to the original error.
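A minimal sketch of the procedure, assuming mean substitution as the "missing value" replacement; the "trained network" here is a fixed random nonlinear function standing in for the real MLP:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a trained network: a fixed nonlinear function of 5 inputs.
W1, W2 = rng.normal(size=(5, 8)), rng.normal(size=(8, 1))
model = lambda X: np.tanh(X @ W1) @ W2

X_test = rng.normal(size=(200, 5))
y_test = model(X_test) + 0.05 * rng.normal(size=(200, 1))

base_err = np.mean((model(X_test) - y_test) ** 2)

# Sensitivity of each input: substitute its mean and compare the
# accumulated error with the baseline; ratios well above 1 mark
# inputs the network relies on.
for j in range(X_test.shape[1]):
    X_sub = X_test.copy()
    X_sub[:, j] = X_test[:, j].mean()
    err = np.mean((model(X_sub) - y_test) ** 2)
    print(f"input {j}: sensitivity ratio = {err / base_err:.2f}")
```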
15. Neural network weight regularisation
- Promotes low curvature of the modelled feature surface by encouraging small weights
- Adds an extra term to the error function which penalises gratuitously large weights
- Also prefers a mixture of large and small weights, rather than medium-sized weights
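One penalty with exactly this behaviour is the weight-elimination term of Weigend et al.; that MINICON used this precise form is an assumption. A sketch: each weight costs roughly (w/w0)^2 while small but saturates near 1 when large, so many medium weights cost more than a mix of near-zero and large ones:

```python
import numpy as np

def weight_elimination(weights, w0=1.0, lam=1e-3):
    """Penalty term added to the network error function."""
    r = np.concatenate([w.ravel() for w in weights]) ** 2 / w0 ** 2
    return lam * np.sum(r / (1.0 + r))   # saturates for very large weights

# Invented example: the mixed small/large set is cheaper than the medium one.
mixed = [np.array([0.05, 0.02, 3.0, -2.5])]
medium = [np.array([1.0, -1.0, 1.0, -1.0])]
print(weight_elimination(mixed), weight_elimination(medium))
```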
16. Neural networks used
- Multi-layer perceptrons (MLP)
- Radial basis function networks (RBF)
- Self-organising feature maps (Kohonen)
- Experiments were run on several different data sets, recorded over a number of time periods (one day, two days, one week)
17. Radial basis function networks
- Feature space is divided up using circles (hyperspheres in higher dimensions)
- Each is characterised by a centre and a radius
- The response surface is a Gaussian (bell-shaped curve)
18. Radial basis function networks
- An RBF network consists of a single hidden layer of radial units, each modelling a Gaussian response surface
- Training of RBFs:
  - The centres and deviations of the radial units are set
  - The linear output layer is then optimised
- Centres are assigned so as to reflect the clustering of the data
19. Radial basis function networks
- Centre assignment methods:
  - Sub-sampling: a random selection of training points is copied to the radial units
  - K-means algorithm: a set of points is selected and placed at the centres of clusters of training data.
20. Radial basis function networks
- Deviation assignment (determines how spiky the Gaussian functions are):
  - Explicit: chosen yourself
  - Isotropic: a heuristic using the number of centres and the volume of space they occupy
  - K-NN: each unit's deviation is individually set to the mean distance to its K nearest neighbours
    - Small in tightly packed areas (preserves detail)
    - Higher in sparse areas of the space
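Pulling slides 17-20 together, a minimal NumPy sketch of RBF training: k-means for the centres, the K-NN heuristic for the deviations, and a direct least-squares solve for the linear output layer. The toy 1-D data, 10 centres and K=2 neighbours are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D regression data (assumed stand-in for the monitoring signals).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Centre assignment: a bare-bones k-means on the training inputs.
k = 10
centres = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
    for j in range(k):
        if (labels == j).any():
            centres[j] = X[labels == j].mean(axis=0)

# Deviation assignment: each unit's sigma = mean distance to its K nearest
# neighbouring centres (small in dense regions, larger in sparse ones).
K = 2
d = np.sqrt(((centres[:, None, :] - centres[None]) ** 2).sum(-1))
d.sort(axis=1)
sigmas = d[:, 1:K + 1].mean(axis=1)     # skip column 0 (distance to itself)

# Linear output layer: optimised directly by least squares.
def design(X):
    d2 = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigmas ** 2))   # Gaussian radial responses

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
pred = design(X) @ w
print("training MSE:", np.mean((pred - y) ** 2))
```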
21. Artificial neuron and the Kohonen SOFM
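Since the SOFM shown on this slide is one of the two network types carried into the final product, here is a minimal sketch of Kohonen training: find the best-matching unit for each sample and pull it and its map neighbours towards the sample, shrinking the neighbourhood over time. The 6x6 map, decay schedules and stand-in data are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 5))          # assumed stand-in for sensor vectors

# 6x6 map of codebook vectors living in the 5-D input space.
rows, cols, dim = 6, 6, X.shape[1]
W = rng.normal(size=(rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)

n_iter = 5000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    # best-matching unit: node whose weights are closest to the sample
    bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (rows, cols))
    # neighbourhood width and learning rate both decay over time
    sigma = 3.0 * (0.05 / 3.0) ** (t / n_iter)
    lr = 0.5 * (0.01 / 0.5) ** (t / n_iter)
    h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
    W += lr * h[..., None] * (x - W)   # pull BMU and neighbours towards x

print(W.shape)   # trained map: each node holds a 5-D prototype
```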
22. Application to the MINICON project
- The picture shows the test elevator used at the KONE site; sensors are mounted at various points and the readings used as input to the neural networks
- Self-organising feature maps and multi-layer perceptrons were trained on a variety of elevator data.
23. Results
24. Results for genetic algorithm
25. Results for sensitivity analysis
26. Results of weight regularisation techniques
27. Results for auto-association
28. Best performing technique per data set
29. Final networks
- Two types of neural network are used in the final product:
  - Multi-layer perceptrons (with weight regularisation applied)
  - SOFMs
- Both networks require different numbers of inputs depending on the data set (5-15)
30. Alternative methods
- Use of statistical techniques:
  - Mean, kurtosis, standard deviation
- For example, the mean of one parameter shows a significant rise for data set number 5. Since all the other data sets show a consistent mean, this rise appears to be highly significant (see the sketch below).
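A minimal sketch of this kind of check, using SciPy for kurtosis; the six data sets are invented stand-ins, with a deliberate offset in set 5 to mimic the rise described above:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(5)
# Six assumed stand-in recordings of one vibration parameter.
datasets = [rng.normal(0.0, 1.0, 1000) for _ in range(6)]
datasets[4] += 0.8          # simulate the rise seen in data set 5

for i, d in enumerate(datasets, start=1):
    print(f"set {i}: mean={d.mean():+.2f}  std={d.std():.2f}  "
          f"kurtosis={kurtosis(d):+.2f}")
```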
31. Use of mean value
32. Conclusions
- Removing input data does not improve classification performance
- Statistical techniques are not consistent enough to make reliable estimates
- MLPs and SOFMs are the best-performing neural network techniques
- MLP performance can be improved by applying weight regularisation techniques.