
Adapting to Increasing Data Availability using Multi-Layered Self-Organising Maps
Abstract—Often in clustering scenarios, the data analyst does not have access to a complete data set at the outset, and new data dimensions may only become available at some later time. In this case it is useful to be able to cluster the available data and have some mechanism for incorporating new dimensions as they become available, without having to recluster all the data from scratch (which may not be feasible in on-line learning scenarios). This paper utilises an established mechanism for interconnecting multiple Self-Organising Maps to achieve this aim, and reveals a useful way of visualising the effect of individual dimensions on the structure of clusters.
Index Terms—Self-Organising Maps, Adaptive Learning.
I. INTRODUCTION
This paper is concerned with a type of neural network architecture and algorithm called a Self-Organising Map (SOM), which is a firmly established data mining technique for unsupervised clustering. SOMs owe much of their initial development to the work of Teuvo Kohonen [1], whose research is in turn based upon a model of the visual cortex of higher vertebrates developed by Von der Malsburg in 1973 [2].
The SOM algorithm maps input vectors to a lower-dimensional lattice of neurons. After repeatedly exposing a SOM network to input stimuli, the SOM adjusts its internal state variables (synaptic weightings) so that similar input vectors map closer together on the output lattice; gradually, this builds a lower-dimensional, granular representation of the input data. The purpose of SOMs is essentially to perform a type of topological vector quantisation of the input source, with the aim of reducing the dimensionality of the input data while preserving as much as possible of the spatial relationships within the data.
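As a concrete illustration of this quantisation step, the following minimal sketch maps an input vector to its best-matching unit (BMU) on a 2-D lattice. The sizes, the random initialisation, and the Euclidean distance metric are assumptions for illustration, not details taken from the paper:

import numpy as np

# Hypothetical sizes: l-dimensional inputs, a 10x10 output lattice (p = 2).
l, rows, cols = 4, 10, 10
rng = np.random.default_rng(0)

# One l-dimensional weight vector per lattice neuron.
weights = rng.random((rows, cols, l))

def best_matching_unit(x, weights):
    """Return the lattice coordinates of the neuron whose weight
    vector is closest (Euclidean distance) to the input vector x."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

x = rng.random(l)
bmu = best_matching_unit(x, weights)  # lattice coordinates of the winner

Mapping every input to its BMU in this way is what quantises the l-dimensional input space onto the coarser 2-D lattice.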
The architecture of the SOM consists of an l-dimensional array of input neurons which are fully connected to a p-dimensional lattice of neurons (p is typically 1, 2, or 3, as this facilitates easy visualisation). The architecture of the SOM (with p = 2) is illustrated in Fig. 1.
For each lattice neuron there is a corresponding l-dimensional weight vector. The SOM output layer is essentially a winner-takes-all layer, whereby the output neuron whose weight vector is closest to the input vector is activated and the activations of all other lattice neurons are suppressed. The original SOM architecture is trained by gradually aligning the weight vector of the winning neuron (and, to a lesser degree, those of neighbouring neurons) to lie closer to the input vectors as they are presented to the network.
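Building on the BMU sketch above, the following is a minimal sketch of one such update step. The Gaussian neighbourhood function and the fixed learning rate are assumptions; the excerpt above does not specify either:

def train_step(x, weights, lr=0.1, sigma=1.5):
    """Pull the winner's weight vector (and, to a lesser degree,
    those of its lattice neighbours) towards the input vector x."""
    bmu = best_matching_unit(x, weights)
    rows, cols, _ = weights.shape
    # Squared lattice distance of every neuron from the winner.
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
    # Gaussian neighbourhood: 1 at the winner, decaying with lattice distance.
    h = np.exp(-d2 / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)
    return weights

In typical SOM implementations both lr and sigma are decayed over successive presentations, so that the map first organises globally and then fine-tunes locally.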