Self Organizing Maps: Fundamentals
What is a Self Organizing Map?
So far we have looked at networks with supervised training techniques, in which there is a
target output for each input pattern, and the network learns to produce the required outputs.
We now turn to unsupervised training, in which the networks learn to form their own
classifications of the training data without external help. To do this we have to assume that
class membership is broadly defined by the input patterns sharing common features, and
that the network will be able to identify those features across the range of input patterns.
One particularly interesting class of unsupervised system is based on competitive learning,
in which the output neurons compete amongst themselves to be activated, with the result
that only one is activated at any one time. This activated neuron is called a winner-takes-all
neuron, or simply the winning neuron. Such competition can be implemented
by having lateral inhibition connections (negative feedback paths) between the neurons.
The result is that the neurons are forced to organise themselves. For obvious reasons, such
a network is called a Self Organizing Map (SOM).
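To make the competition concrete, here is a minimal sketch of winner-takes-all selection (Python and NumPy are my assumptions; the notes give no code): every output neuron scores the input by the distance between the input and its weight vector, and the single closest neuron wins.

```python
import numpy as np

def winning_neuron(x, weights):
    """Winner-takes-all competition: return the index of the output
    neuron whose weight vector lies closest (Euclidean) to input x.

    x       -- input vector, shape (d,)
    weights -- one weight vector per output neuron, shape (n, d)
    """
    distances = np.linalg.norm(weights - x, axis=1)  # each neuron's "score"
    return int(np.argmin(distances))                 # only one neuron wins
```

In a biological reading, the lateral inhibition mentioned above is what suppresses the activity of every neuron but this one; computationally, the argmin plays that role.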
Topographic Maps
Neurobiological studies indicate that different sensory inputs (motor, visual, auditory, etc.)
are mapped onto corresponding areas of the cerebral cortex in an orderly fashion.
This form of map, known as a topographic map, has two important properties:
1. At each stage of representation, or processing, each piece of incoming information is
kept in its proper context/neighbourhood.
2. Neurons dealing with closely related pieces of information are kept close together so
that they can interact via short synaptic connections.
Our interest is in building artificial topographic maps that learn through self-organization
in a neurobiologically inspired manner.
We shall follow the principle of topographic map formation: “The spatial location of an
output neuron in a topographic map corresponds to a particular domain or feature drawn
from the input space”.
Setting up a Self Organizing Map
The principal goal of an SOM is to transform an incoming signal pattern of arbitrary
dimension into a one or two dimensional discrete map, and to perform this transformation
adaptively in a topologically ordered fashion.
We therefore set up our SOM by placing neurons at the nodes of a one or two dimensional
lattice. Higher dimensional maps are also possible, but not so common.
The neurons become selectively tuned to various input patterns (stimuli) or classes of
input patterns during the course of the competitive learning.
The locations of the neurons so tuned (i.e. the winning neurons) become ordered and a
meaningful coordinate system for the input features is created on the lattice. The SOM
thus forms the required topographic map of the input patterns.
We can view this as a non-linear generalization of principal component analysis (PCA).
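As a rough illustration of this setup (again in Python/NumPy; the 10x10 lattice and 3-dimensional input are arbitrary choices of mine), placing neurons at the nodes of a two dimensional lattice amounts to fixing a grid of coordinates and giving each node a randomly initialized weight vector:

```python
import numpy as np

# Hypothetical setup: a 10x10 lattice of neurons for 3-dimensional inputs.
rows, cols, input_dim = 10, 10, 3

rng = np.random.default_rng(0)
# One weight vector per lattice node; shape (rows, cols, input_dim).
weights = rng.random((rows, cols, input_dim))

# Each node's fixed (row, col) position on the lattice; topological
# ordering is measured with distances on this grid, not in input space.
grid = np.array([[(r, c) for c in range(cols)] for r in range(rows)])
```

Note that each neuron then has two kinds of coordinates: its fixed position on the lattice, used to measure topological neighbourhood, and its movable weight vector in the input space, which learning adjusts.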
Ordering and Convergence
Provided the parameters (σ0, τσ, η0, τη) – the initial neighbourhood width and learning rate
together with their decay time constants – are selected properly, we can start from an initial
state of complete disorder, and the SOM algorithm will gradually lead to an organized
representation of activation patterns drawn from the input space. (However, it is possible
to end up in a metastable state in which the feature map has a topological defect.)
There are two identifiable phases of this adaptive process:
1. Ordering or self-organizing phase – during which the topological ordering of the
weight vectors takes place. Typically this will take as many as 1000 iterations of
the SOM algorithm, and careful consideration needs to be given to the choice of
neighbourhood and learning rate parameters.
2. Convergence phase – during which the feature map is fine tuned and comes to
provide an accurate statistical quantification of the input space. Typically the
number of iterations in this phase will be at least 500 times the number of neurons
in the network, and again the parameters must be chosen carefully (typical decay
schedules are sketched below).
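The careful parameter choices that both phases demand usually come down to decaying the neighbourhood width σ(t) and learning rate η(t) over time. A common choice is exponential decay with time constants τσ and τη; the sketch below assumes this schedule, and the specific values are illustrative, not from the notes.

```python
import numpy as np

def neighbourhood_width(t, sigma0=5.0, tau_sigma=1000.0):
    """σ(t) = σ0 · exp(−t / τσ): the neighbourhood shrinks so that early
    updates order the whole map and late updates only fine-tune it."""
    return sigma0 * np.exp(-t / tau_sigma)

def learning_rate(t, eta0=0.1, tau_eta=1000.0):
    """η(t) = η0 · exp(−t / τη): the step size decays so the convergence
    phase makes ever smaller refinements."""
    return eta0 * np.exp(-t / tau_eta)

# Hypothetical values: σ0=5, η0=0.1, both time constants 1000 iterations,
# roughly matching the ~1000-iteration ordering phase described above.
```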
Overview of the SOM Algorithm
We have a spatially continuous input space, in which our input vectors live. The aim is
to map from this to a low dimensional spatially discrete output space, the topology of
which is formed by arranging a set of neurons in a grid. Our SOM provides such a non-linear
transformation, called a feature map.
The stages of the SOM algorithm can be summarised as follows:
1. Initialization – Choose random values for the initial weight vectors wj.
2. Sampling – Draw a sample training input vector x from the input space.
3. Matching – Find the winning neuron I(x), the neuron whose weight vector is closest to the input vector.
4. Updating – Apply the weight update equation Δwji = η(t) Tj,I(x)(t) (xi − wji), where η(t) is the learning rate and Tj,I(x)(t) is the topological neighbourhood function centred on the winning neuron I(x).
5. Continuation – Keep returning to step 2 until the feature map stops changing.
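Putting the five stages together, a compact training loop might read as follows. This is a sketch under stated assumptions (Python/NumPy, a Gaussian neighbourhood function, the exponential decay schedules above, and a fixed iteration budget in place of a convergence test); the notes do not commit to these details.

```python
import numpy as np

def train_som(data, rows=10, cols=10, n_iters=10_000,
              sigma0=5.0, tau_sigma=1000.0, eta0=0.1, tau_eta=1000.0):
    """Train a SOM on data of shape (n_samples, input_dim); return the
    lattice of weight vectors, shape (rows, cols, input_dim)."""
    rng = np.random.default_rng(0)
    # 1. Initialization - random weight vectors at every lattice node.
    weights = rng.random((rows, cols, data.shape[1]))
    grid = np.array([[(r, c) for c in range(cols)] for r in range(rows)], float)

    for t in range(n_iters):  # 5. Continuation - fixed budget stands in
        x = data[rng.integers(len(data))]            # 2. Sampling
        # 3. Matching - the winner I(x) has the closest weight vector.
        dists = np.linalg.norm(weights - x, axis=2)
        winner = np.unravel_index(np.argmin(dists), dists.shape)

        sigma = sigma0 * np.exp(-t / tau_sigma)      # shrinking width
        eta = eta0 * np.exp(-t / tau_eta)            # shrinking rate
        # Gaussian neighbourhood T over lattice distance to the winner.
        d2 = ((grid - grid[winner]) ** 2).sum(axis=2)
        T = np.exp(-d2 / (2 * sigma**2))

        # 4. Updating - Δwji = η(t) Tj,I(x)(t) (xi − wji), vectorized.
        weights += eta * T[..., None] * (x - weights)
    return weights

# Illustrative usage on random 3-D inputs:
# som = train_som(np.random.default_rng(1).random((500, 3)))
```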
Next lecture we shall explore the properties of the resulting feature map and look at some
simple examples of its application.