03-11-2012, 11:40 AM
FAULT-TOLERANT AND BAYESIAN APPROACHES TO SELF-ORGANIZING
NEURAL NETWORKS
ABSTRACT
A self-organizing map (SOM) is a type of artificial neural network that is trained using
unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized
representation of the input space of the training samples, called a map. The map seeks to
preserve the topological properties of the input space.
This makes SOMs useful for visualizing low-dimensional views of high-dimensional data, akin to
multidimensional scaling. Like most artificial neural networks, SOMs operate in two modes:
training and mapping. Training builds the map from input examples; it is a competitive process,
also called vector quantization. Mapping automatically classifies a new input vector.
NETWORK STRUCTURE
A self-organizing map consists of components called nodes or neurons. Associated with each
node is a weight vector of the same dimension as the input data vectors and a position in the map
space. The usual arrangement of nodes is a regular spacing in a hexagonal or rectangular grid.
The self-organizing map describes a mapping from a higher-dimensional input space to a lower-dimensional
map space. To place a vector from data space onto the map, find the node whose
weight vector is closest to that vector and assign that node's map coordinates to it.
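The placement step above (find the closest weight vector, return that node's map coordinates) can be sketched in a few lines. This is a minimal illustration, not any particular library's API; the dict-of-grid-coordinates layout and the function name are assumptions made for the example.

```python
import math

def best_matching_unit(weights, x):
    """Return the grid position of the node whose weight vector is
    closest (in Euclidean distance) to the input vector x.

    `weights` maps grid coordinates (i, j) to weight vectors (lists
    of floats) -- a hypothetical layout chosen for this sketch."""
    def dist(w):
        return math.sqrt(sum((wi - xi) ** 2 for wi, xi in zip(w, x)))
    return min(weights, key=lambda pos: dist(weights[pos]))

# A tiny 2x2 map with 2-dimensional weight vectors:
weights = {(0, 0): [0.0, 0.0], (0, 1): [0.0, 1.0],
           (1, 0): [1.0, 0.0], (1, 1): [1.0, 1.0]}
print(best_matching_unit(weights, [0.9, 0.8]))  # -> (1, 1)
```

The node returned here is usually called the best-matching unit (BMU); its map coordinates become the low-dimensional representation of the input.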
While it is typical to consider this type of network structure as related to feedforward networks,
where the nodes are visualized as being attached, this architecture is fundamentally
different in arrangement and motivation.
Useful extensions include using toroidal grids where opposite edges are connected and using
large numbers of nodes. It has been shown that while self-organizing maps with a small number
of nodes behave in a way that is similar to K-means, larger self-organizing maps rearrange data
in a way that is fundamentally topological in character.
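On a toroidal grid, the distance between two nodes must account for the connected opposite edges: along each axis, the distance is the shorter of the direct path and the wrap-around path. A minimal sketch (function name and Manhattan metric are assumptions for illustration):

```python
def toroidal_distance(a, b, rows, cols):
    """Grid distance between nodes a=(i, j) and b=(k, l) on a toroidal
    map: each axis wraps around, so take the shorter of the direct and
    wrap-around distances per axis, then sum (Manhattan metric)."""
    di = min(abs(a[0] - b[0]), rows - abs(a[0] - b[0]))
    dj = min(abs(a[1] - b[1]), cols - abs(a[1] - b[1]))
    return di + dj

# On a 10x10 toroidal grid, opposite corners are near neighbours:
print(toroidal_distance((0, 0), (9, 9), 10, 10))  # -> 2
```

Using such a wrapped distance in the neighborhood function removes the edge effects that a flat rectangular grid introduces.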
LEARNING ALGORITHM
The goal of learning in the self-organizing map is to cause different parts of the network to
respond similarly to certain input patterns. This is partly motivated by how visual, auditory or
other sensory information is handled in separate parts of the cerebral cortex in the human brain.
The weights of the neurons are initialized either to small random values or sampled evenly from
the subspace spanned by the two largest principal component eigenvectors. With the latter
alternative, learning is much faster because the initial weights already give a good approximation
of the final SOM weights.
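The first initialization option (small random values) can be sketched as follows; the function name and the ±0.1 range are assumptions for illustration. The faster alternative described above would instead place the initial weights on a regular grid in the plane of the two largest principal components.

```python
import random

def init_weights(rows, cols, dim, seed=0):
    """Initialize each node of a rows x cols map with a weight vector
    of `dim` small random values (the simple initialization option;
    PCA-based initialization would replace the random draw)."""
    rng = random.Random(seed)
    return {(i, j): [rng.uniform(-0.1, 0.1) for _ in range(dim)]
            for i in range(rows) for j in range(cols)}

# A 3x3 map whose nodes carry 4-dimensional weight vectors:
weights = init_weights(3, 3, 4)
```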
The network must be fed a large number of example vectors that represent, as closely as possible,
the kinds of vectors expected during mapping. The examples are usually presented several
times.
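Putting the pieces together, one common form of the training loop looks like the sketch below: for each example, find the best-matching unit, then pull every node's weights toward the example, scaled by a Gaussian neighborhood around that unit and by a learning rate, with both shrinking over time. This is a hedged illustration of the general competitive-learning scheme, not any specific library's implementation; the function name, decay schedules, and default parameters are assumptions.

```python
import math
import random

def train_som(weights, data, epochs=100, alpha0=0.5, sigma0=2.0):
    """Train a SOM in place.  `weights` maps grid coordinates (i, j)
    to weight vectors; `data` is a list of example vectors, which are
    presented to the network once per epoch (i.e. several times)."""
    for t in range(epochs):
        alpha = alpha0 * (1 - t / epochs)           # learning rate decays
        sigma = sigma0 * (1 - t / epochs) + 1e-3    # neighborhood shrinks
        for x in data:
            # Competitive step: node with the closest weight vector wins.
            bmu = min(weights, key=lambda p: sum(
                (w - xi) ** 2 for w, xi in zip(weights[p], x)))
            # Cooperative step: neighbors of the winner move too.
            for pos, w in weights.items():
                g2 = (pos[0] - bmu[0]) ** 2 + (pos[1] - bmu[1]) ** 2
                h = math.exp(-g2 / (2 * sigma ** 2))  # Gaussian neighborhood
                for k in range(len(w)):
                    w[k] += alpha * h * (x[k] - w[k])
    return weights

# Example: a 3x3 map trained on two well-separated points.
random.seed(0)
weights = {(i, j): [random.uniform(0, 1) for _ in range(2)]
           for i in range(3) for j in range(3)}
train_som(weights, [[0.0, 0.0], [1.0, 1.0]])
```

Because the neighborhood shrinks over the epochs, nearby map nodes end up responding to similar inputs, which is exactly the topology-preserving behavior described at the start of this section.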