A Learning Algorithm for Self-Organizing Maps Based on a Low-Pass Filter Scheme
Abstract—In this work a novel training algorithm is proposed
for the formation of topology-preserving maps. In the proposed
algorithm the weights are updated incrementally by means of a
higher-order difference equation, which implements a low-pass
digital filter. It is shown that, by suitably choosing the filter, the
learning process can adaptively follow specific dynamics.
Keywords—self-organizing maps; incremental learning;
digital filters.
I. INTRODUCTION
The self-organizing map (SOM), with its variants, is one
of the most popular neural network algorithms in the
unsupervised learning category [1]. The topology-preserving
mapping from a high-dimensional space to a low-dimensional
grid formed by the SOM finds a very wide range of
applications [2]. The basic SOM consists of an ordered grid
of a finite number of cells $i \in \{1, \dots, D\}$ located in a
fixed, low-dimensional output array, where a distance metric
$d(c,i)$ is defined. Each unit is associated with a model vector
(or weight) $\mathbf{q}_i \in \mathbb{R}^n$ that lives in the high-dimensional
space of the input patterns $\mathbf{r} \in \Xi \subset \mathbb{R}^n$, where $\Xi$ is the dataset to
be analyzed. After training, the distribution of the model
vectors resembles a vector quantization of the input data,
with the important additional feature that the model vectors
tend to assume, in the input space, the topological arrangement
of the corresponding cells in the predefined output grid. The
incremental learning algorithm of the SOM follows a
competitive learning paradigm, where the matching of the
models with the input is increased by moving all the model
vectors in a radial direction with respect to the input sample:
$$\mathbf{q}_i(t+1) = \mathbf{q}_i(t) + h_{ci}(t)\left[\mathbf{r}(t) - \mathbf{q}_i(t)\right], \qquad i \in \{1, \dots, D\} \qquad (1)$$
The update step for cell $i$ is proportional to a time-dependent
smoothing kernel function $h_{ci}(t)$, where $c$ is the
winner node:
$$c = \arg\min_i \left\| \mathbf{r}(t) - \mathbf{q}_i(t) \right\| \qquad (2)$$
A widely applied kernel is written in terms of the
Gaussian function centered over the winner node:
$$h_{ci}(t) = \alpha(t) \exp\!\left( -\frac{d^2(c,i)}{2\sigma^2(t)} \right) \qquad (3)$$
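As a concrete reference, (1)-(3) fit in a few lines of NumPy. This is a minimal sketch, not code from the paper; the function name som_step and the array layout of Q and grid are assumptions made for illustration.

```python
import numpy as np

def som_step(Q, r, grid, alpha, sigma):
    """One incremental SOM update following eqs. (1)-(3).

    Q     : (D, n) array of model vectors q_i
    r     : (n,) input sample r(t)
    grid  : (D, k) coordinates of the cells in the fixed output array
    alpha : learning rate alpha(t)
    sigma : kernel width sigma(t)
    """
    # Eq. (2): the winner c is the cell whose model best matches the input.
    c = np.argmin(np.linalg.norm(r - Q, axis=1))
    # d(c, i): distance metric defined on the output grid.
    d = np.linalg.norm(grid - grid[c], axis=1)
    # Eq. (3): Gaussian smoothing kernel centered on the winner node.
    h = alpha * np.exp(-d**2 / (2.0 * sigma**2))
    # Eq. (1): every model vector moves radially towards the input sample.
    Q += h[:, None] * (r - Q)
    return Q, c
```

Note that the kernel only scales the step length; the direction of each update is fixed along $\mathbf{r}(t) - \mathbf{q}_i(t)$, which is precisely the limitation discussed below.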
The scalar function $h_{ci}(t)$ has a central role in the
learning process defined by (1), and a large part of the
literature on SOMs is focused on the theoretical and practical
aspects of the different possible implementations of this
kernel function [3]-[6]. The common feature of all these SOM
variants is that the structure of (1) is maintained almost
unchanged and the focus is on the function $h_{ci}(t)$, which
only controls the length of the update step, whereas the
direction is always radial with respect to the input sample.
This behaviour has theoretical support only for a static kernel
width, a condition reached in the final convergence phase of
the learning [1]. However, during the first phase, when the
width of the kernel is shrinking and the global topology of
the input data is learned, a smoother change of the update
direction than the one forced by (1) would be a desirable
feature.
To obtain this feature, in this paper a novel SOM model
is proposed where the learning is accomplished by means of
a higher-order linear difference equation that implements a
predefined tracking filter. In the proposed model each cell
contains a set of memory vectors in order to keep track of the
past positions of the model vectors and of the inputs. The
update direction is defined by the dynamics of a properly
defined filter that guides the movement of the model vectors
as a combination of the past vectors. In this way, the
proposed model gives a general framework for defining
different training strategies, of which the basic SOM is a
particular case.
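Since this excerpt does not reproduce the paper's difference equation, the following is only a hedged sketch of the general idea: the kernel-weighted error is passed through an assumed IIR low-pass filter whose output, a combination of past errors and past update directions, replaces the purely radial step of (1). The class name FilteredSOMCell and the coefficient vectors b and a are illustrative assumptions, not the paper's notation.

```python
import numpy as np
from collections import deque

class FilteredSOMCell:
    """One SOM cell whose update direction is produced by an IIR
    low-pass filter acting on past errors and past update vectors.
    The specific difference equation is an assumption for illustration;
    the paper's exact filter is not given in this excerpt."""

    def __init__(self, q0, b, a=()):
        self.q = np.asarray(q0, dtype=float)   # model vector q_i
        self.b = b                             # feed-forward coefficients
        self.a = a                             # feedback coefficients
        # Memory vectors: past kernel-weighted errors and past updates.
        self.e_hist = deque([np.zeros_like(self.q)] * len(b), maxlen=len(b))
        self.v_hist = deque([np.zeros_like(self.q)] * len(a), maxlen=len(a)) if a else None

    def update(self, r, h):
        """r: input sample r(t); h: kernel value h_ci(t) for this cell."""
        # Kernel-weighted error, the radial term of eq. (1).
        self.e_hist.appendleft(h * (r - self.q))
        # Higher-order linear difference equation (a low-pass filter):
        #   v(t) = sum_k b_k e(t-k) + sum_k a_k v(t-1-k)
        v = sum(bk * ek for bk, ek in zip(self.b, self.e_hist))
        if self.v_hist is not None:
            v = v + sum(ak * vk for ak, vk in zip(self.a, self.v_hist))
            self.v_hist.appendleft(v)
        # The model vector moves along the filtered direction.
        self.q += v
        return self.q
```

With b = (alpha,) and no feedback coefficients the filter is memoryless and the rule collapses to the basic update (1), consistent with the claim that the basic SOM is a particular case of the proposed framework.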