Systolic array/Support vector machine classifier design



1 Introduction

Systolic architectures represent a network of processing elements (PEs) that rhythmically compute and pass data through the system. These PEs regularly pump data in and out so that a steady flow of data is maintained. As a result, systolic systems feature modularity and regularity, which are important properties for VLSI design.
The systolic architecture was invented by Kung and Leiserson (1978). Ever since Kung proposed the systolic model, its elegant solutions to demanding problems and its potential performance have attracted great attention. In physiology, the term systolic describes the contraction (systole) of the heart, which regularly sends blood to all cells of the body through the arteries, veins, and capillaries. Analogously, systolic computer processes perform operations in a rhythmic, incremental, cellular, and repetitive manner [13].
Typically, all PEs in a systolic array are uniform and fully pipelined, i.e., all communicating edges among the PEs contain delay elements, and the whole system usually contains only local interconnections. However, some relaxations have been introduced to increase the utility of systolic arrays. These relaxations include the use of not only local but also neighbor interconnections, the use of data broadcast operations, and the use of different PEs in the system, especially at the boundaries.
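As a concrete illustration of these ideas (a minimal sketch, not taken from this report), the Python fragment below simulates a linear systolic array in a "results stay, inputs move" style: each PE holds one row of a matrix and a local accumulator, the input vector is pumped in one element per clock tick, and every PE forwards the value it latched on the previous tick to its right-hand neighbour. All names here are illustrative only.

import numpy as np

class PE:
    """One processing element: holds a row of weights and a local accumulator;
    forwards each input value to the next PE with a one-cycle delay."""
    def __init__(self, weights):
        self.weights = weights   # local copy of one matrix row
        self.acc = 0.0           # partial result stays inside this PE
        self.seen = 0            # how many inputs have arrived so far
        self.out_reg = None      # output register modelling the pipeline delay

    def step(self, x_in):
        """One clock tick: latch x_in, do one multiply-accumulate,
        and emit the value latched on the previous tick."""
        x_out = self.out_reg
        self.out_reg = x_in
        if x_in is not None:
            self.acc += self.weights[self.seen] * x_in
            self.seen += 1
        return x_out

def systolic_matvec(A, x):
    """Simulate a linear systolic array computing y = A @ x.
    PE i holds row i of A; inputs enter PE 0 one per cycle and ripple right."""
    n, m = A.shape
    pes = [PE(A[i]) for i in range(n)]
    for x_in in list(x) + [None] * n:   # trailing bubbles drain the pipeline
        moving = x_in
        for pe in pes:                  # one clock tick across the whole array
            moving = pe.step(moving)
    return np.array([pe.acc for pe in pes])

if __name__ == "__main__":
    A = np.arange(12.0).reshape(3, 4)
    x = np.array([1.0, 2.0, 3.0, 4.0])
    print(systolic_matvec(A, x))        # agrees with A @ x
    print(A @ x)

Each result stays in its PE while the data stream moves, so the whole computation uses only local, nearest-neighbour communication, which is exactly the property that makes systolic designs attractive for VLSI.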

1 Linear discriminant analysis

There are many possible techniques for the classification of data. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two commonly used techniques for data classification and dimensionality reduction. Linear Discriminant Analysis easily handles the case where the within-class frequencies are unequal, and its performance has been examined on randomly generated test data. The method maximizes the ratio of between-class variance to within-class variance in any particular data set, thereby guaranteeing maximal separability. Here, Linear Discriminant Analysis is applied to a classification problem in speech recognition. We decided to implement an algorithm for LDA in the hope of providing better classification than Principal Components Analysis [15].
The prime difference between LDA and PCA is that PCA does more of a feature classification, while LDA does data classification. In PCA, the shape and location of the original data sets change when transformed to a different space, whereas LDA does not change the location but only tries to provide more class separability and draw a decision region between the given classes. This method also helps to better understand the distribution of the feature data. Figure 2.1 will be used as an example to explain and illustrate the theory of LDA.

2 MATHEMATICAL OPERATIONS

In this section, the mathematical operations involved in using LDA are analyzed with the aid of the sample set in Figure 1. For ease of understanding, the concept is applied to a two-class problem. Each data set has 100 2-D data points. Note that the mathematical formulation of this classification strategy parallels the Matlab implementation associated with this work.
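As an illustrative sketch (not the Matlab implementation referenced above), the Python fragment below performs two-class Fisher LDA on synthetic data shaped like the example: 100 2-D points per class. It builds the within-class scatter matrix, computes the projection direction that maximizes the ratio of between-class to within-class variance, and classifies by thresholding the projected values. The class means, spreads, and random seed are assumptions chosen only for the demonstration.

import numpy as np

def fisher_lda_direction(X1, X2):
    """Two-class Fisher LDA: return the projection vector w that maximizes
    between-class variance relative to within-class variance."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: sum of the scatter matrices of both classes.
    S1 = (X1 - m1).T @ (X1 - m1)
    S2 = (X2 - m2).T @ (X2 - m2)
    Sw = S1 + S2
    # Optimal direction: w = Sw^(-1) (m1 - m2), up to scale.
    w = np.linalg.solve(Sw, m1 - m2)
    return w, m1, m2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic 2-D classes with 100 points each, as in the text.
    X1 = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(100, 2))
    X2 = rng.normal(loc=[2.5, 2.0], scale=0.8, size=(100, 2))
    w, m1, m2 = fisher_lda_direction(X1, X2)
    # Classify by projecting onto w and thresholding at the projected midpoint.
    threshold = w @ (m1 + m2) / 2.0
    labels = np.where(np.vstack([X1, X2]) @ w > threshold, 1, 2)
    truth = np.r_[np.ones(100), 2 * np.ones(100)]
    print("projection direction:", w)
    print("training accuracy:", np.mean(labels == truth))

Note how the projection only rotates and scales the data onto a single discriminant axis; the location of the classes is preserved, which is the contrast with PCA drawn above.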

1 What is a Neural Network?

An Artificial Neural Network (ANN) is an information processing paradigm that is
inspired by the way biological nervous systems, such as the brain, process information. The key
element of this paradigm is the novel structure of the information processing system. It is
composed of a large number of highly interconnected processing elements (neurones) working in
unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured
for a specific application, such as pattern recognition or data classification, through a learning
process. Learning in biological systems involves adjustments to the synaptic connections that
exist between the neurones. This is true of ANNs as well.
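To make the "learning by adjusting connections" idea concrete, the sketch below (an illustration, not part of the report) trains a single artificial neuron with the classic perceptron rule: whenever the output disagrees with an example, the connection weights are nudged toward the correct answer. The learning rate, epoch count, and the AND-function training set are assumptions.

import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Single artificial neuron trained by adjusting its connection weights,
    mirroring the 'learning by example' idea described above."""
    rng = np.random.default_rng(1)
    w = 0.1 * rng.normal(size=X.shape[1])  # connection weights
    b = 0.0                                # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = target - pred
            w += lr * err * xi             # synaptic adjustment
            b += lr * err
    return w, b

if __name__ == "__main__":
    # Learn the logical AND function from examples.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])
    w, b = train_perceptron(X, y)
    print([(1 if xi @ w + b > 0 else 0) for xi in X])  # expected [0, 0, 0, 1]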

5 Network layers

The commonest type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units (see Figure 3.1).
- The activity of the input units represents the raw information that is fed into the network.
- The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.
- The behaviour of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
This simple type of network is interesting because the hidden units are free to construct their own representations of the input. The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents. We also distinguish single-layer and multi-layer architectures. The single-layer organisation, in which all units are connected to one another, constitutes the most general case and is of more potential computational power than hierarchically structured multi-layer organisations. In multi-layer networks, units are often numbered by layer, instead of following a global numbering.
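A forward pass through such a three-layer network can be written down directly from the description above. The sketch below is illustrative only; the layer sizes, random weights, and the sigmoid activation are assumptions, since the text does not fix them.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x, W_ih, W_ho):
    """One forward pass through the three-layer network described above:
    input activities -> hidden activities -> output activities."""
    hidden = sigmoid(W_ih @ x)       # hidden activity from inputs and input-hidden weights
    output = sigmoid(W_ho @ hidden)  # output activity from hidden units and hidden-output weights
    return hidden, output

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 4, 3, 2            # illustrative layer sizes
    W_ih = rng.normal(size=(n_hidden, n_in))   # input-to-hidden weights
    W_ho = rng.normal(size=(n_out, n_hidden))  # hidden-to-output weights
    x = rng.normal(size=n_in)                  # raw input activities
    hidden, output = forward_pass(x, W_ih, W_ho)
    print("hidden activities:", hidden)
    print("output activities:", output)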





CONCLUSION

To design an efficient classifier in a systolic array, I have studied different classifiers. Among all of them, I found the support vector machine classifier to be the most efficient (for the reasons mentioned above), and it is therefore the classifier I am going to design in a systolic array. Two attempts have already been made to design the same, but they are still not time, space, and power efficient. In my next step I will concentrate on removing these problems in the design.