EVOLVABLE BINARY ARTIFICIAL NEURAL NETWORK FOR DATA CLASSIFICATION
ABSTRACT
This paper describes a new evolvable hardware organization and its learning algorithm for generating binary logic artificial neural networks based on mutual information and statistical analysis. First, thresholds for converting the analog signals of the training data into digital signals are established. To extract feature functions for multidimensional data classification, conditional entropy is calculated to obtain the maximum information in each subspace. Next, dynamic shrinking and expansion rules are developed to build the feed-forward neural networks. Finally, hardware mapping of the learned patterns and on-board testing are implemented on a Xilinx FPGA board.
KEYWORDS
Conditional entropy, binary logic artificial neural
network, dynamic function generation
1. INTRODUCTION
Artificial neural networks (ANN) have attracted the attention of many researchers in different areas, such as neuroscience, mathematics, physics, electrical and computer engineering, and psychology. Generally, such systems consist of a large number of simple neuron processing units that perform computation through a dense mesh of nodes and connections. In addition, they have the very attractive properties of adaptiveness, self-organization, nonlinear network processing and parallel processing. Much effort has been devoted to applications involving classification, association, decision-making and reasoning.
Recently, evolutionary algorithms have been suggested by many researchers as a way to find well-performing architectures for artificial neural networks; these algorithms employ the evolutionary dynamics of reproduction, mutation, competition, and selection [1][2]. Evolutionary algorithms such as genetic algorithms, evolutionary programming, and evolution strategies are well suited to the task of evolving ANN architectures [3]. Much work has been done on combining evolutionary computation techniques with neural networks [4][5][6][7][8][9][10]. The most challenging part is determining the link between the structure and the functionality of an ANN so that the composition of a network can be optimized using evolutionary methods. Popular methods such as genetic algorithms that use parse trees to construct neural networks depend greatly on the quality of their approaches to evolving the interconnection weights [11]. In this paper, an efficient method for constructing a dynamically evolvable artificial neural network is proposed. It is based on mutual information calculations, which are used to threshold the input data, to organize logic blocks and their interconnections, and to select the logic blocks that are most informative about the data distribution. In addition, expansion and shrinking algorithms are formulated to link the multiple layers of the binary feed-forward neural network.
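To make the thresholding idea concrete, the sketch below is a minimal Python illustration of one way to do it (our own simplification, not the authors' Matlab implementation): for a single analog feature it scans candidate thresholds and keeps the one whose binary output carries the most mutual information about the class labels. All names (mutual_information, best_threshold) and the example data are illustrative.

    import numpy as np

    def mutual_information(bits, labels):
        """Mutual information I(B; C) between a binary feature B and class labels C."""
        mi = 0.0
        for b in (0, 1):
            p_b = np.mean(bits == b)
            for c in np.unique(labels):
                p_c = np.mean(labels == c)
                p_bc = np.mean((bits == b) & (labels == c))
                if p_bc > 0:
                    mi += p_bc * np.log2(p_bc / (p_b * p_c))
        return mi

    def best_threshold(values, labels, n_candidates=50):
        """Scan candidate thresholds and keep the most informative binary split."""
        candidates = np.linspace(values.min(), values.max(), n_candidates)
        scores = [mutual_information((values > t).astype(int), labels)
                  for t in candidates]
        return candidates[int(np.argmax(scores))]

    # Example: one analog feature drawn from two classes (illustrative data only).
    values = np.concatenate([np.random.normal(0, 1, 100), np.random.normal(3, 1, 100)])
    labels = np.array([0] * 100 + [1] * 100)
    t = best_threshold(values, labels)  # threshold lands near the class boundary

In the learning procedure described above, such thresholds reduce each analog input of the training data to a binary code before the logic network itself is built.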
With the rapid advance of VLSI microelectronics technology, large computational resources have become available on a single chip. Reconfigurable computing and FPGA technology make fast, massively parallel computation more affordable, and neural networks with architectures adapted to a specific task can be designed easily.
Since most signals in the real world are analog, many researchers have developed analog neural networks [12][13][14]. In such systems, however, the matching between the simulated analog neuron model and the hardware implementation is critical. Other problems such as noise, crosstalk, temperature variation, and power supply instability also limit system performance. Moreover, programmability is hard to achieve in analog neural network design. As a result, many designers turn to digital logic as an alternative. The flexibility and accuracy of digital systems, together with mature commercial CAD tools, greatly reduce the time and effort required of design engineers. Moreover, the recent rapid development of FPGA technology has created a revolution in logic design. With dramatic advances in device performance and density, combined with powerful development tools, programmable logic provides a completely new way of designing and developing systems. In addition, the hardware description language VHDL has become increasingly popular for logic design, and complete tool chains exist for VHDL code compilation, functional verification, synthesis, place and route, timing verification and bit-file downloading. These greatly facilitate FPGA logic system design and also make it possible to implement an artificial neural network on board.
In our design, a Matlab program generates the logic structures based on the learning procedure. VHDL code is then written and simulated to map the evolved structures onto programmable hardware. Finally, the learned results are implemented on the FPGA board. A highly parallel structure is developed to achieve a fast neural network for large data set classification. The class characteristic codes generated during the learning procedure are stored on board, so that comparators can easily judge the success or failure of the test data. Moreover, a finite state machine structure facilitates the selection and display of the test results for the different classes. Real-time testing of the implemented structures on the Xilinx board is successful, with a high correct classification rate.
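As a rough software analogue of this on-board comparison step (the actual design is VHDL on the FPGA, and the codes below are made up for illustration), the following Python sketch stores one characteristic code per class and reports which class, if any, a test pattern's binary code matches.

    from typing import Optional

    # Hypothetical software model of the on-board comparators: each class has a
    # stored characteristic code (values below are made up for illustration).
    CLASS_CODES = {
        "class_0": "1010",
        "class_1": "0110",
    }

    def classify(test_code: str) -> Optional[str]:
        """Return the class whose stored code matches the test pattern, or None."""
        for label, code in CLASS_CODES.items():
            if test_code == code:
                return label
        return None

    print(classify("0110"))  # -> class_1
    print(classify("1111"))  # -> None, i.e. classification failure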
In this paper, section 2 covers the theoretical justification and the major algorithm, describing our learning algorithm and the hardware mapping procedure for an evolvable multilayer neural network for data classification. A thresholding rule for constructing digital representations of analog signal characteristics was developed based on mutual information, so that digital logic can be used to build a binary logic artificial neural network. FPGA structures are then evolved using entropy measures, statistical analysis and dynamic interconnections between logic gates. A complete design example is demonstrated in section 3. This includes data generation, threshold establishment, dynamic neural network generation, and hardware implementation of the evolvable binary neural network. Finally, the conclusion and references are given at the end of this paper.
2. LEARNING STRATEGY
Multidimensional data sets can represent many real-world problems from astronomy, electrical engineering, remote sensing or medicine, so the classification and clustering of these data sets is meaningful. To test our approach, we generated random data class sets for the learning and testing procedures to simulate real-world problems. Each real-valued datum represents one analog signal. We then construct threshold surfaces that separate the data sets in the multidimensional space and also yield binary codes for all signals. The further division of the space into subspaces, which sharpens the classification of these binary codes, is the core of the developed learning algorithm. In order to keep the maximum information in each subspace, the selection of input functions for a layer of perceptrons is performed dynamically based on conditional entropy analysis. At the same time, the structure of each layer obeys the expansion and shrinking rule. Many layers can then be cascaded, with the outputs of one layer connected to the inputs of the next, to form a feed-forward network. The decisions on how each space is divided can be represented in a table, where each row corresponds to the feature code that classifies one class data set.
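The following Python sketch illustrates, under our own simplifications, the conditional entropy criterion used to select informative functions: it computes H(C | B), the uncertainty about the class labels that remains after observing a candidate binary output, and keeps the candidate that minimizes it. Function names are illustrative, not taken from the paper.

    import numpy as np

    def conditional_entropy(bits, labels):
        """H(C | B): class uncertainty remaining after observing a binary output B."""
        h = 0.0
        for b in (0, 1):
            mask = (bits == b)
            p_b = np.mean(mask)
            if p_b == 0:
                continue
            for c in np.unique(labels):
                p_c_given_b = np.mean(labels[mask] == c)
                if p_c_given_b > 0:
                    h -= p_b * p_c_given_b * np.log2(p_c_given_b)
        return h

    def pick_most_informative(candidate_outputs, labels):
        """Keep the candidate binary function whose output minimizes H(C | B)."""
        scores = [conditional_entropy(bits, labels) for bits in candidate_outputs]
        return int(np.argmin(scores))

Minimizing H(C | B) is equivalent to maximizing the mutual information I(B; C) = H(C) - H(C | B), so this selection keeps the subspace division that retains the most information about the class labels, consistent with the rule stated above.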
2.1 Data Preparation
In order to simulate real-world signals, we generate multi-dimensional random variables drawn from normal distributions. The mean vector and covariance matrix are different for each class data set. Half of the data in each class is selected for the learning procedure, and the other half is held out for testing. When projected onto two dimensions, these data sets can overlap. Moreover, each class can have a different elliptical shape, with its major axis in a different direction.
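A minimal Python sketch of this data preparation is given below, with two 2-D Gaussian classes whose mean vectors and covariance matrices are purely illustrative (the paper does not give specific values).

    import numpy as np

    rng = np.random.default_rng(0)

    def make_class(mean, cov, n=200):
        """Draw n samples from a multivariate normal with the given mean/covariance."""
        return rng.multivariate_normal(mean, cov, size=n)

    # Two overlapping elliptical classes with differently oriented major axes
    # (the numbers are illustrative, not taken from the paper).
    class_a = make_class([0.0, 0.0], [[2.0, 1.2], [1.2, 1.0]])
    class_b = make_class([2.5, 1.0], [[1.0, -0.6], [-0.6, 2.0]])

    # Half of each class for the learning procedure, the other half held out for testing.
    learn_a, test_a = class_a[:100], class_a[100:]
    learn_b, test_b = class_b[:100], class_b[100:]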