
Artificial Intelligence: Neural Networks Simplified


Abstract

Now standing as a major paradigm for data-mining applications, Neural Networks have been
widely used in many fields owing to their ability to capture complex patterns present in data.
Acting as feature extractors, Neural Networks extrapolate past patterns into future ones,
relieving the burden of designing complex detection algorithms for pattern-recognition tasks
such as face detection or fingerprint recognition. The development of Neural Networks has been
so rapid that they are now referred to as the sixth generation of computing.

Introduction

Considered a subset of Artificial Intelligence, a NN is essentially a computer program designed
to learn in a manner similar to the human brain. In biological neurons, axons, which carry an
electrical signal and are found only on output cells, terminate at synapses that connect them to
the dendrites of other neurons. Throughout this paper, NN refers to artificial neural networks
(ANN) and not biological ones. NN resemble the human brain on two grounds: first, they acquire
knowledge through a learning process, and second, interneuron connection strengths (weights)
are used to store the acquired knowledge. Perhaps the most widely agreed-upon definition of NN
is that they constitute nonparametric regression models which require no a priori assumptions
about the problem, allowing the data to speak for itself. Ironically, the origin of NN is not
directly linked to any estimation or forecasting exercise, since the researchers who pioneered
the field were primarily attempting to gain insight into the learning abilities of the human
brain. However, as NN displayed interesting learning capacity, interest spread to other fields.
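The two grounds above (learning from data, and storing knowledge in connection strengths) can be sketched with the classic perceptron learning rule. This is a minimal illustration, not a procedure taken from this paper; all function names are illustrative, and the task (learning the logical AND function) is chosen only for brevity.

```python
# Minimal sketch of the learning idea: knowledge is stored in connection
# strengths (weights), which are adjusted from data. A single perceptron
# learns the logical AND function. Names here are illustrative.

def perceptron_train(samples, lr=0.1, epochs=20):
    """Train a single perceptron with a hard-threshold activation."""
    w = [0.0, 0.0]   # interneuron connection strengths (weights)
    b = 0.0          # bias term (shifts the firing threshold)
    for _ in range(epochs):
        for x, target in samples:
            # weighted sum of inputs, then a hard threshold
            s = w[0] * x[0] + w[1] * x[1] + b
            y = 1 if s > 0 else 0
            err = target - y
            # learning step: nudge the weights toward the correct answer
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def perceptron_predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# AND truth table: fires only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
print([perceptron_predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

After training, the learned behaviour lives entirely in the weights `w` and bias `b`, exactly as the text describes knowledge being stored in connection strengths.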

Features of the Brain-Stylized facts: Neuron and Perceptron

Before gaining insight into NN, it is helpful to review the basic components of the brain,
known as neurons. A neuron is a minute processor that receives, processes, and sends data to
the next layer of the model. The brain is composed of about 100 billion neurons of many
distinct types. Neurons are grouped together into an intricate network and work by transmitting
electrical impulses. The reaction of a neuron upon receiving an impulse depends on the
intensity of the impulses received together with the neuron's degree of sensitivity towards the
neurons that dispatched them. Two operations are performed inside an artificial neuron: the
first computes the weighted sum of all the inputs, while the second converts the output of that
summation according to a certain threshold. A biological neuron has four main parts: the cell
body, which is the site where impulses are processed and generated; the dendrites, which act as
signal receivers, accepting signals from outside or from other neurons; the axon, which carries
the generated impulse away from the cell body; and the synapses, through which the signal
passes to the dendrites of other neurons.
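The two operations inside an artificial neuron can be written out directly: a weighted sum of the inputs, followed by a conversion through a threshold-like activation. The sketch below is illustrative (the function name and the sigmoid choice are assumptions, not taken from this paper); the hard-threshold branch matches the threshold description in the text.

```python
import math

# Sketch of the two operations inside an artificial neuron:
#   (1) the weighted sum of all inputs (plus a bias),
#   (2) conversion of that sum through an activation / threshold.
# The function name and parameter choices are illustrative.

def neuron_output(inputs, weights, bias, activation="sigmoid"):
    # Operation 1: weighted sum of all inputs
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Operation 2: squash the sum through an activation function
    if activation == "sigmoid":
        return 1.0 / (1.0 + math.exp(-s))
    # hard threshold: fire (output 1) only if the sum exceeds zero
    return 1 if s > 0 else 0

# Weighted sum here is 1.0*0.5 + 2.0*(-0.25) + 0.0 = 0, and sigmoid(0) = 0.5
print(neuron_output([1.0, 2.0], [0.5, -0.25], 0.0))  # -> 0.5
```

The hard threshold corresponds to the original perceptron-style neuron; smooth activations such as the sigmoid are what later allow gradient-based training.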

Network Architecture

The simplest form of NN consists of only two layers, the input and output layers (no hidden
layer is present). This is sometimes referred to as a skip-layer network, which essentially
amounts to conventional linear regression cast in a NN design: the input layer is connected
directly to the output layer, bypassing any hidden layer. Like any other network, this simplest
form of NN relies on weights as the connections between the inputs and the output, each weight
representing the relative importance of a specific input in the computation of the output. It
is important to bear in mind that the output generated depends heavily on the type of
activation function used. However, because the hidden layer is what confers strong learning
ability on a NN, practical applications use architectures of three or more layers. This is
shown in figure 1. It is vital to distinguish between two classes of weights in a NN: those
that connect the inputs to the hidden layer, and those that connect the hidden layer to the
output layer. In a parallel manner, there are two classes of activation functions, one in the
hidden layer and one in the output layer.
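The three-layer architecture just described, with its two classes of weights and two activation functions, can be sketched as a single forward pass. All weight values, names, and the choice of a sigmoid hidden layer with a linear output are illustrative assumptions, not taken from the paper's figure 1.

```python
import math

# Sketch of a three-layer NN forward pass (input -> hidden -> output),
# showing the two classes of weights and two activation functions
# mentioned in the text. All values and names are illustrative.

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(x, w_in_hidden, b_hidden, w_hidden_out, b_out):
    # First class of weights: inputs -> hidden layer, sigmoid activation
    hidden = [sigmoid(sum(xi * w for xi, w in zip(x, row)) + b)
              for row, b in zip(w_in_hidden, b_hidden)]
    # Second class of weights: hidden -> output, linear activation here
    return sum(h * w for h, w in zip(hidden, w_hidden_out)) + b_out

# A tiny network: 2 inputs, 3 hidden units, 1 output
w_in_hidden = [[0.5, -0.5], [1.0, 1.0], [-0.3, 0.8]]  # one row per hidden unit
b_hidden = [0.0, -1.0, 0.2]
w_hidden_out = [0.7, -0.4, 1.1]
b_out = 0.05

y = forward([1.0, 2.0], w_in_hidden, b_hidden, w_hidden_out, b_out)
print(round(y, 4))  # -> 0.8613
```

Setting the hidden-layer activation to the identity (and removing the hidden layer entirely) collapses this to the skip-layer linear regression described at the start of the section.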

Conclusion

This paper provided a simplified approach to Neural Networks along with explanations of their
different components. Neural Networks have gained so much ground that they are now termed the
sixth generation of computing. As a matter of fact, Neural Networks have been applied in many
fields such as science, finance, credit risk, economics and econometrics. Their ability to
learn and their flexibility render them a powerful tool, though the black-box problem reduces
their usefulness. Nonetheless, the predictive power of NN cannot be denied, and this keeps them
among the best forecasting tools, for practitioners and central bankers around the world alike.