08-02-2013, 03:27 PM
REVIEW PAPER ON MODELS OF ARTIFICIAL NEURON
Abstract:
Artificial neural networks (ANNs) are nonlinear mapping structures based on the function of the human brain. They are powerful tools for modeling, especially when the underlying data relationship is unknown. ANNs can identify and learn correlated patterns between input data sets and corresponding target values. After training, they can be used to predict the outcome of new independent input data. This paper discusses three models of the artificial neuron: the McCulloch-Pitts model, the ADALINE model, and the Perceptron model.
INTRODUCTION
An artificial neural network is a system based on the operation of biological neural networks. A major aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms; yet despite being an apparently complex system, a neural network is relatively simple. [1] An artificial neural network is developed so that it optimizes a criterion commonly called a learning rule. The input/output training data are fundamental for these networks, as they convey the information the network learns from. Artificial neural networks are also called connectionist networks, neurocomputers, or parallel distributed processors. [5] When an element of the neural network fails, the network can continue without any problem because of its parallel nature. [7]
The McCulloch-Pitts Model of Neuron
The earliest model of an artificial neuron was introduced by Warren McCulloch and Walter Pitts in 1943. The McCulloch-Pitts neural model is also known as the linear threshold gate. It is a neuron with a set of inputs I1, I2, ..., IM and one output y. The linear threshold gate simply classifies the set of inputs into two different classes, [2] so the output y is binary. Mathematically, the neuron computes the weighted sum of its inputs and outputs 1 if that sum reaches a threshold T, and 0 otherwise.
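As a minimal sketch of the linear threshold gate described above (the weights and threshold in the example are illustrative choices, not values from the original paper):

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts linear threshold gate: output 1 if the
    weighted sum of the inputs reaches the threshold, else 0."""
    total = sum(w * i for w, i in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Example: a two-input AND gate (weights 1, 1 and threshold 2 chosen by hand)
print(mcp_neuron([1, 1], [1, 1], 2))  # -> 1
print(mcp_neuron([1, 0], [1, 1], 2))  # -> 0
```

With suitable hand-picked weights and thresholds, such gates can realize simple logic functions like AND and OR, which is how the model classifies inputs into two classes.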
The Adaline Model
An important generalisation of the perceptron training algorithm was presented by Widrow and Hoff as the 'least mean square' (LMS) learning procedure, also known as the delta rule. The main functional difference from the perceptron training rule is the way the output of the system is used in the learning rule. The perceptron rule uses the output of the threshold function (either -1 or +1) for learning. The delta rule uses the net output without further mapping into the output values -1 or +1. The learning rule was applied to the 'adaptive linear element', also named Adaline, developed by Widrow and Hoff (Widrow & Hoff, 1960). [3]
PERCEPTRON MODEL
Perceptrons are the simplest architecture to learn when studying neural networks. Picture a perceptron as a node of a wide, interconnected neural network, somewhat like a data tree, although the neural network does not necessarily have top and bottom sections. The connections among the nodes not only show the relationships between the nodes but also transmit data and information, called a signal or impulse. The perceptron is a simple model of a neuron. It is a computational model of the retina of the eye, and hence is named 'perceptron'. The network comprises photodetectors or sensory units. Since connecting perceptrons into a neural structure is a bit complicated, let us take a single perceptron by itself.
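The post does not spell out the perceptron training rule, but as noted in the Adaline section it learns from the thresholded ±1 output. A minimal sketch under that assumption (learning rate, epochs, and toy data are illustrative choices):

```python
def step(net):
    """Threshold function mapping the net input to -1 or +1."""
    return 1 if net >= 0 else -1

def train_perceptron(samples, targets, lr=0.1, epochs=20):
    """Perceptron rule sketch: unlike the delta rule, the update
    uses the thresholded output (+1/-1), not the raw net output,
    and only fires on misclassified samples."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, d in zip(samples, targets):
            y = step(sum(wi * xi for wi, xi in zip(w, x)))
            if y != d:  # update only on misclassification
                w = [wi + lr * (d - y) * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy problem: the class is the sign of the first
# input; the last component of each sample is a constant 1 (bias input).
data = [[2.0, 1.0], [1.0, 1.0], [-1.0, 1.0], [-2.0, 1.0]]
labels = [1, 1, -1, -1]
w = train_perceptron(data, labels)
```

For linearly separable data like this, the perceptron convergence theorem guarantees the rule finds weights that classify every training sample correctly.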