15-12-2012, 01:04 PM
BASIC CONCEPTS OF ARTIFICIAL NEURAL NETWORK
INTRODUCTION
An artificial neural network (ANN), usually called "neural network" (NN), is a mathematical model or computational model that tries to simulate the structure and/or functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. Neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.
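As a minimal sketch of the idea, each artificial neuron computes a weighted sum of its inputs and passes the result through a non-linear activation function; the weights (chosen arbitrarily here for illustration) are what the network adapts during learning:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed by a sigmoid activation, one common non-linear choice
    return 1.0 / (1.0 + math.exp(-z))

# Two inputs feeding a single neuron; weights and bias are illustrative values
output = neuron([0.5, 0.8], [0.4, -0.2], 0.1)
```

A network is simply many such neurons wired together, with the output of one layer serving as the input to the next.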
HISTORICAL BACKGROUND
Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived at least one major setback and several eras.
Following an initial period of enthusiasm, the field survived a period of frustration and disrepute. During this period, when funding and professional support were minimal, important advances were made by relatively few researchers. These pioneers were able to develop convincing technology which surpassed the limitations identified by Minsky and Papert, who had published a book in 1969 summing up a general feeling of frustration with neural networks among researchers; that assessment was accepted by most without further analysis. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding.
The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts. But the technology available at that time did not allow them to do much more.
The history of neural networks that was described above can be divided into several periods:
First Attempts: There were some initial simulations using formal logic. McCulloch and Pitts (1943) developed models of neural networks based on their understanding of neurology. These models made several assumptions about how neurons worked. Another attempt used computer simulations, by two groups (Farley and Clark, 1954; Rochester, Holland, Haibt and Duda, 1956). The latter group, of IBM researchers, maintained close contact with neuroscientists at McGill University.
BACK-PROPAGATION ALGORITHM
Backpropagation, or propagation of error, is a common method of teaching artificial neural networks how to perform a given task. It was first described by Paul Werbos in 1974, but it wasn't until 1986, through the work of David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams, that it gained recognition, and it led to a “renaissance” in the field of artificial neural network research.
It is a supervised learning method, and is an implementation of the Delta rule. It requires a teacher that knows, or can calculate, the desired output for any given input. It is most useful for feed-forward networks (networks that have no feedback, or simply, that have no connections that loop). The term is an abbreviation for "backwards propagation of errors". Backpropagation requires that the activation function used by the artificial neurons (or "nodes") is differentiable.
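As a rough sketch of how this works in practice, consider a hand-rolled 2-2-1 network with sigmoid units trained on XOR (the starting weights below are illustrative values, not any standard initialisation). The teacher's error at the output is converted into a delta there and then propagated backwards to assign deltas to the hidden nodes:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR training set: not linearly separable, so a hidden layer is needed
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# 2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output
w_h = [[0.5, 0.9], [-0.4, 0.6]]   # hidden-layer weights (illustrative)
b_h = [0.1, -0.3]                 # hidden-layer biases
w_o = [0.8, -0.7]                 # output-layer weights
b_o = 0.0
lr = 0.5                          # learning rate

def forward(x):
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
    return h, sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

error_before = total_error()

for epoch in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Output delta: error times the derivative of the sigmoid
        d_o = (t - y) * y * (1 - y)
        # Hidden deltas: the output delta propagated backwards through w_o
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent weight updates (the delta rule at each layer)
        for j in range(2):
            w_o[j] += lr * d_o * h[j]
            w_h[j][0] += lr * d_h[j] * x[0]
            w_h[j][1] += lr * d_h[j] * x[1]
            b_h[j] += lr * d_h[j]
        b_o += lr * d_o

error_after = total_error()
```

With these settings the total squared error over the four patterns decreases as training proceeds; a library implementation would vectorise this, but the delta computations above are the whole algorithm.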
APPLICATION OF ARTIFICIAL NEURAL NETWORK
The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.
The tasks to which artificial neural networks are applied tend to fall within the following broad categories:
• Function approximation, or regression analysis, including time series prediction, fitness approximation and modeling.
• Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
• Data processing, including filtering, clustering, blind source separation and compression.
• Robotics, including directing manipulators and computer numerical control.
• Optical character recognition
• Stock market prediction
• Creating new art forms
• Modeling human behavior
• Loan risk analysis
• Sales forecasting
Application of ANN in exergetic analysis of gas turbines
The ANN method is applied to exergetic analyses of gas turbines (GTs) using actual operating data from three GTs. These three GTs supply heat and power in a cogeneration system of a ceramic factory located in Izmir, Turkey.
After determining which GT inputs are needed, the exergy values obtained from exergy analysis are compared with those obtained from the ANN method. Cross tests are also applied in these comparisons: for example, an ANN is trained on data from the first GT, and the exergy results computed with this trained network are compared against the actual results of the second GT. All of the exergetic results for the GTs are compared and presented graphically.
As a result of the analysis, the ANN is successfully applied to obtain the exergetic results of the GTs. These are presented graphically, including the input, output, fuel and product exergies and the exergy destruction of the GTs. RMSE (root mean square error) values are found to be under 0.01, which suggests that a data set covering the inputs and outputs of many GTs would allow an ANN to produce even closer exergetic results.
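For reference, RMSE as used here is simply the square root of the mean squared difference between the analytically computed exergy values and the ANN's predictions. A sketch with made-up numbers (not the study's data):

```python
import math

# Hypothetical exergy values: analytical results vs. ANN predictions
# (illustrative numbers only, not the study's data)
actual    = [0.82, 0.75, 0.91, 0.68]
predicted = [0.81, 0.76, 0.90, 0.69]

# Root mean square error over the paired values
rmse = math.sqrt(
    sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
)
```

Here every prediction is off by 0.01, so the RMSE is 0.01; the study's sub-0.01 values indicate an even tighter fit.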
Application of Artificial Neural Network for Flood Forecasting
In hydrology, as in a number of diverse fields, there has been increasing use of Artificial Neural Networks (ANNs) as simplified black-box models. This is mainly justified by their ability to model complex non-linear patterns; in addition, they can self-adjust and produce a consistent response when 'trained' using observed outputs.
A 7-hour-ahead forecast in particular proves to be of fairly high precision, especially when an error prediction technique is introduced to the ANN models.
The temporal and spatial variability that characterises a river system makes flow forecasting a very demanding task. Flow forecasting is a crucial part of flow regulation and water resources management, as it is related to issues such as drought prevention, flood forecasting for dam and human safety, and ecosystem sustainability.
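The exact model inputs used in such studies are not detailed here, but flow forecasting with an ANN is typically framed as supervised learning on lagged observations: the network sees the most recent hourly flows and is trained to predict the flow several hours ahead (seven hours for the forecast mentioned above). A minimal sketch of that windowing, with hypothetical flow values:

```python
def make_lagged_dataset(series, n_lags, horizon):
    """Turn a flow time series into (inputs, target) pairs:
    the inputs are the n_lags most recent observations, and the
    target is the value `horizon` steps ahead."""
    pairs = []
    for t in range(n_lags, len(series) - horizon + 1):
        x = series[t - n_lags:t]      # window of past flows
        y = series[t + horizon - 1]   # flow `horizon` hours ahead
        pairs.append((x, y))
    return pairs

# Hypothetical hourly flow readings (illustrative values only)
flows = [10, 12, 15, 14, 13, 16, 20, 22]
pairs = make_lagged_dataset(flows, n_lags=3, horizon=2)
```

Each resulting pair can then be fed to an ANN as one training example; a 7-hour-ahead model would simply use `horizon=7` over a longer record.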
THE FUTURE OF NEURAL NETWORKS
We have only begun to scratch the surface in the development and implementation of neural networks in commercial applications. It is projected that there will be a lot of development in this area in the years to come. This is largely because neural networks are a very marketable technology: they are flexible, easy to integrate into a system, adapt to the data and can classify it in numerous ways under extreme conditions.
Developments are already in place to create hardware to make neural nets faster and more efficient. And though many dream of one day perfecting neural nets to create a truly amazing AI system, it is important to remember where the development has taken us, the lessons that have been learned and the barriers that have been overcome to get here.
All current NN technologies will most likely be vastly improved upon in the future. Everything from handwriting and speech recognition to stock market prediction will become more sophisticated as researchers develop better training methods and network architectures.
CONCLUSION
The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore there is no need to devise an algorithm in order to perform a specific task; i.e. there is no need to understand the internal mechanisms of that task. They are also very well suited for real time systems because of their fast response and computational times which are due to their parallel architecture.
Neural networks also contribute to other areas of research such as neurology and psychology. They are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain.
Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.