30-06-2012, 12:07 PM
SHORT-TERM LOAD FORECASTING USING ARTIFICIAL NEURAL NETWORK TECHNIQUES
WHAT ARE ANNs?
Work on artificial neural networks has been motivated, right from its inception, by the recognition that the human brain computes in an entirely different way from the conventional digital computer. The brain is a highly complex, nonlinear and parallel information-processing system. It has the capability to organize its structural constituents, known as neurons, so as to perform certain computations many times faster than the fastest digital computer in existence today. The brain routinely accomplishes perceptual recognition tasks, e.g. recognizing a familiar face embedded in an unfamiliar scene, in approximately 100-200 ms, whereas tasks of much lesser complexity may take days on a conventional computer.
WHY DO WE USE NEURAL NETWORKS?
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse. This expert can then be used to provide projections given new situations of interest and answer "what if" questions.
HISTORY OF ANN
Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and it has survived at least one major setback across several eras. Many important advances have been boosted by the use of inexpensive computer emulations. Following an initial period of enthusiasm, the field endured a period of frustration and disrepute. During this period, when funding and professional support were minimal, important advances were made by relatively few researchers.
BENEFITS OF ANN
1. They are extremely powerful computational devices.
2. Massive parallelism makes them very efficient.
3. They can learn and generalize from training data – so there is no need for enormous feats of programming.
4. They are particularly fault tolerant – this is equivalent to the “graceful degradation” found in biological systems.
5. They are very noise tolerant – so they can cope with situations where normal symbolic systems would have difficulty.
6. In principle, they can do anything a symbolic/logic system can do, and more.
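Benefit 3 above – learning a mapping directly from training data rather than programming it explicitly – can be illustrated with a minimal sketch. The code below is not from the paper; it is a toy example under stated assumptions: a synthetic sinusoidal "load" series, a single hidden layer of tanh units, and plain gradient descent with hand-coded backpropagation. Real short-term load forecasters would use richer inputs (weather, day type) and a proper training library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly "load" curve (assumption: a noisy 24-hour sinusoid).
# Task: predict the next hour's load from the previous two hours.
t = np.arange(200)
load = np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(200)
X = np.column_stack([load[:-2], load[1:-1]])   # inputs: two past hours
y = load[2:].reshape(-1, 1)                    # target: next hour

# One hidden layer (8 tanh units), linear output.
W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05
for epoch in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    mse = float(np.mean(err ** 2))
    # Backpropagation: gradients of the mean squared error.
    g_pred = 2 * err / len(X)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)       # tanh derivative
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0)
    # Gradient-descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"final training MSE: {mse:.4f}")
```

The network is never told the sinusoidal formula; it acquires the input-to-output mapping purely from examples, which is the sense in which "there is no need for enormous feats of programming."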