FUZZY ENGINEERING EXPERT SYSTEMS WITH NEURAL NETWORK APPLICATIONS
ARTIFICIAL INTELLIGENCE
The background of artificial intelligence (AI) has been characterized by controversial
opinions and diverse approaches. Despite the controversies, which
have ranged from the basic definition of intelligence to questions about the
moral and ethical aspects of pursuing AI, the technology continues to generate
practical results. With increasing efforts in AI research, many of the prevailing
arguments are being resolved with proven technical approaches. Expert systems,
the main subject of this book, are among the most promising branches of AI.
‘‘Artificial intelligence’’ is a controversial name for a technology that promises
much potential for improving human productivity. The phrase seems to
challenge human pride in being the sole creation capable of possessing real
intelligence. All kinds of anecdotal jokes about AI have been offered by
casual observers. A speaker once recounted his wife’s response when he told
her that he was venturing into the new technology of artificial intelligence.
‘‘Thank God, you’re finally realizing how dumb I’ve been saying you were
all these years,’’ was alleged to have been the wife’s words of encouragement.
One whimsical definition of AI is ‘‘Artificial Insemination of knowledge into
a machine.’’ Despite the derisive remarks, serious embracers of AI may yet
have the last laugh. It is being shown again and again that AI may hold the
key to improving operational effectiveness in many areas of applications.
Some observers have suggested changing the term artificial intelligence to a
less controversial one, such as intelligent applications (IA), which refers more
to the way that computers and software are used innovatively to solve complex
decision problems.
ORIGIN OF ARTIFICIAL INTELLIGENCE
The definition of intelligence has been sought by many great philosophers
and mathematicians over the ages, including Aristotle, Plato, Copernicus, and Galileo. They attempted to explain the process of thought and understanding.
The real key that started the quest for the simulation of intelligence did not
occur, however, until the English philosopher Thomas Hobbes put forth an
interesting concept in the 1650s. Hobbes believed that thinking consists of
symbolic operations and that everything in life can be represented mathematically.
These beliefs led directly to the notion that a machine capable of
carrying out mathematical operations on symbols could imitate human thinking.
This is the basic driving force behind the AI effort. For that reason
Hobbes is sometimes referred to as the grandfather of artificial intelligence.
THE FIRST AI CONFERENCE
The summer of 1956 saw the first attempt to establish the field of machine
intelligence into an organized effort. The Dartmouth Summer Conference,
organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and
Claude Shannon, brought together people whose work and interest formally
founded the field of AI. The conference, held at Dartmouth College in New
Hampshire, was funded by a grant from the Rockefeller Foundation. It was
at that conference that John McCarthy coined the term ‘‘artificial intelligence.’’
This was the same John McCarthy who developed the LISP programming
language, which has become a standard tool for AI development. In
attendance at the meeting, in addition to the organizers, were Herbert Simon,
Allen Newell, Arthur Samuel, Trenchard More, Oliver Selfridge, and Ray
Solomonoff.
EVOLUTION OF SMART PROGRAMS
The next major step in software technology came from Newell, Shaw, and
Simon in 1959. The program they introduced was called General Problem
Solver (GPS). GPS was intended to be a program that could solve many types
of problems. It was capable of solving theorems, playing chess, or doing
various complex puzzles. GPS was a significant step forward in AI. It incorporated
several new ideas to facilitate problem solving. The nucleus of the
system was the use of means-end analysis, which involves comparing a present
state with a goal state. The difference between the two states is determined
and a search is done to find a method to reduce this difference. This process
is continued until there is no difference between the current state and the goal
state.
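The means-end loop described above can be sketched in a few lines of Python. This is only an illustrative reconstruction, not code from the original GPS program: the set-of-facts state representation, the "tea" domain, and the operator names are all hypothetical assumptions introduced here.

```python
# Minimal sketch of means-end analysis: compare the current state with
# the goal, pick an operator whose effects reduce the difference, and
# subgoal on that operator's preconditions. Illustrative only; the
# domain and representation are assumptions, not from GPS itself.

def achieve(state, goal, operators, plan):
    """Extend `plan` until every fact in `goal` holds in `state`."""
    while not goal <= state:
        difference = goal - state        # facts still missing
        # Search for an operator whose effects reduce the difference.
        chosen = next((op for op in operators if op[2] & difference), None)
        if chosen is None:
            return None                  # no operator closes the gap
        name, preconds, effects = chosen
        # Subgoal: first make the operator's preconditions true.
        state = achieve(state, preconds, operators, plan)
        if state is None:
            return None
        state = state | effects          # apply the operator
        plan.append(name)
    return state

def means_end_analysis(start, goal, operators):
    plan = []
    ok = achieve(set(start), set(goal), operators, plan)
    return plan if ok is not None else None

# Hypothetical toy domain: each operator is (name, preconditions, effects).
operators = [
    ("boil water", {"have kettle"},           {"hot water"}),
    ("steep tea",  {"hot water", "have tea"}, {"tea ready"}),
]
plan = means_end_analysis({"have kettle", "have tea"}, {"tea ready"}, operators)
# plan is ["boil water", "steep tea"]
```

Note that this naive version has no loop detection, so it can recurse forever on unreachable subgoals; the real GPS was considerably more elaborate.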
NEURAL NETWORKS
Neural networks, sometimes called connectionist systems, are networks of
simple processing elements or nodes capable of processing information in
response to external inputs. Neural networks were originally presented as
models of the human nervous system. Just after World War II, scientists observed
that information processing in the brain resembled the electronic processing
used by computers. In both cases, large amounts of data are manipulated.
In the case of computers, the elementary unit of processing is the bit,
which is in either an ‘‘on’’ or ‘‘off’’ state. In the case of the brain, neurons
perform the basic data processing. Neurons are tiny cells that follow a binary
principle of being either in a state of firing (on) or not firing (off). When a
neuron is on, it fires a signal to other neurons across a network of synapses.
In the late 1940s Donald Hebb, a researcher, hypothesized that biological
memory results when two neurons are active simultaneously. The synaptic
connection of synchronous neurons is reinforced and given preference over
connections made by neurons that are not active simultaneously. The level of
preference is measured as a weighted value. Pattern recognition, a major
strength of human intelligence, is based on the weighted strengths of the
reinforced connections between various pairs of simultaneously active neurons.
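Hebb's rule of reinforcing connections between simultaneously active neurons can be sketched directly. The learning rate, the binary activation vectors, and the weight-matrix layout below are illustrative assumptions, not values from Hebb's work.

```python
# Minimal sketch of Hebbian reinforcement, assuming binary (firing /
# not firing) activations as described above. Rate and data are
# illustrative assumptions.

def hebbian_update(weights, activations, rate=0.1):
    """Strengthen w[i][j] whenever neurons i and j fire together."""
    n = len(activations)
    for i in range(n):
        for j in range(n):
            if i != j and activations[i] == 1 and activations[j] == 1:
                weights[i][j] += rate   # reinforce the synchronous pair
    return weights

# Three neurons; neurons 0 and 1 fire together, neuron 2 stays off.
w = [[0.0] * 3 for _ in range(3)]
for _ in range(5):                      # five synchronous firings
    w = hebbian_update(w, [1, 1, 0])
# w[0][1] and w[1][0] grow with each synchronous firing (to about 0.5),
# while every weight involving the inactive neuron 2 stays at 0.0.
```

The weighted preference described in the text falls out directly: frequently co-active pairs accumulate large weights, so their connections dominate later pattern recognition.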
EMBEDDED EXPERT SYSTEMS
More expert systems are beginning to show up, not as stand-alone systems,
but as software applications in large software systems. This trend is bound to
continue as systems integration takes hold in many software applications.
Many conventional commercial packages, such as statistical analysis systems,
data management systems, information management systems, project management
systems, and data analysis systems, now contain embedded heuristics
that constitute expert systems components of the packages. Even some computer
operating systems now contain embedded expert systems designed to
provide real-time systems monitoring and troubleshooting. With the success
of embedded expert systems, the long-awaited payoffs from the technology
are now beginning to be realized.