Why Artificial Intelligence is Very Hard
Theo Pavlidis
Distinguished Professor Emeritus
Stony Brook University
t.pavlidis[at]ieee.org
http://theopavlidis.com
What is Artificial Intelligence?
A machine that replicates the functionality of the human brain (General or Strong AI). “Around the Corner” since about 1945.
A machine that does a specific task that has traditionally been done by humans (Narrow or Weak AI). Each specific application is treated as an engineering problem. Numerous successes.
Successes in Narrow AI (Seen in daily life)
Restricted Speech Recognition (in Banking and Airline reservation systems, etc)
Credit Card Fraud Detection
Web Tools (Shopping Suggestions, Mechanical Translation, etc)
Simple Robots (Roomba)
1D and 2D Bar Codes (in stores and in shipping)
Successes in Narrow AI (Not Seen Everyday)
Chess Playing Machines
Optical Character Recognition
Industrial Inspection
Biometrics (Fingerprints, Iris, etc)
Medical Diagnosis
Restricted Speech Recognition
Grammar-driven models (using low-level context) have been quite successful.
High-level context is even better: for example, matching a speech fragment to a name on a list (see the sketch below).
Successful applications include Airline reservation systems and Call Center monitoring.
See a demonstration of using voice for web search at http://www.youtube.com/watch?v=npRtTdGeWQA. The system is a product of Nuance Open Voice Search and it relies on personalization.
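The Python sketch below illustrates the matching idea: when the recognizer only has to decide which entry of a short, known vocabulary a noisy fragment corresponds to, even a crude string-similarity score goes a long way. The destination list and the acceptance threshold are invented for the illustration, not taken from any real system.

# High-level context in restricted speech recognition: instead of
# transcribing arbitrary speech, decide which entry of a known list a
# noisy fragment is closest to. List and threshold are invented.
from difflib import SequenceMatcher

DESTINATIONS = ["Boston", "Austin", "Houston", "Newark", "New York"]

def best_match(fragment, candidates, min_score=0.6):
    # Return the candidate most similar to the recognized fragment,
    # or None if nothing on the list is close enough.
    scored = [(SequenceMatcher(None, fragment.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, name = max(scored)
    return name if score >= min_score else None

# A garbled transcription still resolves against the short list.
print(best_match("bostin", DESTINATIONS))   # Boston
print(best_match("yewark", DESTINATIONS))   # Newark
print(best_match("zzz", DESTINATIONS))      # None (reject and re-prompt)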
Making Reading Easy for Computers
Bar codes and two-dimensional symbologies are much easier to read than text because:
They are formally defined.
They include well-defined error detection or, in some cases, error correction codes, thus providing their own context (a check-digit example follows).
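As a small illustration of this built-in redundancy, the Python sketch below computes the standard UPC-A check digit: the 12th digit is chosen so that a weighted sum of all twelve digits is a multiple of 10, so a misread digit is very likely to be caught at scan time. The sample number is a commonly cited example code; real readers combine this with many other checks.

def upc_a_check_digit(first_11_digits):
    # Odd positions (1st, 3rd, ...) are weighted 3, even positions 1.
    digits = [int(d) for d in first_11_digits]
    total = sum(d * 3 for d in digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10

def upc_a_is_valid(code12):
    # A scan is accepted only if the last digit satisfies the rule.
    return upc_a_check_digit(code12[:11]) == int(code12[11])

print(upc_a_is_valid("036000291452"))  # True
print(upc_a_is_valid("036000291453"))  # False: the corrupted digit is caught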
Examples of Two-Dimensional Symbologies
Chess Playing Machines - 1
Chess is a deterministic game, so a computer could, in principle, solve it analytically. However, the number of possible move sequences is so large (about 10^120) that even the fastest available computer would take billions of years to consider them all.
Skilled players may look as many as 20 moves ahead by pruning, i.e., ignoring non-promising moves (a sketch of the pruning idea follows).
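The Python sketch below shows pruning in its simplest form: minimax search with alpha-beta cutoffs over a tiny, invented game tree, where leaves are position scores from the maximizing player's point of view. Branches that provably cannot change the outcome are never examined, which is what lets programs (and strong players) avoid considering every continuation.

import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    # A node is either a numeric leaf score or a list of child nodes.
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # remaining siblings cannot affect the result
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Two plies: the maximizer picks a move, the minimizer replies.
toy_tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(toy_tree, maximizing=True))  # 3; leaves 9 and 1 are never visited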
Chess Playing Machines - 2
Around 1980, Ken Thompson developed a chess-playing program called Belle, based on a minicomputer with a hardware attachment that generated moves very fast.
Belle defeated all other computer programs and became the world computer chess champion.
The use of specialized chess knowledge and special-purpose hardware has been the preferred approach ever since.
Deep Blue (The IBM machine that beat the human world champion)
A major focus of the effort was the development of special purpose hardware.
An expert chess player (Murray Campbell) contributed the evaluation functions for the moves generated by the hardware.
The project had an international grandmaster as a consultant (Joel Benjamin, who had played Kasparov to a draw in 1994).
Optical Character Recognition (OCR)
Printed text characters have small shape variability and high contrast with the background.
Spelling checkers (or ZIP code directories in postal applications) introduce low-level context.
Reading of the checks sent for payment to American Express relies heavily on context.
Payments are supposed to be in full and the amount due is known, so the number written on a check is analyzed only to confirm whether or not it matches the amount due (a sketch of this idea follows).
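The Python sketch below illustrates this use of context with invented data: a hypothetical OCR stage returns a small set of candidate digits for each character position, and the program only has to decide whether the known amount due is consistent with them, a much easier yes/no question than reading an arbitrary amount.

def matches_amount_due(ocr_candidates, amount_due):
    # ocr_candidates: one set of plausible digits per character position,
    # as produced by an (assumed) earlier recognition stage.
    expected = f"{amount_due:.2f}".replace(".", "")
    if len(expected) != len(ocr_candidates):
        return False
    return all(d in cands for d, cands in zip(expected, ocr_candidates))

# The OCR stage is unsure about the last two digits, but the known
# amount due ($12.34) is still confirmed; a different amount is rejected.
candidates = [{"1"}, {"2"}, {"3", "8"}, {"4", "9"}]
print(matches_amount_due(candidates, 12.34))  # True
print(matches_amount_due(candidates, 56.78))  # False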
An Aside: Why did OCR mature when the need for it was diminished?
The algorithms used in the products of the 1990s were known earlier but they were too complex to be implemented effectively with the digital technology of earlier times.
When computer hardware became cheap enough for good OCR, it also became cheap enough for PCs, the Internet, and direct bank transfers.
Keep this in mind in your business plans!
Features of Narrow AI
Each Problem is Solved Separately even though certain common mathematical tools may be used (statistics, graph theory, signal processing, etc).
Each Solution Relies Heavily on Specific Environment Constraints, and performance (compared to that of humans) drops when these constraints are relaxed.
Why Not General AI?
Why “waste” time with all the special cases and not solve the general problem once and for all?
Why not use a “brain model” to solve all these problems?
Are advances in general computer technology (hardware, systems) likely to help? Why not wait for them rather than solving problems piecemeal?