Artificial Intelligence and Consciousness
Introduction
Computationalism is the theory that the human brain is essentially a computer, although presumably
not a stored-program, digital computer, like the kind Intel makes. Artificial intelligence
(AI) is a field of computer science that explores computational models of problem solving, where
the problems to be solved are of the complexity of problems solved by human beings. An AI
researcher need not be a computationalist, because they might believe that computers can do
things brains do noncomputationally. Most AI researchers are computationalists to some extent,
even if they think digital computers and brains-as-computers compute things in different
ways. When it comes to the problem of phenomenal consciousness, however, the AI researchers
who care about the problem and believe that AI can solve it are a tiny minority, as we will see.
Nonetheless, because I count myself in that minority, I will do my best to survey the work of
my fellows and defend a version of the theory that I think represents that work fairly well.
An Informal Survey
Although one might expect AI researchers to adopt a computationalist position on most issues,
they tend to shy away from questions about consciousness. AI has often been accused of being
over-hyped, and the only way to avoid the accusation, apparently, is to be so boring that
journalists stay away from you. As the field has matured, and as a flock of technical problems
has become its focus, it has become easier to bore journalists. The last thing most serious
researchers want is to be quoted on the subject of computation and consciousness.
In order to get some kind of indication of what positions researchers take on this issue, I
conducted an informal survey of Fellows of the American Association for Artificial Intelligence
in the summer of 2003. I sent e-mail to all of them asking the following question:
Research on Computational Models of Consciousness
In view of the shyness about consciousness shown by serious AI researchers, it is not surprising
that detailed proposals about phenomenal consciousness from this group should be few and far
between.
Hofstadter, Minsky, McCarthy
Douglas Hofstadter touches on the problem of consciousness in many of his writings, especially
the material he contributed to (Hofstadter & Dennett, 1981). Most of what he writes
seems to be intended to stimulate or tantalize one’s thinking about the problem. For example,
in (Hofstadter, 1979) there is a chapter (reprinted in (Hofstadter & Dennett, 1981)) in which
characters talk to an anthill. The anthill is able to carry on a conversation because the ants
that compose it play roughly the role neurons play in a brain. Putting the discussion in the
form of a vignette allows for playful digressions on various subjects. For example, the anthill
offers the anteater (one of the discussants) some of its ants, which makes vivid the possibility
that “neurons” could implement a negotiation that ends in their own demise.