Software Agents
An Introduction to Software Agents
Jeffrey M. Bradshaw
Since the beginning of recorded history, people have been fascinated with
the idea of non-human agencies.1 Popular notions about androids, humanoids,
robots, cyborgs, and science fiction creatures permeate our culture,
forming the unconscious backdrop against which software agents are perceived.
The word “robot,” derived from the Czech word for drudgery, became
popular following Karel Capek’s 1921 play RUR (Rossum’s Universal Robots).
While Capek’s robots were factory workers, the public has also at times embraced
the romantic dream of robots as “digital butlers” who, like the mechanical
maid in the animated feature “The Jetsons,” would someday putter about
the living room performing mundane household tasks. Despite such innocuous
beginnings, the dominant public image of artificially intelligent embodied creatures
often has been more a nightmare than a dream. Would the awesome
power of robots reverse the master-slave relationship with humans? Everyday
experiences of computer users with the mysteries of ordinary software, riddled
with annoying bugs, incomprehensible features, and dangerous viruses, reinforce
the fear that the software powering autonomous creatures would pose
even more problems. The more intelligent the robot, the more capable of pursuing
its own self-interest rather than its master’s. The more humanlike the robot,
the more likely to exhibit human frailties and eccentricities. Such latent concerns
cannot be ignored in the design of software agents—indeed, there is more
than a grain of truth in each of them!
Though automata of various sorts have existed for centuries, it is only with
the development of computers and control theory since World War II that anything
resembling autonomous agents has begun to appear. Norman (1997) observes
that perhaps “the most relevant predecessors to today’s intelligent agents
are servomechanisms and other control devices, including factory control and
the automated takeoff, landing, and flight control of aircraft.” However, the
agents now being contemplated differ in important ways from earlier concepts.
Significantly, for the moment, the momentum seems to have shifted from hardware
to software, from the atoms that comprise a mechanical robot to the bits
that make up a digital agent (Negroponte 1997).2
Alan Kay, a longtime proponent of agent technology, provides a thumbnail
sketch tracing the more recent roots of software agents:
“The idea of an agent originated with John McCarthy in the mid-1950’s, and the
term was coined by Oliver G. Selfridge a few years later, when they were both at
the Massachusetts Institute of Technology. They had in view a system that, when
given a goal, could carry out the details of the appropriate computer operations and
could ask for and receive advice, offered in human terms, when it was stuck. An
agent would be a ‘soft robot’ living and doing its business within the computer’s
world.” (Kay 1984).
Nwana (1996) splits agent research into two main strands: the first beginning
about 1977, and the second around 1990. Strand 1, whose roots are mainly in distributed
artificial intelligence (DAI), “has concentrated mainly on deliberative-type
agents with symbolic internal models.” Such work has contributed to an understanding
of “macro issues such as the interaction and communication between
agents, the decomposition and distribution of tasks, coordination and cooperation,
conflict resolution via negotiation, etc.” Strand 2, in contrast, is a recent, rapidly
growing movement to study a much broader range of agent types, from the moronic
to the moderately smart. The emphasis has subtly shifted from deliberation
to doing; from reasoning to remote action. The very diversity of applications and approaches
is a key sign that software agents are becoming mainstream.
The gauntlet thrown down by early researchers has been variously taken up
by new ones in distributed artificial intelligence, robotics, artificial life, distributed
object computing, human-computer interaction, intelligent and adaptive
interfaces, intelligent search and filtering, information retrieval, knowledge
acquisition, end-user programming, programming-by-demonstration, and a
growing list of other fields. As “agents” of many varieties have proliferated,
there has been an explosion in the use of the term without a corresponding consensus
on what it means. Some programs are called agents simply because they
can be scheduled in advance to perform tasks on a remote machine (not unlike
batch jobs on a mainframe); some because they accomplish low-level computing
tasks while being instructed in a higher-level programming language or
script (Apple Computer 1993); some because they abstract out or encapsulate the
details of differences between information sources or computing services
(Knoblock and Ambite 1997); some because they implement a primitive or aggregate
“cognitive function” (Minsky 1986, Minsky and Riecken 1994); some because
they manifest characteristics of distributed intelligence (Moulin and
Chaib-draa 1996); some because they serve a mediating role among people and
programs (Coutaz 1990; Wiederhold 1989; Wiederhold 1992); some because
they perform the role of an “intelligent assistant” (Boy 1991, Maes 1997); some
because they can migrate in a self-directed way from computer to computer
(White 1996); some because they present themselves to users as believable characters
(Ball et al. 1996, Bates 1994, Hayes-Roth, Brownston, and Gent 1995);
some because they speak an agent communication language (Genesereth 1997,
Finin et al. 1997); and some because they are viewed by users as manifesting intentionality
and other aspects of “mental state” (Shoham 1997).
Out of this confusion, two distinct but related approaches to the definition of
agent have been attempted: one based on the notion of agenthood as an ascription
made by some person, the other based on a description of the attributes that software
agents are designed to possess. These complementary perspectives are summarized
in the section “What Is a Software Agent?” The subsequent section discusses
the “why” of software agents as they relate to two practical concerns: 1)
simplifying the complexities of distributed computing and 2) overcoming the limitations
of current user interface approaches. The final section provides a chapter
by chapter overview of the remainder of the book.
What Is a Software Agent?
This section summarizes the two approaches to defining an agent that have been
attempted: agent as an ascription, and agent as a description.
‘Agent’ as an Ascription
As previously noted, one of the most striking things about recent research and
development in software agents is how little commonality there is between different
approaches. Yet there is something that we intuitively recognize as a
“family resemblance” among them. Since this resemblance cannot have to do
with similarity in the details of implementation, architecture, or theory, it must
be to a great degree a function of the eye of the beholder.3 “Agent is as agent
does”4 is a slogan that captures, albeit simplistically, the essence of the insight
that agency cannot ultimately be characterized by listing a collection of attributes
but rather consists fundamentally in an attribution on the part of some
person (Van de Velde 1995).5
This insight helps us understand why coming up with a once-and-for-all
definition of agenthood is so difficult: one person’s “intelligent agent” is another
person’s “smart object”; and today’s “smart object” is tomorrow’s “dumb program.”
The key distinction is in our expectations and our point of view. The
claim of many agent proponents is that just as some algorithms can be more easily
expressed and understood in an object-oriented representation than in a procedural
one (Kaehler and Patterson 1986), so it sometimes may be easier for developers
and users to interpret the behavior of their programs in terms of agents
rather than as more run-of-the-mill sorts of objects (Dennett 1987).6
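To make the claim concrete, here is a minimal Python sketch (hypothetical, not drawn from the chapter) of the same mail-handling functionality viewed two ways: first as an ordinary object whose caller must orchestrate every call, then as an “agent” to which a goal and a rough model of the user are simply delegated. Nothing in the second class is technically beyond the first; the difference lies in the stance the reader is invited to take toward it.

# Hypothetical sketch: the same functionality read as an object vs. as an agent.

class MailFilter:
    """Object view: the caller decides when and how each method is used."""

    def looks_like_junk(self, message: str) -> bool:
        return "unsubscribe" in message.lower()

    def file(self, message: str, folder: str) -> None:
        print(f"filed into {folder!r}: {message[:30]}")


class MailAgent:
    """Agent view: we delegate a goal ("keep the inbox clean") together with
    some knowledge of the user, and the agent chooses its own actions."""

    def __init__(self, user_model: dict):
        self.user_model = user_model   # a crude "model of you" (see Negroponte below)
        self.filter = MailFilter()     # ordinary objects may live inside an agent

    def pursue_goal(self, inbox: list) -> None:
        for message in inbox:
            junk = self.filter.looks_like_junk(message)
            folder = "junk" if junk and self.user_model.get("hates_junk", True) else "inbox"
            self.filter.file(message, folder)


mail = ["Unsubscribe now!!!", "Lunch tomorrow?"]

# Object style: the deliberation stays with the caller.
f = MailFilter()
for m in mail:
    f.file(m, "junk" if f.looks_like_junk(m) else "inbox")

# Agent style: the deliberation is delegated along with the goal.
MailAgent({"hates_junk": True}).pursue_goal(mail)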
The American Heritage Dictionary defines an agent as “one that acts or has
the power or authority to act… or represent another” or the “means by
which something is done or caused; instrument.” The term derives from the
present participle of the Latin verb agere: to drive, lead, act, or do.
As in the everyday sense, we expect a software agent to act on behalf of someone
to carry out a particular task which has been delegated to it.7 But since it is
tedious to have to spell out every detail, we would like our agents to be able to
infer what we mean from what we tell them. Agents can only do this if they
“know” something about the context of the request. The best agents, then,
would not only need to exercise a particular form of expertise, but also take into
account the peculiarities of the user and situation.8 In this sense an agent fills the
role of what Negroponte calls a “digital sister-in-law:”
“When I want to go out to the movies, rather than read reviews, I ask my sister-in-law.
We all have an equivalent who is both an expert on movies and an expert on
us. What we need to build is a digital sister-in-law.
In fact, the concept of “agent” embodied in humans helping humans is often one
where expertise is indeed mixed with knowledge of you. A good travel agent
blends knowledge about hotels and restaurants with knowledge about you… A
real estate agent builds a model of you from a succession of houses that fit your
taste with varying degrees of success. Now imagine a telephone-answering agent, a
news agent, or an electronic-mail-managing agent. What they all have in common
is the ability to model you.” (Negroponte 1997).
While the above description would at least seem to rule out someone claiming
that a typical payroll system could be regarded as an agent, there is still
plenty of room for disagreement (Franklin and Graesser 1996). Recently, for example,
a surprising number of developers have re-christened existing components
of their software as agents, despite the fact that there is very little that
seems “agent-like” about them. As Foner (1993) observes:
“… I find little justification for most of the commercial offerings that call themselves
agents. Most of them tend to excessively anthropomorphize the software, and
then conclude that it must be an agent because of that very anthropomorphization,
while simultaneously failing to provide any sort of discourse or “social contract” between
the user and the agent. Most are barely autonomous, unless a regularly-scheduled
batch job counts. Many do not degrade gracefully, and therefore do not inspire
enough trust to justify more than trivial delegation and its concomitant risks.”9
Shoham provides a practical example illustrating the point that although
anything could be described as an agent, it is not always advantageous to do so:
“It is perfectly coherent to treat a light switch as a (very cooperative) agent with the
capability of transmitting current at will, who invariably transmits current when it
believes that we want it transmitted and not otherwise; flicking the switch is simply
our way of communicating our desires. However, while this is a coherent view,
it does not buy us anything, since we essentially understand the mechanism
sufficiently to have a simpler, mechanistic description of its behavior.” (Shoham
1993).10
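Shoham’s example can be rendered literally in code. The hypothetical Python sketch below (not from the chapter) gives the light switch an intentional description, with a “belief” about whether we want current, alongside the mechanistic description; both predict exactly the same behavior, which is why the intentional reading buys us nothing here.

# Hypothetical sketch of Shoham's light-switch example.

class LightSwitchAgent:
    """Intentional-stance description: the switch 'believes' we want current
    whenever it has been flicked on, and cooperatively complies."""

    def __init__(self):
        self.believes_current_is_desired = False

    def flick(self) -> None:
        # Flicking the switch is "our way of communicating our desires".
        self.believes_current_is_desired = not self.believes_current_is_desired

    def transmits_current(self) -> bool:
        return self.believes_current_is_desired


def switch_closed(position_up: bool) -> bool:
    """Mechanistic description: current flows exactly when the contact is closed."""
    return position_up


# Both descriptions make identical predictions, so ascribing beliefs to the
# switch adds nothing over the simpler mechanistic account.
agent = LightSwitchAgent()
agent.flick()
assert agent.transmits_current() == switch_closed(position_up=True)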
Dennett (1987) describes three predictive stances that people can take toward
systems (table 1). People will choose whichever stance gives the simplest yet most reliable
explanation of behavior. For natural systems (e.g., collisions of billiard balls), it
is practical for people to predict behavior according to physical characteristics
and laws. If we understand enough about a designed system (e.g., an automobile),
we can conveniently predict its behavior based on its functions, i.e., what it
is designed to do. However, as John McCarthy observed in his work on “advice takers”
in the mid-1950s, “at some point the complexity of the system becomes
such that the best you can do is give advice” (Ryan 1991). For example, to predict
the behavior of people, animals, robots, or agents, it may be more appropriate
to take a stance based on the assumption of rational agency than one based
on our limited understanding of their underlying blueprints.11
Singh (1994) lists several pragmatic and technical reasons for the appeal of
viewing agents as intentional systems:
“They (i) are natural to us, as designers and analyzers; (ii) provide succinct descriptions
of, and help understand and explain, the behaviour of complex systems; (iii)
make available certain regularities and patterns of action that are independent of
the exact physical implementation of the agent in the system; and (iv) may be used
by the agents themselves in reasoning about each other.”