Abstract
We study the problem of making decisions under partial
ignorance, or partially quantified uncertainty. This problem
arises in many applications in robotics and AI, yet it has not received the attention it deserves. The traditional
decision rules of decision under risk and under strict uncertainty
(or complete ignorance) can naturally be extended
to the more general case of decision under partial ignorance.
We propose partial probability theory (PPT) for
representing partial ignorance, and we discuss the extension
to PPT of expected utility maximization. We argue
that decision analysis should not focus exclusively on optimizing, but should pay more serious attention to finding satisfactory actions and to reasoning with assumptions.
The extended minimax regret decision rule appears to be
an important rule for satisficing.
1. Introduction
In this paper, we study decision making in situations
where the outcomes of the options are (in general) uncertain,
without assuming that this uncertainty can be quantified
by means of a probability measure. This assumption,
which underlies Bayesian decision analysis, is unrealistic
for many practical applications, such as in robotics
and AI, since the available evidence is often insufficient to determine a unique probability measure that represents the uncertainty.
Besides decision under risk, there is a traditional category of decision under strict uncertainty (or complete
ignorance), where the decision maker is not able to quantify
this uncertainty at all and can only list the possible
states of nature. We position ourselves between these two
extremes and allow the decision maker to be partially ignorant
about the relevant uncertainties.
As an example of a situation involving partial ignorance, consider a robot that receives information about its environment from range sensors. An echo from
such a sensor indicates (with some degree of certainty) the
presence of an object at a particular distance from the sensor
and the presence of empty space between that object
and the sensor, but it does not provide information about
the occupancy of the space outside the sensor beam.
In probability theory, absence of information is often
represented by a uniform, or least informative, prior, but this practice has been questioned by many authors.
Recently, Dempster-Shafer theory [1,14] has become a
popular uncertainty formalism, (partly) because of its capability to represent ignorance explicitly alongside uncertainty. In [13] it is argued that the use of Dempster-Shafer
theory instead of probability theory can speed up the process
of building a map from sonar data.
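As a purely illustrative sketch (not code from [13]), the following fragment shows how a Dempster-Shafer mass function can express this kind of ignorance explicitly: mass assigned to the whole frame of discernment represents what the sensor does not tell us, rather than spreading it uniformly over the hypotheses. The cell names, mass values, and helper functions are hypothetical choices made for this example only.

```python
# Hypothetical sketch: Dempster-Shafer mass functions for a sonar reading.
# Frame of discernment for one grid cell: {"occupied", "empty"}.
# Mass on the full frame represents ignorance, unlike a uniform prior.

def belief(mass, hypothesis):
    """Belief = total mass of focal sets contained in the hypothesis."""
    return sum(m for focal, m in mass.items() if focal <= hypothesis)

def plausibility(mass, hypothesis):
    """Plausibility = total mass of focal sets intersecting the hypothesis."""
    return sum(m for focal, m in mass.items() if focal & hypothesis)

# Cell at the measured range: the echo supports "occupied" with degree 0.6;
# the remaining 0.4 stays on the whole frame (ignorance).
cell_at_echo = {frozenset({"occupied"}): 0.6,
                frozenset({"occupied", "empty"}): 0.4}

# Cell outside the sensor beam: the reading says nothing, so all mass
# goes to the whole frame.
cell_outside_beam = {frozenset({"occupied", "empty"}): 1.0}

occ = frozenset({"occupied"})
print(belief(cell_at_echo, occ), plausibility(cell_at_echo, occ))            # 0.6 1.0
print(belief(cell_outside_beam, occ), plausibility(cell_outside_beam, occ))  # 0.0 1.0
```

The gap between belief and plausibility for the cell outside the beam (0 versus 1) is the explicit representation of ignorance that a single probability number cannot convey.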
Other examples of application areas where it is very
hard to obtain the necessary (exact) probabilities are sensor
fusion and dynamic environments. In sensor fusion
the problem is that the exact interaction of different sensors
is difficult to assess (see [16,18,19]). In a dynamic environment it is hard to justify a particular rate at which the confidence in conclusions based on earlier observations should decay as time passes (see [3]).
Throughout this paper we will discuss a very simple
example where a robot has to choose between two (or
more) doors, and the available probabilistic information
may be insufficient to determine which door is most
likely to be open. Even when it is known that door 1 is more likely to be open than door 2, traditional probabilistic approaches fail to support the intuitively correct behaviour of choosing door 1.
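To make the example concrete, the following sketch (an illustration under stated assumptions, not the paper's code) enumerates a grid of probability pairs compatible with the single constraint P(door 1 open) >= P(door 2 open), assumes utility 1 for reaching an open door and 0 otherwise, and applies one common extension of minimax regret: the worst-case regret over all admissible probability models. The grid step and function names are hypothetical.

```python
# Hypothetical sketch of the two-door example under partial ignorance.
# Only the constraint P(door 1 open) >= P(door 2 open) is assumed; the
# decision maker does not know the exact probabilities.

import itertools

def admissible_models(step=0.1):
    """Enumerate a grid of probability pairs (p1, p2) with p1 >= p2."""
    grid = [round(i * step, 10) for i in range(int(1 / step) + 1)]
    return [(p1, p2) for p1, p2 in itertools.product(grid, grid) if p1 >= p2]

def expected_utility(action, model):
    """Utility 1 if the chosen door is open, 0 otherwise."""
    p1, p2 = model
    return p1 if action == "door 1" else p2

actions = ["door 1", "door 2"]
models = admissible_models()

# Extended minimax regret: for each action, take the worst-case regret over
# all probability models compatible with the partial information.
max_regret = {}
for a in actions:
    regrets = []
    for m in models:
        best = max(expected_utility(b, m) for b in actions)
        regrets.append(best - expected_utility(a, m))
    max_regret[a] = max(regrets)

print(max_regret)                           # door 1 -> 0.0, door 2 -> 1.0
print(min(max_regret, key=max_regret.get))  # "door 1"
```

Under these assumptions door 1 has maximum regret 0 while door 2 can incur regret up to 1, so the extended rule selects door 1, in line with the intuition above.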
Of course, intelligent agents have the ability to learn,
and an interesting feature of Bayesian decision analysis is
that it provides a natural way to express the value of new
information. Therefore, in Bayesian decision analysis, the set of choices in a decision problem naturally includes the decision maker's options for obtaining new relevant information. It is not immediately clear whether the same can be achieved when the probabilities are not completely determined.
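In the precise-probability setting this value can be computed directly as the expected gain from deciding after the observation is made. The sketch below uses hypothetical point probabilities (p1 = 0.7, p2 = 0.4) for the door example, purely as an illustration of that computation.

```python
# Hypothetical sketch: value of information in the Bayesian (precise) case.
# Assume point probabilities p1 = 0.7 that door 1 is open and p2 = 0.4 that
# door 2 is open (illustrative numbers only), with independent doors.

p1, p2 = 0.7, 0.4

# Without extra information: choose the door with the higher expected utility.
eu_without_info = max(p1, p2)

# With the information "is door 1 open?": decide after observing the answer.
#   - If door 1 is open (probability p1), choose it and get utility 1.
#   - If door 1 is closed (probability 1 - p1), fall back on door 2.
eu_with_info = p1 * 1.0 + (1 - p1) * p2

value_of_information = eu_with_info - eu_without_info
print(round(value_of_information, 3))  # 0.12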