ABSTRACT
Providing expert system users with informative explanations
is today considered almost as important as giving them
correct solutions. Yet most explanation systems have been
limited to the inspection of a more or less understandable
trace of fired rules. This has been detrimental to the quality
of explanations, since information was given in a way that
depended highly on the implementation rather than on the
expertise itself. The only exception to this was the use of
canned text, where instances of possible explanations had to
be anticipated. This paper describes a method for automatically
assembling explanations expressed in the expert's own
terms, which is nevertheless more general than mere canned text techniques.
The method is based on the constitution of an explanatory
knowledge base, written in a special-purpose language,
which allows explanations to be assembled by means of full-fledged
reasoning conducted on the expert knowledge base as
well as on session traces.
1. INTRODUCTION
The ability of expert systems (ES) to give explanations of
their results, and of the reasoning leading to those results, is
considered one of the main advantages of these systems
compared with conventional programs. In rule-based ES, explanations
are often confined to a trace of the program execution.
A trace is a record of fired rules. It may also include the
data that allowed these firings, cast into some readable form,
preferably in natural language. In some approaches, a distinction
has been made between why and how explanations
which correspond respectively to the reasons that led the program
to choose a particular rule and the results obtained by
the firing of that rule. The need for why-not explanations to
justify the non-firing of a rule was also suggested.
dept. LAA/SLC/AIA
Route de Trégastel, B.P. 40
22301 Lannion Cedex, France
gilloux[at]lannion.cnet.fr
All these types of explanations rely on the notion of
trace. It seems that the explanations produced depend heavily
on the way the expert knowledge was encoded into rules.
Often, explanations are more reminiscent of the language
provided by the expert system shell rather than of the
language employed by the domain expert.
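The trace-based scheme and the why/how/why-not distinction described above can be made concrete with a minimal sketch. This is not the paper's system: the toy rules, fact names, and explanation strings are invented for illustration, and they deliberately exhibit the weakness discussed here, namely that the answers echo internal rule names rather than the expert's vocabulary.

```python
# Toy forward-chaining engine whose only explanation facility is a trace
# of fired rules. Rule names and facts are illustrative placeholders.
rules = {
    # rule name -> (set of premises, conclusion)
    "r1": ({"bird"}, "has_wings"),
    "r2": ({"bird", "not_penguin"}, "can_fly"),
}

def run(facts):
    """Fire rules to saturation, recording each firing in a trace."""
    trace = []  # list of (rule name, premises, conclusion)
    changed = True
    while changed:
        changed = False
        for name, (premises, conclusion) in rules.items():
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((name, premises, conclusion))
                changed = True
    return facts, trace

def how(fact, trace):
    """HOW: which rule firing produced this fact?"""
    for name, premises, conclusion in trace:
        if conclusion == fact:
            return f"{fact} was concluded by {name} from {sorted(premises)}"
    return f"{fact} was given as input data"

def why_not(fact, facts):
    """WHY-NOT: which premises were missing for rules concluding this fact?"""
    for name, (premises, conclusion) in rules.items():
        if conclusion == fact:
            missing = premises - facts
            if missing:
                return f"{name} did not fire: missing {sorted(missing)}"
    return f"no rule concludes {fact}"

facts, trace = run({"bird"})
print(how("has_wings", trace))   # cites internal rule name "r1"
print(why_not("can_fly", facts)) # cites internal fact name "not_penguin"
```

Note that both answers are phrased in terms of `r1` and `not_penguin`, identifiers of the implementation rather than of the domain, which is precisely the shortcoming at issue.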
The aim of our work is to improve explanation in ES
so that it is more informative than a mere trace of the program
execution. We want the system to explain its reasoning
clearly, that is, using terms understandable by anyone
acquainted with the domain at hand.
This paper describes a method for writing an explanation
system that generates outputs using only terms of the problem
domain. This is achieved through the use of an explanation
language which helps in writing an explanatory
knowledge base. This knowledge base is then used to reason
on session traces as well as on ground rules for assembling
explanations.
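To convey the flavour of this approach, here is a deliberately simplified sketch, not the paper's language or system: a separate explanatory knowledge base maps the facts appearing in a session trace to phrases in the expert's vocabulary, and explanations are assembled from those phrases rather than drawn from fully canned text. All names and phrasings below are invented for illustration.

```python
# Hypothetical explanatory knowledge base: domain-term phrasings for
# facts, kept separate from the internal rules that derive them.
domain_terms = {
    "bird": "it is a bird",
    "not_penguin": "it is not a penguin",
    "can_fly": "the animal can fly",
}

# A sample session trace entry: (internal rule name, premises, conclusion).
trace = [("r2", ("bird", "not_penguin"), "can_fly")]

def explain(entry):
    """Assemble an explanation from domain-term fragments, so that the
    output mentions no internal rule or fact identifiers."""
    _, premises, conclusion = entry
    reasons = " and ".join(domain_terms[p] for p in premises)
    return f"{domain_terms[conclusion]} because {reasons}"

print(explain(trace[0]))
# -> the animal can fly because it is a bird and it is not a penguin
```

Unlike canned text, the sentence is assembled at explanation time from fragments attached to domain concepts, so each fragment is written once and reused across every explanation that mentions the concept.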
We first survey features found in existing systems
and focus on some of their weaknesses. We then describe the
explanation language we have designed and, as an illustration,
give the example of a linguistic parser built using production
rules.