
Explanation facilities

Most expert systems have explanation facilities that allow the user to ask the system why it asked some question, and how it reached some conclusion. These questions are answered by referring to the system goals, the rules being used, and any existing problem data. To illustrate the sorts of facilities commonly provided, we'll elaborate on the simple example above, starting with an example dialogue involving why and how questions and explanations.

System: Is there gas in the fuel tank?
User: Yes.
System: Is it true that the engine turns over?
User: No.
System: Is it true that the lights come on?
User: Why?
System: I am trying to prove: problem with battery.
It has been established that it is not the case that the engine turns over.
Therefore if it is not the case that the lights come on then there is a problem with battery.
Is it true that the lights come on?
User: No.
System: I conclude that there is a problem with the battery.
User: How?
System: This follows from Rule 2:
IF NOT engine_turns_over AND ...

NOT engine_turns_over was given by the user.
NOT lights_come_on was given by the user.
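
Behind the scenes, a why answer can be composed from the goal the system is currently trying to prove, the rule it is using, and the facts established so far. The following Python sketch shows one way to do this; the rule structure and names are assumptions modelled on the dialogue above, not part of any particular system.

# A premise (name, wanted) means the predicate `name` must have truth
# value `wanted`; wanted=False corresponds to "NOT name".
RULE_2 = {
    "name": "Rule 2",
    "premises": [("engine_turns_over", False), ("lights_come_on", False)],
    "conclusion": "problem with battery",
}

def why_answer(rule, established):
    """Compose a why answer from the rule being tried and the facts
    gathered so far (assumed consistent with the rule, otherwise the
    rule would already have been abandoned)."""
    lines = ["I am trying to prove: %s." % rule["conclusion"]]
    pending = []
    for name, wanted in rule["premises"]:
        phrase = name if wanted else "it is not the case that %s" % name
        if name in established:
            lines.append("It has been established that %s." % phrase)
        else:
            pending.append(phrase)
    if pending:
        lines.append("Therefore if %s then %s."
                     % (" and ".join(pending), rule["conclusion"]))
    return "\n".join(lines)

# Reproduces the why answer in the dialogue above:
print(why_answer(RULE_2, {"engine_turns_over": False}))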

Providing such an explanation facility involves, at least, recording which rules are used in drawing conclusions, and using these records to compose explanations like the ones above (the sketch below shows the basic idea). Giving simple explanations like these is not very difficult (the answers always have a simple set form), and is sometimes quite useful.

However, explanation facilities in expert systems are often not widely used, and where they are used they are often not viewed as acceptable by their users. There are many reasons for this, which motivate current research in the area. One reason is that the explanations just reference the ``surface'' knowledge encoded in the rules, rather than the ``deep'' knowledge about the domain which originally motivated the rules (but which is usually not represented). So the system will say that it concluded X because of rule 23, but not explain what rule 23 is all about. (In the above example, the user may need to understand that both the lights and the starter use the battery, which is the underlying rationale for the second rule in our example.) Another reason for the frequent failure of explanation facilities is that, if the user fails to understand or accept the explanation, the system can't re-explain in another way (as people can). Explanation generation is a fairly large (and fascinating) area of research, concerned with effective communication: how do we present things so that people are really satisfied with the explanation, and what implications does this have for how we represent the underlying knowledge?
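
To make this record-keeping concrete, here is a minimal backward-chaining sketch in Python. The rule, predicate names, and wording are assumptions modelled on the example above, not any real system's API: the interpreter records which rule established each conclusion, and the how function uses that record to compose the answer shown earlier.

RULES = [
    {"name": "Rule 2",
     "premises": [("engine_turns_over", False), ("lights_come_on", False)],
     "conclusion": "problem_with_battery"},
]

facts = {}           # predicate -> truth value given by the user
justifications = {}  # conclusion -> the rule that established it

def ask(predicate):
    """Ask the user about a predicate the first time it is needed."""
    if predicate not in facts:
        reply = input("Is it true that %s? " % predicate.replace("_", " "))
        facts[predicate] = reply.strip().lower().startswith("y")
    return facts[predicate]

def prove(goal):
    """Try each rule concluding `goal`; record the one that succeeds."""
    for rule in RULES:
        if rule["conclusion"] != goal:
            continue
        if all(ask(name) == wanted for name, wanted in rule["premises"]):
            justifications[goal] = rule
            return True
    return False

def how(goal):
    """Compose a how answer from the recorded justification."""
    rule = justifications.get(goal)
    if rule is None:
        return "%s has not been established." % goal
    premise_text = " AND ".join(name if wanted else "NOT " + name
                                for name, wanted in rule["premises"])
    lines = ["This follows from %s:" % rule["name"],
             "IF %s THEN %s" % (premise_text, goal), ""]
    for name, wanted in rule["premises"]:
        lines.append("%s%s was given by the user."
                     % ("" if wanted else "NOT ", name))
    return "\n".join(lines)

if prove("problem_with_battery"):
    print("I conclude that there is a problem with the battery.")
    print(how("problem_with_battery"))

A fuller interpreter would first try to derive each premise from other rules before asking the user, and how would then recurse through the justifications of any derived premises.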




