
A Simple Example

This is much better explained through a simple example. (You should maybe look back at the notes on rule-based systems if it is unclear.) Suppose that we have the following rules:

  1. IF engine_getting_petrol
    AND engine_turns_over
    THEN problem_with_spark_plugs
  2. IF NOT engine_turns_over
    AND NOT lights_come_on
    THEN problem_with_battery
  3. IF NOT engine_turns_over
    AND lights_come_on
    THEN problem_with_starter
  4. IF petrol_in_fuel_tank
    THEN engine_getting_petrol
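
The four rules above can be written down as plain data. The sketch below is a minimal, hypothetical encoding (the names `RULES` and `rule_fires` are our own choice, not from any particular expert-system shell): each rule maps its conclusion to a list of (condition, required truth value) pairs, so a NOT condition is simply a condition whose required value is False.

```python
# Hypothetical encoding of the four rules: conclusion -> list of
# (condition, required truth value). NOT x becomes (x, False).
RULES = {
    "problem_with_spark_plugs": [("engine_getting_petrol", True),
                                 ("engine_turns_over", True)],
    "problem_with_battery":     [("engine_turns_over", False),
                                 ("lights_come_on", False)],
    "problem_with_starter":     [("engine_turns_over", False),
                                 ("lights_come_on", True)],
    "engine_getting_petrol":    [("petrol_in_fuel_tank", True)],
}

def rule_fires(conclusion, facts):
    """A rule fires when every condition has its required truth value
    in the dictionary of currently known facts."""
    return all(facts.get(cond) == want for cond, want in RULES[conclusion])

facts = {"engine_turns_over": False, "lights_come_on": False}
print(rule_fires("problem_with_battery", facts))   # True
print(rule_fires("problem_with_starter", facts))   # False
```

(Using a dictionary keyed on the conclusion assumes at most one rule per conclusion, which happens to hold for this rule set.)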

Our problem is to work out what's wrong with our car given some observable symptoms. There are three possible problems with the car: problem_with_spark_plugs, problem_with_battery, problem_with_starter. We'll assume that we have been provided with no initial facts about the observable symptoms.

In the simplest goal-directed system we would try to prove each hypothesised problem (with the car) in turn. First the system would try to prove ``problem_with_spark_plugs''. Rule 1 is potentially useful, so the system would set the new goals of proving ``engine_getting_petrol'' and ``engine_turns_over''. Trying to prove the first of these, rule 4 can be used, with the new goal of proving ``petrol_in_fuel_tank''. There are no rules which conclude this (and the system doesn't already know the answer), so the system will ask the user:

Is it true that there's petrol in the fuel tank?

Let's say that the answer is yes. This answer would be recorded, so that the user doesn't get asked the same question again. Anyway, the system has now proved that the engine is getting petrol, so it now wants to find out if the engine turns over. As the system doesn't yet know whether this is the case, and as there are no rules which conclude this, the user will be asked:

Is it true that the engine turns over?

Let's say this time the answer is no. There are no other rules which can be used to prove ``problem_with_spark_plugs'', so the system will conclude that this is not the solution to the problem, and will consider the next hypothesis: problem_with_battery. It is true that the engine does not turn over (the user has just said that), so all it has to prove is that the lights don't come on. It will ask the user:

Is it true that the lights come on?

Suppose the answer is no. It has now proved that the problem is with the battery. Some systems might stop there, but often there may be more than one solution (e.g., more than one fault with the car), or it may be uncertain which of various solutions is the right one, so usually all hypotheses are considered. The system will try to prove ``problem_with_starter'', but given the existing data (the lights do not come on) the proof will fail, so the system will conclude that the problem is with the battery. A complete interaction with our very simple system might be:

System: Is it true that there's petrol in the fuel tank?
User: Yes.
System: Is it true that the engine turns over?
User: No.
System: Is it true that the lights come on?
User: No.
System: I conclude that there is a problem with the battery.
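
The whole dialogue above can be reproduced by a small backward chainer. The following is a minimal sketch in Python (the representation and the function names `prove` and `diagnose` are our own, not from any real shell); instead of reading from the keyboard, it takes an `ask` function, so the user's answers can be scripted for the example:

```python
# Rules as (conclusion, [(condition, required truth value), ...]).
# NOT x is encoded as (x, False).
RULES = [
    ("problem_with_spark_plugs", [("engine_getting_petrol", True),
                                  ("engine_turns_over", True)]),
    ("problem_with_battery",     [("engine_turns_over", False),
                                  ("lights_come_on", False)]),
    ("problem_with_starter",     [("engine_turns_over", False),
                                  ("lights_come_on", True)]),
    ("engine_getting_petrol",    [("petrol_in_fuel_tank", True)]),
]

def prove(goal, facts, ask):
    """Try to establish whether `goal` is true. `facts` caches answers,
    so the user is never asked the same question twice."""
    if goal in facts:
        return facts[goal]
    bodies = [body for head, body in RULES if head == goal]
    if not bodies:                       # no rule concludes it: ask the user
        facts[goal] = ask(goal)
        return facts[goal]
    for body in bodies:                  # try each rule in turn (backtracking)
        if all(prove(cond, facts, ask) == want for cond, want in body):
            facts[goal] = True
            return True
    facts[goal] = False                  # no rule succeeded
    return False

def diagnose(ask):
    """Test every hypothesis in turn, sharing the cache of answers."""
    facts = {}
    hypotheses = ["problem_with_spark_plugs", "problem_with_battery",
                  "problem_with_starter"]
    return [h for h in hypotheses if prove(h, facts, ask)]

# Script the three answers from the dialogue above.
answers = {"petrol_in_fuel_tank": True,
           "engine_turns_over": False,
           "lights_come_on": False}
print(diagnose(answers.get))   # ['problem_with_battery']
```

Note that `prove` explores the rules depth first, trying each rule for a goal in turn and falling through to the next when one fails, exactly the search behaviour described below.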

Note that in general, solving problems using backward chaining involves searching through all the possible ways of proving the hypothesis, systematically checking each of them. A common way of doing this search is the same as in Prolog: depth-first search with backtracking. We'll discuss search in more detail in the next lecture.





alison@
Fri Aug 19 10:42:17 BST 1994