"I think so..."
"I'm almost certain..."
"I'd almost always do the following..."
How can a rule-based expert system reason with inputs like these? To see what the inference engine must do, consider this rule from the auto diagnosis demonstration knowledge base, whose values might be represented with some degree of uncertainty:
If the result of trying the starter is the car cranks normally
and a gas smell is not present while trying the starter
Then the gas tank is empty with 90% confidence
You might not be sure about the values of the premise attributes. Suppose you are 95% certain that the car cranks normally when you try the starter and 90% certain that a gas smell is not present while it's cranking. How confident should the inference engine be that the overall premise is true? Is this confidence level high enough to fire the rule?
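One widely used answer is the MYCIN-style certainty-factor convention: the confidence in a conjunctive ("and") premise is the minimum of the confidences in its individual conditions, on the reasoning that a chain of conditions is only as trustworthy as its weakest link. A minimal sketch under that assumption (the function name is illustrative, not from the knowledge base):

```python
def premise_confidence(*condition_confidences):
    # MYCIN-style convention for an "and" premise:
    # overall confidence = minimum of the condition confidences.
    return min(condition_confidences)

# 95% certain the car cranks normally, 90% certain no gas smell is present:
print(premise_confidence(0.95, 0.90))  # 0.9
```

Under this convention the premise would hold with 90% confidence, so a typical firing threshold (often around 20% in certainty-factor systems) would easily be met.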
If the rule fires, concluding that the gas tank is empty, how should the uncertainty in the premise be combined with the 90% confidence in the rule's conclusion?
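Under the same certainty-factor convention, the confidence in the conclusion is the premise confidence multiplied by the rule's own confidence, so uncertainty attenuates as it propagates through a rule. A sketch under that assumption:

```python
def conclusion_confidence(premise_conf, rule_conf):
    # MYCIN-style propagation: the conclusion inherits the premise's
    # uncertainty scaled by the rule's stated confidence.
    return premise_conf * rule_conf

# 90% confident premise, 90% confident rule:
print(conclusion_confidence(0.90, 0.90))  # ≈ 0.81
```

So the inference engine would conclude "the gas tank is empty" with roughly 81% confidence, weaker than either number alone.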
Suppose you already have information that the car cranks normally with 50% confidence, and now receive additional input that the car cranks normally with 95% confidence. What is your overall confidence in this fact?
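The standard certainty-factor answer, which may or may not be the one this chapter develops, is that independent confirming evidence should raise confidence above either piece alone: for two positive confidences, the combined value is cf1 + cf2 * (1 - cf1). A sketch under that assumption:

```python
def combined_confidence(cf1, cf2):
    # MYCIN-style combination of two positive confidences in the
    # same fact: the second report closes a fraction cf2 of the
    # remaining doubt (1 - cf1). The result is symmetric in cf1, cf2.
    return cf1 + cf2 * (1 - cf1)

# Prior 50% confidence, new report at 95% confidence:
print(combined_confidence(0.50, 0.95))  # ≈ 0.975
```

Note that the combined confidence (about 97.5%) exceeds both inputs, reflecting the intuition that corroborating evidence strengthens belief, while never reaching 100%.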