Rule Base Systems and the Semantic Web
Do Rule Based Systems Have Any Relevance for the Semantic Web?
Note that this section is in progress and will change often throughout early and mid 2007.  

Do the past 20 years of history in developing rule based systems have any relevance to the Semantic Web?  Certainly that history will be relevant to the development of inference engines and the like, but otherwise it is difficult to say what the impact will be.

It may boil down to a question of markets and demand.  The specific nature of those markets and demands will be explored in the coming months.


 

Possible Lessons of Rule Based Systems and Business Rules

It may be true that many of the factors that led to the "AI Winter" and the "Business Rule Spring" are now at work in the slow emergence of the Semantic Web.  Among the reasons for the "AI Winter" which may be relevant to the Semantic Web are:

1 - Lack of practical focus

2 - High complexity

3 - High cost of entry, steep learning curves

The lack of standards which plagued early rule based systems is probably not a factor in the Semantic Web.  The highly connected nature of the Web also tends to negate the 'islands of automation' problem encountered in early rule based systems.


 

The Misuse of Meta-Rules

 

Meta-Rules Should Add Information

 

Meta-Rules Should Not Be Required to Get a Rule Base to Work Correctly

In a previous section, the evils of rule behavior were outlined.  

Yet another source of rule behavior is what are called 'meta-rules', that is, rules stating how other rules should be used or evaluated.  A prime example is rules which have 'confidence factors' attached to them, usually a number from 1 to 10 representing the level of confidence one feels about the validity of the rule.

For example, if a rule base were trying to classify an unknown animal, the fact that it has feathers would be weighted with a high confidence factor, since feathers are a distinctive feature of birds.  However, there are many cases where confidence factors are very hard to determine - the exact value might well depend on other factors and be hard to isolate.  Worse still, when combining the weights of confidence factors to obtain an overall measure of confidence, any uncertainties or errors get multiplied along with the confidence factors.           
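The compounding of errors described above can be sketched numerically.  This is a minimal illustration, not a real inference engine; the rule chain and the scaling of the 1-to-10 confidence factors onto 0-to-1 are assumptions made for the example:

```python
# Sketch: multiplicative combination of confidence factors, and how a small
# misjudgment in each factor compounds into a large error overall.
# The chain of three rules and the 0..1 scaling are illustrative assumptions.

def combined_confidence(factors):
    """Combine per-rule confidence factors (scaled 0..1) by multiplication."""
    result = 1.0
    for f in factors:
        result *= f
    return result

# Three chained rules, each judged "fairly confident" (8 out of 10).
chain = [0.8, 0.8, 0.8]
print(round(combined_confidence(chain), 3))  # 0.512

# Misjudging each factor only slightly (7 out of 10 instead of 8)
# drops the combined confidence by a third, not by one point.
misjudged = [0.7, 0.7, 0.7]
print(round(combined_confidence(misjudged), 3))  # 0.343
```

The longer the chain of rules, the faster both the confidence and any estimation error shrink toward zero, which is why combined confidence values become hard to interpret.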

Consequently, almost any 'rule behavior' encountered in designing a rule base, whether it appears as a bug or as a feature, is likely to be awkward to resolve.  Many rule based systems take the approach of allowing (or even requiring) the user to specify an order of precedence for the rules - for instance, Rule A is to be considered first, Rule B next, and so on.
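The precedence mechanism mentioned above might look like the following sketch.  The salience numbers, rule functions, and fact sets are hypothetical, chosen only to show rules being tried in a user-specified order:

```python
# Sketch of explicit rule precedence: each rule carries a salience number
# and the engine tries rules from highest salience to lowest.
# Rule bodies, salience values, and facts are illustrative assumptions.

def rule_a(facts):
    if "emergency" in facts:
        return "page on-call staff"

def rule_b(facts):
    if "warning" in facts:
        return "log and continue"

# Higher salience is considered first; ties fall back to listing order.
rules = [(10, rule_a), (5, rule_b)]

def run(facts):
    for _, rule in sorted(rules, key=lambda r: -r[0]):
        action = rule(facts)
        if action:
            return action
    return "no rule fired"

print(run({"emergency", "warning"}))  # "page on-call staff"
print(run({"warning"}))              # "log and continue"
```

Note that the precedence numbers say nothing about *why* one rule should beat another; they simply hard-code an ordering, which is part of the complexity problem described above.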

Meta-rules can be very useful if they provide additional information about the rule base, such as detecting inconsistent use of test conditions ( for instance, when a value must be both more than and less than another value ) or incomplete coverage of derived values ( no solution when value A is greater than B ) and many other possible error conditions in the rule base.  On the other hand, meta-rules should not be required to get the rule base to work correctly at all.  Interplay between rules and meta-rules can produce extremely complex behaviors, with no real gain in the expressiveness of the rule base itself.
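Information-adding meta-rules of the kind described above can be sketched as static checks over a rule base.  The encoding of each rule as an (attribute, lower bound, upper bound) tuple is an illustrative assumption, not a standard representation:

```python
# Sketch of meta-rules that ADD information: static checks over a rule base
# for contradictory test conditions and gaps in coverage.
# The (attribute, lower, upper) rule encoding is an illustrative assumption.

rules = [
    ("temperature", 0, 50),    # fires when 0 <= temperature < 50
    ("temperature", 80, 60),   # contradictory: lower bound above upper bound
    ("temperature", 70, 100),  # leaves the range 50..70 uncovered
]

def check_contradictions(rules):
    """Flag rules whose conditions can never be satisfied."""
    return [r for r in rules if r[1] >= r[2]]

def check_coverage(rules, lo, hi):
    """Report sub-ranges of [lo, hi) not covered by any satisfiable rule."""
    spans = sorted((a, b) for _, a, b in rules if a < b)
    gaps, cursor = [], lo
    for a, b in spans:
        if a > cursor:
            gaps.append((cursor, a))
        cursor = max(cursor, b)
    if cursor < hi:
        gaps.append((cursor, hi))
    return gaps

print(check_contradictions(rules))    # [('temperature', 80, 60)]
print(check_coverage(rules, 0, 120))  # [(50, 70), (100, 120)]
```

Checks like these leave the rule base's runtime behavior untouched; they only report defects, which is the distinction the section draws between meta-rules that add information and meta-rules the rule base depends on to work at all.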

And, in fact, the use of weights and confidence factors in meta-rules closely resembles abductive inference, one of the most powerful mechanisms for associating the characteristics of different things.


How Useful Is Abduction in Automation?

 

The Need for Accuracy and Exactitude in Using Machinery

 

Can Instructions to Machines Be Inexact Enough?

 

 

Abduction has potential application to automation because it can alert us to similarities between situations and things that may be vitally important in avoiding threats to our prosperity or survival, such as automated diagnosis of medical conditions or automated discovery of legal precedents.  It may do so rapidly, and with better results in the long term than the more restricted and exacting deductive mode of reasoning.

However, it can often lead to 'false positives' where more exact reasoning is required, such as the cognitive tasks of deciding whether someone's medical tests indicate a serious condition, or whether a legal defense should be based on such-and-such a precedent.  In other words, there is a difference between raising an alert and actually making a decision.  Machines cannot actually 'decide' anything, if deciding means reaching any conclusion other than the one dictated by logic.

And, in a sense, that is also the way it should be.  Imagine a car that interpreted the movement of the steering wheel as an inexact, fuzzy instruction to turn vaguely to the left or right - a right turn might mean a little to the right or way off to the right.  The car would soon wind up in the repair shop or on the junk pile.  It would be unreliable and inconsistent, too inexact for the purpose.

The phrase "too inexact for the purpose" leaves open the possibility of a facility that is 'inexact enough' for the purpose, assuming that the consequences of an inexact judgment are not injurious or life-threatening.  This might be regarded as defensive reasoning, for example a car that refuses to back up if it senses an obstacle behind it.  There might be valid reasons for a machine to exercise some degree of fuzziness in executing instructions.  In fact, many of the safety features in modern machinery do exactly that.

That said, there is still a strong demand for some minimal level of exactness in any real-world situation, no matter how incomplete and inconsistent the 'facts' may be.  A set of inexact rules that gives the correct answer only 50% of the time is of marginal use for almost any purpose.  Abduction may be best suited to raising alarms and directing attention to possible problems or inconsistencies, helping to enhance the quality of information acquired from questionable sources on the Web and to improve the accuracy of decisions.
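The alarm-raising role suggested for abduction can be sketched as a simple overlap score between observed features and candidate explanations, echoing the earlier feathers-and-birds example.  The hypothesis feature sets and the 0.5 alert threshold are illustrative assumptions:

```python
# Sketch of abduction as an alert-raiser rather than a decider: score each
# hypothesis by the fraction of its features seen in the observations and
# flag strong matches for review.  Feature sets and the threshold are
# illustrative assumptions, not a real classification scheme.

hypotheses = {
    "bird":    {"feathers", "lays eggs", "beak"},
    "reptile": {"scales", "lays eggs", "cold-blooded"},
    "mammal":  {"fur", "live birth", "warm-blooded"},
}

def abductive_alerts(observed, threshold=0.5):
    """Return (hypothesis, score) pairs whose overlap meets the threshold."""
    alerts = []
    for name, features in hypotheses.items():
        score = len(observed & features) / len(features)
        if score >= threshold:
            alerts.append((name, round(score, 2)))
    return sorted(alerts, key=lambda a: -a[1])

# Incomplete, possibly noisy observations still raise a useful alert.
print(abductive_alerts({"feathers", "lays eggs"}))  # [('bird', 0.67)]
```

The point of the sketch is that the output is an alert, not a verdict: deciding what to do with the flagged hypothesis is left to a more exact, deductive step or to a human.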


Plug and Play?

 

 

Will the technology of the Semantic Web ever be plug and play?  Will someone with no previous knowledge be able to use the advanced technology of the Semantic Web to make important decisions?  For example, would someone without any experience of insurance policies be able to compare the terms and fine print of two policies to see which one offers better coverage in the event of a disaster?

At this point, it's difficult to see how that would be accomplished.  A more realistic objective is to address the needs of people who are currently performing complex queries on the Web and doing complex analysis of insurance policies, to name only one of thousands of possible subject areas on the Web.


 

Semantic Services

        

Examples