2.1  Overview of Rule Based Systems 

A Short History of Rule Based Systems

 

Lack of Consistent Business Focus

 

Poor Performance

 

High Complexity

 

High Cost of Entry

 

Lack of Standards

 

Islands of Automation

 

 


Rule-based systems started out as toy systems in the 1980s, became prototypes of business systems in the 1990s and then, after great struggle, became the foundation of large-scale transaction processing applications over the following decade.  It took close to 20 years.  One reason for their late adoption was the lack of a consistent business focus and rigorous methodology.  Another reason was performance - they could run very slowly on pre-Pentium and early-Pentium computers.

Another reason, and maybe a more fundamental one, was the sheer complexity of building and maintaining a large rule-based system.  A rule base of three or four thousand rules required a massive, highly trained staff to maintain - a hundred people or more in the case of the Digital Equipment product configurator, XCON.

Another problem was that early implementations of rule based systems became too tied to a particular technology.  The cost of entry into a new computer language environment ( such as LISP or Scheme ) was very high.  The alternative was to use a commercial "expert system" package, at a time when standards for the exchange of rule-base information were non-existent.

In either case, the company was locked into proprietary technology and packages, often within a workstation-only technical infrastructure.  This situation was not unique to rule based systems - there were no standards for metadata interchange between data modeling and CASE tools until the 1990s.  And even if a company succeeded in overcoming the barriers to implementing a useful "expert system", it was often doomed to remain an island of automation in the corporate firmament.

In short, the entire approach was too technical, too demanding, too 'computery' for practical use.  

 


 

 

Why the Business Rules Approach Succeeded

 

Why Business Rules May Not Be As Useful for the Semantic Web

 

New Definitions ?

 

Returning to an Older Definition

 

It's About Reasoning, Not About Computers

 

 

 

 

 

 

The business rules approach succeeded by changing the focus from knowledge-intensive business processes, such as product configuration, to enterprise-level business rules, effectively rising above the parochialism of early implementations.  In fact, business rules technology grew in tandem with CASE and modeling tools extended for modeling business rules.  The UML modeling standard and the Rational Rose modeling tool are prime examples.

But the types of situations which will be encountered in building a semantic web may not be as well defined as those encountered in business processes.  The shadow of incomplete, inconsistent and outright unreliable information looms on every corner of the Web.  How can a set of well-defined rules deal with ill-defined information?

Can new definitions of 'rule', 'rule base' and 'rule engine' be found which are sufficiently precise to meet the level of exactitude demanded by computer applications and still fuzzy enough to capture all the nuances of inexact reasoning?  I think the answer is: maybe.

In fact, by including inexact reasoning in our definition, we are returning to an earlier, more inclusive definition of rule based systems as describing how people tend to reason in different situations, probably closer to 'cognitive science' as currently defined.  For example, an expanded definition would include the tricky subjects of truth maintenance and belief revision, subjects well outside the realm of business rules or classical expert systems.

This entire section contains very few mentions of computers or discussions of computer implementations or, even worse, of the 'best' computer languages.  The following sections are far more concerned with how people reason about the world and the things in it than with computers.  I think that 'reasoning' is the proper level of inquiry for 'rule based systems', rather than the technical considerations which too often come to dominate and obscure the underlying issue of how people do what they do when solving common problems.

 


A Broader Definition of the Word "Rule"

 

Inexact Reasoning

 

Associations and Associative Networks as Rules

 

Knowledge Based Systems

 

Knowledge Technology

 

The definition of 'rule' can be extended beyond the sense of 'exact reasoning' implicit in the business rules definition of the word.  A 'rule' in the larger sense could be more than an exact expression of business logic; it could also be an expression of inexact reasoning, such as a judgment about taking an umbrella along or leaving it behind.  The decision could rest on inexact criteria - say, whether to bring an umbrella for a morning walk on a misty, fog-shrouded beach.  Of course, the correct answer is "no", for me anyway.
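
Purely as an illustration, here is a minimal sketch in Python of how such a rule might look, contrasting a crisp business rule with an inexact, judgment-style rule.  Everything in it - the function names, the weights and the 0.5 threshold - is an assumption invented for the umbrella example, not a reference to any real rule engine.

    # Hypothetical sketch: an exact business rule versus an inexact, judgment-style rule.
    # All names, weights and thresholds are illustrative assumptions.

    def exact_rule(order_total):
        # Exact rule: a crisp yes/no decision over well-defined data.
        return "apply_discount" if order_total > 100.00 else "no_discount"

    def inexact_rule(chance_of_rain, wind_speed, walking_on_beach):
        # Inexact rule: weigh soft evidence and return a judgment, not a certainty.
        score = chance_of_rain                    # e.g. 0.6 on a misty morning
        if walking_on_beach:
            score -= 0.3                          # mist on a beach rarely calls for an umbrella
        if wind_speed > 20:
            score -= 0.2                          # strong wind makes an umbrella useless anyway
        return "take_umbrella" if score > 0.5 else "leave_it_behind"

    print(exact_rule(120.00))                               # apply_discount
    print(inexact_rule(0.6, 10, walking_on_beach=True))     # leave_it_behind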

A broader definition of 'rule' can extend well beyond the narrow sense of deductive systems encountered in rule-based "expert" systems and their kin.  This broader definition of rules and rule-based technology includes inexact reasoning based on associations inferred between the subjects of a rule.  Inference by association uses the inductive and abductive modes of inference and a different set of inference engines, such as associative networks, fuzzy logic, 'case-based reasoning' or any other inferential tools that work by association rather than deductive logic.
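
To make the contrast with deduction concrete, the following is a small, hypothetical sketch of inference by association: a toy associative network in which activation spreads from an observed cue to related ideas.  The links, weights and node names are invented purely for illustration and are not drawn from any particular associative-network tool.

    # Hypothetical sketch of inference by association: a toy associative network.
    # The links, weights and node names are invented for illustration only.

    associations = {
        "dark_clouds": [("rain", 0.8), ("wind", 0.4)],
        "rain":        [("umbrella", 0.9), ("traffic_delay", 0.5)],
        "wind":        [("umbrella", -0.3)],   # a negative link argues against the associated idea
    }

    def associate(observations):
        # Start from the observed cues and spread activation breadth-first,
        # visiting each node once; strength decays with the link weights.
        activation = {cue: 1.0 for cue in observations}
        frontier = list(observations)
        while frontier:
            node = frontier.pop(0)
            for neighbour, weight in associations.get(node, []):
                if neighbour not in activation:
                    activation[neighbour] = 0.0
                    frontier.append(neighbour)
                activation[neighbour] += activation[node] * weight
        return activation

    print(associate(["dark_clouds"]))
    # roughly: {'dark_clouds': 1.0, 'rain': 0.8, 'wind': 0.4, 'umbrella': 0.6, 'traffic_delay': 0.4}

Nothing here is deduced with certainty; the network merely suggests which conclusions the observed cues lean toward, which is the essence of reasoning by association.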

In this context, the definition of 'rule based systems' is similar to that of 'knowledge based systems', if more focused on logical inference and less abstract in its application than KBS.  In fact, Wikipedia may have a better name for it than either 'knowledge based systems' or 'rule based systems': knowledge technology.

Knowledge technology is one [ concept ] that adds a layer of intelligence to information technology, to filter appropriate information and deliver it when it is needed.

The term knowledge technologies refers to a fuzzy set of tools including languages and software enabling better representation, organization and exchange of information and knowledge ...

Among knowledge technologies are ontologies, topic maps, blogs, groupware, document management, expertise locators, latent semantic analysis, semantic networks, social networking engines, and wikis.

This sounds very close to the 'broader' definition outlined above.    


More Definitions ...

 
Four Components of a Simple Rule Based System

 

Links to the Subjects

The technical foundations of rule based systems rest largely on the "object" and "expert systems" technology developed in the late 1980s and early 1990s.  It is a very big and complex topic, spanning several dozen major subjects, such as monitoring, planning and diagnostic systems, natural language analysis, machine learning, etc.

Below are four subject areas of rule-based systems which seem to be 1 ) powerful enough to support a simple 'web rules' methodology and toolset and 2 ) simple enough for someone who has life interests outside of computer programming to manage small sets of 'web rules' within a semantic or 'knowledge' web ( whatever those ill-defined terms may turn out to mean in the future ).

 

1 - Logical Inference - Deductive, Inductive and Abductive inference ( a short illustrative sketch follows this list ).

2 - Conceptual Modeling - Qualitative and Commonsense Reasoning, Analytic Frameworks and Design Patterns.

3 - Rule Base Design and Management - Classifying Rules, Rule Structure, Managing Rules.

4 - Workflow Modeling - Special issues for rule based systems concerning surrounding workflows and temporal logic.
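
As a quick, hypothetical illustration of the three inference modes named in the first item, the sketch below runs one toy rule ( "if it rained, then the grass is wet" ) through deduction, induction and abduction.  The function names and data are assumptions made purely for this example and are not drawn from any particular rule engine.

    # Hypothetical illustration of deduction, induction and abduction,
    # all built around one toy rule: if it rained, then the grass is wet.

    rule = ("rained", "grass_wet")

    def deduce(rule, facts):
        # Deduction: rule + antecedent -> consequent (certain, given the rule).
        antecedent, consequent = rule
        return consequent if antecedent in facts else None

    def induce(observations):
        # Induction: repeated paired observations -> propose a general rule (plausible, not certain).
        if all(("rained" in obs) == ("grass_wet" in obs) for obs in observations):
            return ("rained", "grass_wet")
        return None

    def abduce(rule, facts):
        # Abduction: rule + observed consequent -> hypothesize the antecedent as an explanation (a guess).
        antecedent, consequent = rule
        return antecedent if consequent in facts else None

    print(deduce(rule, {"rained"}))                      # grass_wet
    print(induce([{"rained", "grass_wet"}, {"sunny"}]))  # ('rained', 'grass_wet')
    print(abduce(rule, {"grass_wet"}))                   # rained - though a sprinkler would explain it too

Only the deductive step is certain; the other two produce conjectures, which is why they belong on the inexact side of the broader definition above.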

 

 


 

About the Four Subject Areas

 

Just Enough Detail

 

Irreducible Complexity

 

The Fuzziness of Natural Language and Its Perils

 

Principles versus Pragmatics

 

 

Each of the four subjects is worthy of many articles and FAQ sheets.  The objective is to cover each area in just enough detail to give the reader a sense of how the tools might be used to perform practical, knowledge-intensive tasks and at the same time avoid excessive complexity in the subject matter.

However, the subject of 'logical inference' seems to possess a core of irreducible, even irrepressible complexity.  Even when it looks simple on the surface, there will be important details that give the person using inference tools pause for deeper consideration and second thoughts.

The source of the difficulty seems to be in how we use language in a natural, everyday context.  Mapping everyday language into the confines of a conceptual framework is bound to be a complex topic in the best of "well-defined" circumstances, but in an inherently inexact situation, the result can only be more complex and more likely to be erroneous. 

The same problem of fuzzy language is encountered in the subject of workflows, perhaps more critically than in the area of logical inference, since workflows actually do things and consequently have the capability to do them wrong in a potentially disastrous and costly way.

So, obviously there must be a continuous balancing act between principles and pragmatics to achieve a workable, "adequately-defined" solution to the problems raised by inexact reasoning.

 


  

        

>>>  Logical Inference