The Inference Effort

To respond to a query, the inference engine combines the facts with the terminology and axioms of the ontology. Fensel et al. [2000] describe the steps necessary to respond to a query as follows: "First the inference engine translates Frame logic into Predicate logic and second it translates Predicate logic into Horn logic via Lloyd-Topor transformation [Lloyd & Topor, 1984]." This translation process is summarised in Figure 27.
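As a small illustration (not taken from Fensel et al., but a standard textbook example of the technique), the Lloyd-Topor transformation rewrites non-Horn constructs into equivalent Horn clauses; for instance, a disjunction in a rule body is split into two separate clauses:

```latex
% A rule with a disjunctive body ...
p(X) \leftarrow q(X) \lor r(X)
% ... is replaced by two Horn clauses with the same head:
p(X) \leftarrow q(X) \qquad\qquad p(X) \leftarrow r(X)
```

Similar rewritings eliminate nested implications and quantifiers from rule bodies, so that the result is a set of clauses a deductive-database engine can evaluate.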

Figure 27 - Stages and Languages used in the Inference Engine [Fensel et al. 2000]

The result of this transformation process is a normal logic programme. The last stage is a bottom-up fixpoint evaluation procedure, which is carried out with standard techniques from deductive databases. Because negation is allowed in clause bodies, the appropriate semantics and evaluation procedure have to be chosen carefully. If the resulting programme is stratified, stratified semantics are used and evaluated with a technique called dynamic filtering (cf. [Kifer & Lozinskii, 1986; Angele, 1993]). "In order to deal with non stratified negations we have adopted the well-founded model semantics [Van Gelder et al. 1991] and compute this semantics with an extension of dynamic filtering" [Fensel et al. 2000].
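To make the bottom-up fixpoint idea concrete, the following is a minimal sketch of naive bottom-up evaluation for a Datalog-style programme without negation (the predicate names and facts are illustrative, not Ontobroker's actual representation): rules are applied to the known facts until no new fact can be derived.

```python
# Minimal sketch of naive bottom-up fixpoint evaluation for a
# Datalog-style programme (illustrative only; Ontobroker's engine
# additionally handles negation and uses dynamic filtering).
# An atom is (predicate, args); variables are capitalised strings.

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def unify(atom, fact, binding):
    """Try to extend `binding` so that `atom` matches the ground `fact`."""
    pred, args = atom
    fpred, fargs = fact
    if pred != fpred or len(args) != len(fargs):
        return None
    b = dict(binding)
    for a, f in zip(args, fargs):
        if is_var(a):
            if a in b and b[a] != f:
                return None
            b[a] = f
        elif a != f:
            return None
    return b

def substitute(atom, binding):
    pred, args = atom
    return (pred, tuple(binding.get(a, a) for a in args))

def fixpoint(facts, rules):
    """Apply all rules to the known facts until nothing new appears."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # Collect all variable bindings satisfying the whole body.
            bindings = [{}]
            for atom in body:
                bindings = [b2 for b in bindings for f in known
                            if (b2 := unify(atom, f, b)) is not None]
            for b in bindings:
                new_fact = substitute(head, b)
                if new_fact not in known:
                    known.add(new_fact)
                    changed = True
    return known

# Toy ontology fragment: a transitive subclass relation.
facts = {('subclass', ('manager', 'employee')),
         ('subclass', ('employee', 'person'))}
rules = [(('subclass', ('X', 'Z')),
          [('subclass', ('X', 'Y')), ('subclass', ('Y', 'Z'))])]
derived = fixpoint(facts, rules)
# ('subclass', ('manager', 'person')) is now among the derived facts.
```

This naive strategy re-derives everything on each pass; deductive databases use refinements such as semi-naive evaluation and, as cited above, dynamic filtering to avoid the redundant work.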

In order for Ontobroker to be capable of answering very flexible queries such as "which attributes does a class have", the entire knowledge base is represented by only a few predicates. This small number of predicates, combined with the well-founded model semantics, raises severe efficiency problems [Fensel et al. 2000]. As an example from the case study, the query to find an expert, triggered in the BT Expertise map (see 5.2.4), takes 10-20 seconds to evaluate. This period of time could be considered acceptable if the user is convinced that Ontobroker returns more intelligent results and therefore saves a lot of time elsewhere. But if the number of facts in the knowledge base grows over time from currently 81,000 facts (the information from all projects of one year is now included) to at least 400,000 facts (taking into account the project information of the last five years and the fact that more HR will be filled out), it is doubtful whether the waiting time will still be accepted by the user.

To tackle this performance problem, Staab et al. [2000] have proposed several strategies to overcome the performance barrier:

1. The inference engine could be configured to deliver partial answers that are displayed to the user while the inference engine calculates the rest of the list. Answers that do not need to be derived by rules could thus be presented immediately, and the user can start working with these results while waiting for the remainder.

2. The inference engine could cache all facts and intermediate facts derived from earlier queries. The answering process thus optimises itself over time, as queries that build on previously derived facts become faster.

3. The inference engine could be split into several inference engines that execute parts of the query in parallel on different processors. Every inference engine would be responsible for a subset of the rules and facts. A master engine would be responsible for the co-ordination.
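As a hedged sketch of the first strategy (the predicate names and data are illustrative, not taken from Ontobroker), a generator can stream the answers that require no rule derivation immediately, while the expensive inference step runs afterwards:

```python
# Illustrative sketch of strategy 1: stream answers that need no rule
# derivation first, then append answers from the costly inference step.
import time

# Ground facts that can be answered by simple lookup.
facts = {('expert', 'alice', 'xml'), ('expert', 'bob', 'java')}

def derive_more(topic):
    """Stand-in for the expensive rule-based derivation step."""
    time.sleep(0.01)  # pretend the inference engine is working
    return {('expert', 'carol', topic)} if topic == 'xml' else set()

def find_experts(topic):
    """Generator: plain facts are yielded immediately, derived
    answers only once the expensive step has finished."""
    for pred, person, t in sorted(facts):
        if pred == 'expert' and t == topic:
            yield person              # shown to the user straight away
    for _, person, _ in sorted(derive_more(topic)):
        yield person                  # arrives after the derivation

answers = list(find_experts('xml'))   # → ['alice', 'carol']
```

The second strategy maps naturally onto memoisation (e.g. wrapping the derivation step with Python's `functools.lru_cache`), and the third onto distributing rule subsets across worker processes under a coordinating master, though a faithful sketch of the latter would need the engine's actual partitioning scheme.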

