Daniel Poulin[1], Pierre St-Vincent[2] and Paul Bratley[2]
[1]Centre de recherche en droit public
[2]Département d’informatique et de recherche opérationnelle
C.P. 6128, Succursale A
MONTRÉAL (Québec),
Canada H3C 3J7
Abstract. In most expert systems, efforts are made to keep the rules in the knowledge base free from contradictions, because a logical system that generates even a single contradiction may collapse. We argue that not only can contradictions be tolerated, but in fact they are useful.
An excellent test of an argument is to compare it with the best argument for the opposing view. Accordingly, we propose a multilevel architecture, where the object level may include contradictory rules, and the metalevels resolve these conflicts. Once such an architecture exists, there are advantages to allowing contradictions in wider contexts. In a legal context we may want to examine both sides of an argument. In administrative applications, we may need systems that `look over a clerk’s shoulder’ to check that he is following one of several plausible, but not necessarily compatible, approaches. We are currently implementing these ideas.
1 Introduction
In most existing designs for expert systems, contradictions are avoided like the plague. The reasons for this are not far to seek. On the one hand, it may be feared that a system supposed to offer advice will only confuse the user if its recommendations are not clear and unambiguous. On the other, contradictions cause classical logic to collapse: once a contradiction has been derived from a theory (a set of axioms), then any formula whatsoever can also be derived, and there is no longer any meaningful distinction between theorems and non-theorems. Since expert systems typically employ some variant of classical logic as their inference mechanism, they are subject to the same limitation: if the knowledge base contains rules that lead directly or indirectly to a contradiction, then the system can justify giving any answer whatsoever.
The influence of this attitude can be traced in other areas of artificial intelligence, too. Such names as truth-maintenance systems or belief-revision systems strongly suggest that there exists a single, incontrovertibly true state of the world, and that it is the job of the system to find this true state and to ignore any competing interpretations of reality. Even modal systems using a “possible worlds” semantics do not admit direct contradiction: while they may concede that, for some proposition P, possibly P is true and possibly P is false, they do not allow both P and not-P to be asserted in the same world.
In this paper we argue that not merely is it unnecessary to avoid contradictions, but that including them in the rule base of an expert system can be positively advantageous. In section 2 we set out our reasons for believing this, and in section 3 we describe how a system including contradictions might be implemented. Once such an architecture is in place, we can see further advantages to be gained; sections 4 and 5 describe some of these, and section 6 sums up.
2 Contradictions and Confirmation
In many aspects of everyday life, when time and cost do not preclude it, the best way to reach a complex decision is to take advice from several sources and to compare the different arguments that are offered. One `expert’ may have a better reputation or a more competent air than another. Among the defensible conclusions that may be reached, some may rely on rules more universally accepted, or more commonly applied, than others. It is therefore essential to judge the force of the different arguments, and perhaps to order them accordingly.
Suppose you are about to buy a new car, but that you don’t want anything too expensive. Being a cautious sort of soul, you consult two friends that you consider to be experts in the area. The first gives you a rule that can be summarized as
if a car costs more than $40,000, then it is too expensive,
while the second proposes the rule
if a car has a high resale value, then it is not too expensive.
In your opinion, your friends are both reliable judges, so you incorporate both these rules into your decision procedure. What should you do when, after a visit to your local Mercedes dealer, you realize that both the rules apply in this case?
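To make the conflict concrete, here is a minimal sketch of how the two rules might sit side by side in a Prolog knowledge base (the predicate names and figures are ours, invented for illustration; since Prolog does not allow negation in the head of a clause, the negative conclusion is given its own predicate name):

```prolog
% Two conflicting rules from two trusted experts, kept together in the
% same knowledge base. The negative conclusion is represented by an
% explicitly named predicate, since "not" cannot appear in a rule head.
too_expensive(Car)     :- price(Car, Price), Price > 40000.
not_too_expensive(Car) :- high_resale_value(Car).

% Illustrative facts recorded after the visit to the Mercedes dealer.
price(mercedes, 60000).
high_resale_value(mercedes).
```

With these facts, the queries `too_expensive(mercedes)` and `not_too_expensive(mercedes)` both succeed, and nothing at this level records that the two conclusions are incompatible: that is precisely the situation to be managed.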
What you clearly should not do is to argue that since the rules furnished by your experts lead to a contradiction, you are entitled to conclude anything you want (as you could if you relied on classical logic), so you might as well go out and buy that 1935 Hispano-Suiza you have always longed for. Contradictions in classical logic lead the system to collapse, but in everyday life this is not the case. We all manage to live with contradictory beliefs and conflicting tastes without this leading to a breakdown of reason.
It is more sensible temporarily to ignore one of the contradictory rules, and see where that leads you, and then to ignore the other, and follow that argument to its logical conclusion. If you are lucky you may find that in both cases the system suggests the same conclusion: the question of expense may not be the decisive factor in your choice, so that preferring one rule or the other is irrelevant. If on the other hand choosing one rule leads to one conclusion, while choosing the other leads to a different answer, you will be obliged to opt for one conclusion or the other. However this may be easier than choosing between the two original rules.
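A hedged sketch of this strategy in Prolog, continuing the same invented example: the object rules are reified as rule/3 facts so that a small meta-interpreter can prove a goal while one chosen rule is suspended.

```prolog
% The two car rules reified as rule(Id, Head, Body) facts, so that the
% metalevel can refer to them by name; the facts repeat the earlier sketch.
rule(r1, too_expensive(Car),     (price(Car, Price), Price > 40000)).
rule(r2, not_too_expensive(Car), high_resale_value(Car)).

price(mercedes, 60000).
high_resale_value(mercedes).

% prove(Goal, Suspended): prove Goal by backward chaining while the
% rule whose identifier is Suspended is ignored. Plain facts and
% built-in goals are handled by the final clause.
prove(true, _) :- !.
prove((A, B), Suspended) :- !, prove(A, Suspended), prove(B, Suspended).
prove(Goal, Suspended) :-
    rule(Id, Goal, Body),
    Id \== Suspended,
    prove(Body, Suspended).
prove(Goal, _) :-
    \+ rule(_, Goal, _),
    call(Goal).
```

Here `prove(too_expensive(mercedes), r2)` and `prove(not_too_expensive(mercedes), r1)` each succeed, yielding the two separate lines of argument.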
There are several reasons why choosing between conclusions may be easier than choosing between rules. For example, if you are faced with two coherent, complete arguments leading to different conclusions, it may be that either choice is good, and that a distinction must be made on the basis of relatively minor criteria that you had not thought it worthwhile to include in your system. If this is the case, then you can accord whatever weight you now choose to these minor criteria, confident that whichever decision you take can be justified. On the other hand it may be that when you look at the details of the two conflicting arguments, you see that one of them, although derived in accordance with the rules in your system, seems shakier than the other: perhaps at some stage a rule is applied in an inappropriate context, or perhaps some arbitrary cut-off point seems to carry too much weight.
In such a situation the presence of contradictory rules in the system has actually led you to make a better decision. Instead of relying blindly on the rules in the knowledge base, you have in effect used the system to obtain the best argument in favour of one conclusion, and then to obtain the best argument in favour of a different conclusion, before looking at both these arguments in detail and making an overall judgement of their value. Our contention is that contradiction, used in this way, is not a fault in a system, but rather a positive quality to be exploited.
We believe that producing contradictory or conflicting arguments is a better way of allowing the user to make intelligent decisions than attempting to attach certainty measures, probabilities, or some other form of weight to rules in the knowledge base. It is notorious that system designers have great difficulty in assigning such weights in a way that makes them useful and uncontroversial. It is equally clear that even an experienced user can make little of a recommendation along the lines of “You should buy a Toyota with certainty 0.73.” To be told, “You should probably buy a Toyota, but on the other hand you might just prefer a Mercedes. Here are the arguments in favour of each of those choices,” conveys much more information.
In short, we believe that an expert system that, except in clearcut cases, provides only one conclusion or only one point of view is not as useful as one that routinely sets out alternative conclusions, or that gives the best arguments both in favour of and against some proposed course of action. The problem now, of course, is how to implement such a system.
3 Implementation
To implement an expert system capable of handling such considerations, we propose using a multilevel architecture, where basic knowledge is supplied at the object level, and the inference mechanism is controlled at one or more metalevels. This kind of architecture is called `subtask-management’ in [van Harmelen 89].
At the lower, or object level, basic knowledge supplied in the customary way by one or more experts in the appropriate area is expressed using rules. The rule base can be structured in any suitable way; for instance, maintenance may be easier if all the rules supplied by one particular expert are kept together, or alternatively it may be more convenient to group rules concerning the same topics. In areas such as the law, where texts have authority simply by virtue of their existence, isomorphic methods can be used to formalize knowledge [Sergot 86; Bench-Capon 87, 92].
Where we differ from the usual approach, however, is that we take into account the possibility that different experts may have different opinions, so we make no attempt to enforce consistency among the rules. Thus for any particular topic, there may be several corresponding rules in the knowledge base. These object level rules are augmented with attributes representing their relationships and allowing the metalevel to manipulate them. For each rule, these attributes include four lists of identifiers: of directly contradictory rules, of opposed rules, of supporting rules, and of alternate rules. These lists can be constructed during compilation of the knowledge base. Other attributes may be added in the future.
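A hedged sketch of one possible Prolog encoding of these attributes, continuing the invented car example in which each of the two rules directly contradicts the other:

```prolog
% attributes(RuleId, Contradicts, Opposes, Supports, Alternates):
% four lists of rule identifiers, compiled when the knowledge base is
% built. In this small example only the contradiction lists are
% non-empty.
attributes(r1, [r2], [], [], []).
attributes(r2, [r1], [], [], []).
```

A metalevel query such as `attributes(Id, Contradicts, _, _, _), Contradicts \== []` then picks out exactly the contested rules.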
At the object level, the system works in the usual way. Inferences can be made either by forward or backward chaining, depending on whether the call from the metalevel is trying to deduce all the consequences of a given situation, or to establish some particular conclusion. Above the object level, we propose using one or more metalevels to take account of inferential, procedural, and general knowledge.
3.1 Procedural
Giving good advice to a user requires more than the ability to produce one or more acceptable arguments. An expert system must also include representations of procedural and methodological knowledge related to particular situations. For instance, a human expert knows that he must first determine whether the current situation falls into some particular class, whether some unusual exception must be considered in this case, and so on. Knowledge of this type, intimately related to the skills of the human expert, reduces the search space, and helps organise the dialogue with the user. However it cannot conveniently be incorporated within the object level rules. The structure of the object level rules can only provide local guidance as to how these rules should be used.
Several advantages arise when procedural knowledge is separated from the representation of the rules. Aiello [Aiello 88, pp. 246-247] notes that if the way the rule base is to be applied is encoded implicitly at the object level, then this single use is the only one possible. It is preferable to maintain the generality of the rule base. However `general’ rules cannot easily be used in practice unless the system incorporates procedural knowledge. A rule base free from control information is also easier to modify than one where other considerations, such as control of the inference mechanism, complicate the rules. Moreover the separation of procedural knowledge brings other benefits. For instance, the object level rules can be verified and tuned more easily if the user does not have to consider `control.’ Also, differing procedural rules, operating on the same object rules, can be used to implement different tasks.
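As a rough illustration of this separation, procedural metaknowledge can be stated as data that drives one and the same object-level rule base through different tasks; the task and step names below are invented, and the sketch assumes SWI-Prolog.

```prolog
:- use_module(library(lists)).

% Procedural metaknowledge kept apart from the object rules: each task
% is an ordered list of steps, so new tasks can reuse the same rules.
task(advice_giving,   [classify_case, check_exceptions, apply_main_rules]).
task(quick_screening, [classify_case, apply_main_rules]).

run(TaskName) :-
    task(TaskName, Steps),
    forall(member(Step, Steps), perform(Step)).

% Stub: in a full system each step would call the object-level prover.
perform(Step) :- format("carrying out step: ~w~n", [Step]).
```

Changing or adding a task/2 fact changes how the rule base is used without touching a single object-level rule.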
3.2 Inferential
Much as for procedural knowledge, we propose to use metaknowledge to represent the different strategies of argument that can be based on the rules depending on the point of view of the user. In a knowledge base purged of conflicting rules, such `points of view’ would be unthinkable, and whatever line of argument the user wished to adopt, the result would be the same. In the system we propose, the presence of alternative rules allows different inferences starting from the same facts. It is at the metalevel that this flexibility is invoked, giving more or less weight to the various principles involved. Although two parties to a dispute may found their arguments on essentially the same rules, with only a limited number of differences of emphasis, there is frequently a considerable difference in the conclusions they reach. Even if one party is convinced he is right, he may still be interested to know what arguments can be advanced against his position, to prepare answers in advance. The rules of inference at the metalevel must implement a similar strategy for using the object level knowledge.
For a human expert, the ambiguity inherent in many rules poses no serious problem, and an expert system must likewise be capable of giving a plain answer based on the most straightforward reading of the situation. Even with a knowledge base incorporating contradictory rules, such plain answers must be possible. Thus the system must know which rules represent, on the face of things, the most evident items of expertise, so as to give the `straightforward’ answer when required. Using the attributes mentioned earlier, the system will favour the initial use of rules that are not contradicted. It may also accumulate different lines of reasoning supporting one particular conclusion, whether final or intermediate. When no single, obvious answer can be found, it must also be able to build alternative chains of reasoning, sorted according to their persuasive strength.
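A sketch of how these attributes might support the `straightforward’ answer, assuming the rule/3 and attributes/5 facts of the earlier sketches: the prover below chains only through rules whose contradiction list is empty.

```prolog
% A rule is uncontested when its list of directly contradictory rules
% (the first attribute) is empty.
uncontested(Id) :- attributes(Id, [], _, _, _).

% prove_plain(Goal): backward chaining restricted to uncontested rules,
% so that only the most evident items of expertise are used.
prove_plain(true) :- !.
prove_plain((A, B)) :- !, prove_plain(A), prove_plain(B).
prove_plain(Goal) :-
    rule(Id, Goal, Body),
    uncontested(Id),
    prove_plain(Body).
prove_plain(Goal) :-
    \+ rule(_, Goal, _),
    call(Goal).
```

When prove_plain/1 fails, the metalevel knows that no straightforward answer exists and must fall back on building and ranking alternative chains of reasoning.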
Finally, it must be able to produce an argument supporting a predetermined point of view. In this context, the metarules guide hypothetical arguments. At each choice point, where it is possible to derive contradictory conclusions and where backtracking will most often occur, they should choose the direction that best promises to achieve the required goal. They will also be used to produce the best possible argument against a given conclusion, for it is often insufficient to know that an argument can be constructed showing that X is A; it is also necessary to ensure that no better, contradictory arguments showing that X is not A are available. Alternatively, if such arguments do exist, then the user had better be aware of them. To achieve this, the system must identify points in an argument where a different course might have been taken, and indicate the consequences of following the alternative path. The metalevel will use the attribute of the object level rules that contains the identifiers of contradictory rules. With such a mechanism, the system can produce alternative inferences leading to different conclusions.
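One hedged sketch of this search for counterarguments, reusing rule/3, attributes/5 and prove/2 from the earlier sketches (for brevity only the final step of the argument is inspected):

```prolog
:- use_module(library(lists)).

% counter_arguments(Goal, Pairs): for each rule that could establish
% Goal, find every directly contradictory rule that also succeeds when
% the supporting rule is suspended. Each For-Against pair marks a
% choice point where the argument could have gone the other way.
counter_arguments(Goal, Pairs) :-
    findall(For-Against,
            ( rule(For, Goal, _),
              attributes(For, Contradicting, _, _, _),
              member(Against, Contradicting),
              rule(Against, Counter, _),
              prove(Counter, For) ),
            Pairs).
```

For the car example, `counter_arguments(too_expensive(mercedes), Pairs)` returns `[r1-r2]`: the conclusion is derivable, but a contradictory argument through r2 is also available and should be shown to the user.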
In this connection there is an interesting analogy between the logician’s `closed world assumption’ [Reiter 78] and what lawyers call `the burden of proof’. In many current systems, it is assumed that an individual mentioned in the knowledge base does not have some particular property (is not a student, say, to use one familiar example), unless the contrary is explicitly stated or is derivable by the rules. In many situations one side has the burden of proof: in the criminal courts, for example, one is not guilty unless this can be explicitly proved. As a less dramatic example, an investment adviser selling mutual funds must show why it is actually better to move out of the investments the client already has. In other contexts the burden of proof may shift: when buying a car, you probably don’t need to consider pickup trucks unless there is a good argument to the contrary; once you have heard that argument, if you detest pickups, you may search for a good argument to take you back to your original position; and so on. Thus the inferential metarules in our system must be able to use the closed world assumption on different sides of an argument at different stages.
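A small sketch of this shifting use of the closed world assumption, with invented predicates: negation as failure is applied only against the side that bears the burden of proof.

```prolog
:- dynamic convincing_evidence/1.   % no evidence has been asserted yet

% Which claim carries a burden of proof, and which side bears it.
burden(guilty, prosecution).

guilty(X) :- convincing_evidence(X).

% The closed world assumption works against the burdened side only:
% failure to prove guilt is an acquittal, but failure to prove
% innocence establishes nothing.
verdict(X, guilty)     :- guilty(X).
verdict(X, not_guilty) :- burden(guilty, _), \+ guilty(X).
```

Querying `verdict(smith, V)` for a specific individual yields `V = not_guilty` until evidence is asserted; placing the burden/2 fact on the other side of an issue moves the default with it.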
Throughout this discussion we have assumed that the system will be capable of explaining and justifying the inferences it has made in any particular case. Indeed the whole approach to confirming an argument by contradicting it requires a change in emphasis from what the system recommends to how it arrives at its conclusions. Conventional expert systems, of course, do explain their reasoning, partly for the very reasons we have set out: certainty factors and such-like devices are better than nothing for evaluating an argument, but an explanation is preferable. We believe that an explanation of the arguments both for and against a given position is better still.
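For the explanation facility itself, the object-level prover can be made to return a proof tree rather than a bare yes or no; a minimal sketch, assuming the rule/3 facts of the earlier examples:

```prolog
% argue(Goal, Proof): like the earlier prover, but Proof records which
% rules and facts were used, ready to be displayed to the user or
% compared step by step against a competing argument.
argue(true, true) :- !.
argue((A, B), (ProofA, ProofB)) :- !, argue(A, ProofA), argue(B, ProofB).
argue(Goal, by(Id, Goal, SubProof)) :-
    rule(Id, Goal, Body),
    argue(Body, SubProof).
argue(Goal, fact(Goal)) :-
    \+ rule(_, Goal, _),
    call(Goal).
```

For instance, `argue(too_expensive(mercedes), Proof)` binds Proof to `by(r1, too_expensive(mercedes), (fact(price(mercedes, 60000)), fact(60000 > 40000)))`, a structure from which the arguments for and against a conclusion can both be laid out.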
3.3 General
Our proposal provides additional benefits when an attempt is made to model inherently imprecise domains such as the law, economics, financial planning, and so on. Some of the models developed in these domains can be integrated as metalevel knowledge. In legal reasoning, this metalevel knowledge may prove useful to structure valid legal arguments.
A system such as the one we propose would have little chance of success were everyday reasoning as complicated and subtle as proving difficult theorems in mathematics. However it is increasingly accepted that while everyday arguments are, generally speaking, quite broadly based, with a wide choice of rules that can be applied, they are on the other hand not usually deep, and conclusions are reached after the application of a relatively small number of rules. When buying a car, you are more likely to consider several arguments, each quite short, than to construct an elaborate logical edifice depending on a long chain of reasoning. Thus for many purposes it is feasible to use inference mechanisms that might prove impossible to apply in a field characterized by longer chains of reasoning. We believe that, besides legal reasoning, there are many other areas where the chains of reasoning are broad and short: sociology, economics, and the political sciences are examples.
4 Contradiction and Legal Expert Systems
Our team in the Faculty of Law at the Université de Montréal is engaged, among other things, in the design of expert systems to give advice to lawyers, clients, and officials in such areas as unemployment insurance, consumer rights, and so on. Legal reasoning, not merely the law itself, has been an object of study for centuries, and there are few fields where the participants are so acutely concerned not merely with the conclusions of their arguments, but with the methods of argument involved. An expert system which, when presented with a set of facts, replied simply, “Your client is right: go to court,” or “Your client is wrong: take no action,” without being able to explain how the conclusion was reached, would be even less acceptable in the legal domain than elsewhere.
Furthermore lawyers are famous, notorious even, for their ability to take both (or several) sides of an argument. An advocate’s job is to defend his clients’ interests. To this end he requires advice not on how to attain some abstract notion of justice, but on how, within the limits set by professional ethics, to defend one particular point of view. If he is wise, he also takes advice about how his opponent is likely to defend a contradictory position, so as to be ready to refute this counterargument. Thus a useful legal expert system must be able to find and explain arguments both for and against any given position: in other words, it must be able to reason in a context where contradictions are the rule rather than the exception. Indeed, Ashley [Ashley 90] used case-based reasoning to argue with legal precedents, and this work was extended to a combination of case-based and rule-based reasoning in CABARET [Rissland 91]. The problem both systems tackled was to settle the meaning of `open-textured’ concepts, which are hard to define.
It is almost universally accepted by jurists that not all the law applicable in a particular area is found in the source texts. Lawyers use not merely the text of the law, but also rules of interpretation: general principles concerning what is and is not legal, common-sense concepts, and so on. Depending on the rules of interpretation, different, and often contradictory, meanings may be ascribed to the same legal text. In some cases all, or almost all, these principles may indicate that some particular interpretation is to be preferred; but there are also situations where different principles lead to different results [Côté 90].
In a legal expert system, situations of the first type, where one interpretation seems indisputably preferable, are relatively easy to handle. Situations of the second type, where contradictions arise, create difficulties. Many lawyers believe these conflicts can be solved using metarules of interpretation [Wróblewski 83; MacCormick 91].
An adequate legal expert system must be able to handle arguments based on differing, possibly contradictory, interpretations both of the law and of the facts in any particular case. If different arguments lead to different conclusions, then the system must be able to discover at least the most plausible reasoning that could be employed by each side. It must therefore neither eliminate contradictions from its knowledge base, nor collapse into incoherence when a contradiction is encountered.
This point is underlined in [Bench-Capon 88], where an example is given of two conflicting formalizations of the British Nationality Act. For Bench-Capon and Sergot, as for us, “the requirement for conflicting rules, which argue both for and against a conclusion, is essential” [op. cit., page 53]. However, beyond noting that “the implementation problems that arise from this are severe”, they give no hint how this is to be achieved.
We propose using the metareasoning architecture described in section 3 to implement a legal reasoning system that can handle both the adversarial dimension of legal reasoning and the contradictory interpretations. This last topic has recently attracted attention, and the use of metaknowledge has been proposed to resolve rule conflicts in legal expert systems. Hamfelt [Hamfelt 90; 92] uses metareasoning, but does so to derive more specific rules of law from general ones, the process being controlled by the judgment of the user; it remains to be seen whether suitable general rules can be produced. Breuker and den Haan [Breuker 91], Mariani and her team [Guidotti 92], and Yoshino and Kakuta [Yoshino 92] implement priority criteria between legal norms using metareasoning.
5 Advise or Consent?
An expert system that can function in the presence of contradictory rules offers still more possibilities. We envisage, for example, using such a system not to give advice, but rather to give a stamp of approval to a series of decisions, as Bench-Capon suggests [Bench-Capon 87].
Imagine a civil servant working in a government office, filling out a form or taking details for a client’s file. The kind of task we have in mind is not totally routine, but may require that the clerk take decisions about how to classify the case, what actions to set in train, and so on. It is normal for a computer system to check the plausibility of raw data entered in a particular field of a form. What we propose now, however, is that the system should use the rules in its knowledge base to “look over the clerk’s shoulder” and verify that the decisions made are coherent and sensible.
For the level of task we envisage, there may be no simple right or wrong answer to every question that comes up, so the user cannot be constrained to follow a single, undeviating line. At some point a choice may have to be made that leads to divergent, and maybe indeed conflicting, courses of action. Within limits, the system cannot simply decide that the user is wrong, and constrain him or her to act correctly: some decisions may be too subtle for the machine. However it is not unreasonable to expect the system to verify that the decisions made in the course of entering the data, and the conclusion finally reached by the clerk, correspond to an argument that could be made using at least one interpretation of the facts.
To do this, it is once again necessary that the knowledge base of the system hold conflicting or contradictory rules. As before, these can be held in the object level of the knowledge base, while the metarules for verifying consistency of the argument reside in the metalevels. The metarules can be made very tight when the user is to be allowed little discretion in the procedures to be followed, while more tolerant metarules can be used as looser checks on the activities of experienced or more professional users. Bourcier [Bourcier 92, p. 44] has suggested the use in the legal context of a hierarchy of `lawyer’s assistants’ with differing levels of skill and responsibility. The architecture we propose is a step towards achieving this.
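A hedged sketch of the two levels of checking, reusing prove/2 and prove_plain/1 from the earlier sketches: the tolerant check approves any decision derivable under at least one reading of the rule base, while the tight check accepts only what follows from uncontested rules.

```prolog
% Tolerant check, for experienced users: the decision is approved if it
% follows under some interpretation, possibly relying on contested
% rules (no rule carries the identifier 'none', so nothing is
% suspended here).
approve_tolerant(Decision) :- prove(Decision, none).

% Tight check, for users allowed little discretion: the decision must
% follow from uncontested rules alone.
approve_tight(Decision) :- prove_plain(Decision).
```

In the car example, `approve_tolerant(not_too_expensive(mercedes))` succeeds while `approve_tight(not_too_expensive(mercedes))` fails, since the supporting rule is contested.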
If such an architecture is to be used to implement an approval system, then it should respond in `real-time’ to the actions of the clerk. A batch system, working on idle time, would be unsatisfactory. These time considerations could force us to consider using a `bounded rationality’ scheme [Russell 91], where the quality of the solution is limited by the available computing time. It should also be able to accept `special cases’ if the user so wishes, and refer them to the appropriate authority. Finally, in case of refusal, it must justify its decision so the user can identify his mistake.
6 Conclusion
We have proposed an architecture for expert systems that allows contradictory rules to be kept in the knowledge base, using metarules to ensure that any arguments produced are nevertheless coherent. We believe that producing arguments both for and against any given recommendation is a better way of estimating the strength of a case than simply stating one side of the question.
A prototype to test our ideas is currently being implemented at the Université de Montréal. This preliminary version uses an object level and a single metalevel, which implements the different types of metaknowledge. The prototype is written in Prolog, which, while not ideal, is sufficient to allow experimentation. For later versions, after we have gained some experience, we believe it will be worthwhile to implement a special language for the metalevel, and possibly to increase the expressiveness of the object level. Sources of inspiration for a specialized language abound: LLD (the Language for Legal Discourse) for the object level [McCarty 89], and the ideas of Bowen [Bowen 85] and Qu-Prolog [Cheng 91] for the metalevel, to cite only a few.
Although much remains to be done, results so far suggest that there is no serious difficulty in principle in implementing what we now call `a contradiction management system.’
Acknowledgements
The work described here is supported by grants from the Social Sciences and Humanities Research Council of Canada, and from Quebec’s Fonds FCAR.
References
- [Aiello 88]
- Aiello, L., Levi, G., “The uses of metaknowledge in AI systems”, in Meta-level architectures and reflection, eds P. Maes, D. Nardi, North-Holland, 1988
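- [Ashley 90]
- Ashley, K.D., Modeling Legal Argument: Reasoning with Cases and Hypotheticals, MIT Press, Cambridge, Mass., 1990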
- [Bench-Capon 87]
- Bench-Capon, T.J.M., Robinson, G.O., Routen, T.W., Sergot, M.J., “Logic Programming for Large Scale Applications in Law: A Formalization of Supplementary Benefits Legislation”, in Proceedings of the First International Conference on Artificial Intelligence and Law; Northeastern University, Boston, ACM Press, 1987
- [Bench-Capon 88]
- Bench-Capon, T.J.M, Sergot, M., “Toward a Rule-Based Representation of Open Texture in Law”, in Computer Power and Legal Language, ed. C. Walter, Quorum Books, New York, 1988
- [Bench-Capon 92]
- Bench-Capon, T.J.M., Coenen, F., “Isomorphism and Legal Knowledge Based Systems”, Artificial Intelligence and Law, 1(1), 1992
- [Bourcier 92]
- Bourcier, D., “De la règle de droit à la base de règles. Comment modéliser la décision juridique?”, Actes du Séminaire Sciences du texte juridique, Far Hills, Québec, 1992, to appear in: Sciences du texte juridique, Yvon Blais, Cowansville, 1993
- [Bowen 85]
- Bowen, K.A., “Meta-level Programming and Knowledge Representation”, New Generation Computing, 3, 1985
- [Breuker 91]
- Breuker, J., den Haan, N., “Separating world and regulation knowledge: where is the logic ?”, in Proceedings of the Third International Conference on Artificial Intelligence and Law, St. Catherine’s College, Oxford, ACM Press, 1991
- [Cheng 91]
- Cheng, A.S.K., Robinson, P.J., Staples, J., “Higher Level Meta Programming in Qu-Prolog 3.0”, in Proceedings of the 8th International Conference on Logic Programming, ed. K. Furukawa, MIT Press, 1991
- [Côté 90]
- Côté, P.-A., Interprétation des lois, 2e édition, Yvon Blais, Cowansville, 1990
- [Guidotti 92]
- Guidotti, P., Mariani, P., Sardu, G., Tiscornia, D., “Meta-level Reasoning – The Design of a System to Handle Legal Knowledge Bases”, in Settimo Convegno Sulla Programmazione Logica, GULP 92, Tremezzo, 1992
- [Hamfelt 90]
- Hamfelt, A., Building Modular Legal Knowledge Systems. The Multilevel Structure of Legal Knowledge and its Representation, The Swedish Institute of Law and Informatics Research, IRI-Rapport 1990:2, 1990.
- [Hamfelt 92]
- Hamfelt, A., “Metalogic Representation of Multilayered Knowledge”, PhD Thesis, Uppsala Theses in Computer Science 15, Uppsala University, 1992
- [Lenat 83]
- Lenat, D. et al., “Reasoning about Reasoning”, in Building Expert Systems, eds F. Hayes-Roth, D. A. Waterman, D. B. Lenat, Addison-Wesley, 1983
- [MacCormick 91]
- MacCormick, D.N., Summers, R.S. Interpreting Statutes: A Comparative Study, Dartmouth Publishing, Aldershot, 1991
- [McCarty 89]
- McCarty, L.T., “A Language for Legal Discourse: 1 – Basic Features”, in Proceedings of the Second International Conference on AI and Law, UBC, Vancouver, ACM Press, 1989
- [Reiter 78]
- Reiter, R., “On Closed World Data Bases”, in Logic and Data Bases, eds H. Gallaire, J. Minker, Plenum Press, 1978; reprinted in Readings in Artificial Intelligence, eds B.L. Webber, N.J. Nilsson, Morgan Kaufmann, 1981
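- [Rissland 91]
- Rissland, E.L., Skalak, D.B., “CABARET: Rule Interpretation in a Hybrid Architecture”, International Journal of Man-Machine Studies, 34, 1991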
- [Routen 91]
- Routen, T.W., Bench-Capon, T.J.M., “Hierarchical Formalizations”, International Journal of Man-Machine Studies, 35, 1991
- [Russell 91]
- Russell, S., Wefald, E., “Principles of metareasoning”, Artificial Intelligence, 49, 1991
- [Sergot 86]
- Sergot, M.J., Sadri, F., Kowalski, R.A., Kriwaczek, F., Hammond, P., Cory, H.T., “The British Nationality Act as a Logic Program”, Communications of the ACM, 29(5), 1986
- [van Harmelen 89]
- van Harmelen, F., “A Classification of Meta-level Architectures”, in Meta-Programming in Logic Programming, eds H. Abramson, M.H. Rogers, MIT Press, 1989
- [Wróblewski 83]
- Wróblewski, J. “Paradigms of Justifying Legal Decisions”, in Theory of Legal Science, Proceedings of the Conference on Legal Theory and Philosophy of Science, Lund, Sweden, December 1983
- [Yoshino 92]
- Yoshino, H., Kakuta, T., “The Knowledge Representation of Legal Expert System LES-3.3 with Legal Meta-inference”, in Proceedings of the 6th International Symposium on Legal Knowledge and Legal Reasoning Systems, Tokyo, October 1992