AN INTELLIGENT AID FOR CIRCUIT REDESIGN*

Tom M. Mitchell, Louis I. Steinberg, Smadar Kedar-Cabelli, Van E. Kelly, Jeffrey Shulman, Timothy Weinrich
Department of Computer Science, Rutgers University, New Brunswick, NJ 08903

Abstract

Digital circuit redesign is a task that requires knowledge of circuit structure, function, and purpose, and of the interrelationships among these. We describe a knowledge-based system, REDESIGN, which assists in the redesign of digital circuits to meet altered functional specifications. REDESIGN assists the user in focusing on an appropriate portion of the circuit, generating possible local changes within the circuit, ranking these possible changes, and detecting undesirable side-effects of redesigns. It provides this assistance by combining two modes of reasoning about circuits: (1) causal reasoning involving analysis of circuit operation, and (2) reasoning about the purposes, or roles, of various circuit modules within the larger circuit. We describe these two modes of reasoning, and the way in which they are combined by REDESIGN to provide aid in circuit redesign.

I Introduction

The AI/VLSI group at Rutgers is exploring Artificial Intelligence approaches to a new generation of design and debugging aids for digital circuits. This work has led to a general exploration of methods for representing and reasoning about complex artifacts, and about the interrelationships among their purpose, function, and structure. Over the past few years we have developed a prototype intelligent assistant (called REDESIGN) for the functional redesign of digital TTL circuits. This paper reports on the REDESIGN system, its capabilities for representing and reasoning about digital circuits, and its use of these capabilities to assist in circuit redesign.

A. The Problem

In the functional redesign problem the system is given the schematic of a working digital circuit (e.g., a computer terminal), and its functional specifications (e.g., the fact that it displays 80 characters per line, 25 lines per screen, displays the cursor at a programmable address, etc.). The system is also given a data structure called a design plan, which relates the circuit schematic to its specifications. Given a desired change to the functional specifications (e.g., require that the terminal display 72 characters per line), the task is then to redesign the circuit so that it will meet these altered specifications.

*This material is based on work supported by the Defense Advanced Research Projects Agency under Research Contract N00014-81-K-0394. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.
The formulation of the design problem presented here is very similar to planning problems in the AI literature, and the issues addressed in this work are related to those addressed by others working in the areas of planning and design, such as [2, 4, 6, 7, 8, 9, 10]. Our work is also related to that of [1], which deals with recognizing circuits rather than designing them, and which addresses the relations among circuit function, structure, and purpose. Design and redesign are closely related problems. In addressing redesign rather than design, we have focused more on how to describe usefully the interconnected subgoals and constraints which characterize the solution to a design problem, and less on control of search in design.

One view of functional redesign is that it is a form of analogical problem solving. In particular, the original circuit provides a solution to the problem of implementing the original specifications. Given a new set of specifications which is nearly identical, the hope is to implement these specifications in a closely analogous fashion (i.e., by making only minor changes to the original circuit schematic). The key to using the analogy effectively lies largely in having recorded the essentials of the original solution in a fashion that will allow determining which portions can be reused, and which changes will lead to undesirable interactions among subgoal solutions.

The next section discusses the representation of circuits, and the notions of circuit behavior and specifications. The subsequent section describes the two modes of reasoning about circuits employed by REDESIGN: causal reasoning and reasoning about purpose. We then illustrate the use of these modes of reasoning by REDESIGN, by tracing its use for a specific redesign problem.

II Representing Circuits, Behaviors and Specifications

The structure of a circuit is represented by a network of modules and data-paths. A module represents either a single component or a cluster of components being viewed as a single functional block. Similarly, a data-path represents either a wire or a group of wires. The data flowing on a data-path is represented by a data-stream, and the operation performed by a module is represented by a module function. These representations are described in [3, 5].

One aspect of this circuit representation that has been important in REDESIGN is that data-streams represent the entire time history of data values on a data-path, rather than a single value at a single time, as in many circuit simulators. This has proven to allow considerable flexibility in reasoning about circuit behavior over time.

In reasoning about redesign, REDESIGN must distinguish between what happens to be true of the circuit (we refer to this as the circuit behavior), and what must be true for that circuit to work correctly (we refer to this as the circuit specifications). Therefore, for each module function and data-stream, both behavior and specifications are recorded.
For example, the behavior of a particular module may state that its output will be the sum of its inputs, delayed by 100 nanoseconds, while the specifications for that module may simply require that the output be delayed by less than 500 nanoseconds.

While these definitions look straightforward, the notion of a specification must be defined more precisely. It is useful to think of the specification of a module as giving the range within which the behavior of that module can be altered without making the circuit as a whole malfunction. However, the range of acceptable behaviors depends upon what else in the circuit is allowed to change. One could define a module's specifications as the range of acceptable behaviors, assuming the specifications of all other modules remain unchanged, but allowing other modules to have any behavior within their own specifications. We term this kind of specification an s-specification. Or, one could define a module's specification as the range of acceptable behaviors assuming every other behavior in the circuit remains fixed. We term this kind of specification a b-specification.

Notice that the s-specifications of a module will always be at least as restrictive as its b-specifications (since s-specifications are based on weaker assumptions regarding the surrounding circuitry). Thus, it is possible for a component to violate its s-specifications (but not its b-specifications), and for the circuit as a whole to still operate correctly. While top-down design systems typically must deal with s-specifications (since b-specifications are not defined until the final circuit implementation is known), in REDESIGN we have found it most useful to record b-specifications. This is because in considering possible changes to an individual module within a completed design we make the default assumption that the rest of the circuit, and hence the rest of the behaviors, will remain unchanged. Of course, if changes are made in two modules then one must keep track of how changes in each one affect the b-specifications of the other.

III Two Modes of Reasoning about Circuits

A variety of types of questions arise when redesigning a circuit. REDESIGN uses two separate modes of reasoning to answer these questions -- one to analyze circuit operation based on a causal model of the circuit, and one to reason about the purposes of circuit submodules (i.e., their roles in implementing the global circuit specifications). These two modes of reasoning are combined to provide assistance at various stages of the redesign process.

A. Causal Reasoning

Causal reasoning answers questions such as "If input X is supplied to the circuit module, what will the output be?" and "If output Y is desired, what must be provided as inputs to the module?", where X and Y are complete descriptions of data-streams. This kind of causal analysis is performed by the CRITTER subsystem, which propagates behaviors and specifications through the circuit and maintains a Dependency Network that records, for each specification, both its source and the path in the circuit through which it was propagated. See [3, 5] for more information on CRITTER.
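To make the behavior/specification distinction concrete, here is a minimal sketch in Python (our illustration, not REDESIGN's actual representation): a data-stream is modeled as the full time history of values on a data-path, as a list of (time, value) pairs, and the adder example above is checked against its b-specification.

```python
# A minimal sketch (invented representation): behavior and specification are
# both statements about entire time histories of values on a data-path.

def adder_behavior(stream_a, stream_b, delay_ns=100):
    """Behavior: output value is the sum of the inputs, delayed 100 ns."""
    return [(t + delay_ns, va + vb)
            for (t, va), (_, vb) in zip(stream_a, stream_b)]

def meets_spec(in_stream, out_stream, max_delay_ns=500):
    """b-specification: every output appears less than 500 ns after its input."""
    return all(t_out - t_in < max_delay_ns
               for (t_in, _), (t_out, _) in zip(in_stream, out_stream))

a = [(0, 3), (1000, 7)]        # (time_ns, value) pairs
b = [(0, 4), (1000, 1)]
out = adder_behavior(a, b)     # [(100, 7), (1100, 8)]
assert meets_spec(a, out)      # a 100 ns delay satisfies the <500 ns spec
```

Because both behavior and specification range over whole histories rather than single instants, checks like this one can express timing constraints that a single-value simulator cannot.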
B. Reasoning about Purpose

A second kind of reasoning important in redesign concerns the roles, or purposes, of various circuit modules in implementing the overall circuit specifications. Questions of this sort that arise during redesign include "What is the purpose of circuit module M?" and "How are the circuit specifications decomposed into subspecifications to be implemented by separate sections of the hardware?" Questions of this sort can be answered by REDESIGN by examining the Design Plan of the circuit. The Design Plan is a data structure that shows how circuit specifications are decomposed and implemented in the circuit, as well as the conflicts and subgoals that arise during design. It contains enough information to allow "replaying" the original design, and is characterized in terms of a set of implementation rules that embody in executable form general knowledge about circuit design tactics. This Design Plan must be provided to REDESIGN, as part of the characterization of the circuit which is to be redesigned.

In order to illustrate the form of the Design Plan, consider the simple Character Generator Module (CGM) circuit shown in Figure III-1. This circuit is similar to a standard circuit used in most video computer terminals. It is the part of the terminal that translates the ASCII character codes into the corresponding dot matrix to be displayed on the screen. This circuit accepts as input (1) a stream of ASCII encoded Characters, (2) a stream of binary encoded integers, called Slice-Indices, that specify which horizontal slice of the character dot matrix is to be displayed, and (3) several clock signals used for synchronization. The circuit must produce a stream of Character-Slices, each of which is a bit string corresponding to the dots to be displayed on the terminal screen for the selected horizontal slice of the input Character.

[Figure III-1: The Character Generator Module -- Characters, Slice-Indices, and Timing Signals feed a LATCH74175 -> ROM6574 -> SHIFT-REGISTER-74166 chain producing Character-Slices.]

The heart of the CGM design is a read-only memory, the ROM6574. This ROM6574 stores the definition of the character font (the dot matrix to be displayed for each character), one Character-Slice per byte of memory. To retrieve the Character-Slice corresponding to a given Character and Slice-Index, the ASCII code for the character is concatenated with the binary representation of the Slice-Index, and used to address the ROM6574. The other components in this circuit are used to interface the ROM6574 to the desired input and output formats. For example, the CGM specifications require serial output while ROMs produce parallel output. Therefore, a shift register (SHIFT-REGISTER-74166) is used to convert the output data to serial. Also, because the address inputs to the ROM6574 must be stable for at least 500 nsec. while the input Characters are stable for only 300 nsec., a latch (LATCH74175) is used to capture the input Characters, and hold these data values stable for an acceptable duration.

The above paragraph summarizes the purpose of each circuit component and the conflicts and subgoals that appear during design. This is precisely the kind of summary that must be captured in the Design Plan, in order to allow the REDESIGN program to reason effectively about the design and about the purposes of individual circuit components. Figure III-2 illustrates the Design Plan used to describe the CGM circuit to REDESIGN. Each node in the Design Plan corresponds to some abstracted circuit module whose implementation is described by the hierarchy below it. The topmost node in this Design Plan represents the entire CGM and its functional specifications. The bottommost nodes in the Design Plan represent individual components in the circuit. Each solid vertical link between modules in the Design Plan corresponds to some implementation choice in the design, and is associated with some general implementation rule which, when executed, could recreate this implementation step.
For example, the vertical link leading down from the topmost module in the figure represents the decision to use a Read-Only Memory (ROM) to implement the CGM. This implementation choice is associated with the implementation rule which states "IF the goal is to implement some finite mapping between input and output data values, THEN use a ROM whose contents store the desired mapping" (note this leaves open the choice of the exact type of ROM). Each dashed link in the Design Plan represents a conflict arising from some implementation choice or choices, and leads to a design subgoal, represented by a new circuit module with appropriate specifications. For example, a conflict follows from the implementation choice to use a ROM, and leads to the subgoal module labelled "Parallel-to-Serial-Subgoal". The conflict in this case is the discrepancy between the known output signal format of ROMs (i.e., parallel) and the required output signal format of the CGM (i.e., serial). The specifications of the new subgoal module are therefore to convert the parallel signal to serial. In a similar fashion, the implementation choice to use the specific ROM6574 leads to another conflict, and to the resulting subgoal to extend the duration of the input data elements.

By examining the Design Plan of a circuit, REDESIGN is able to reason about purposes of various circuit modules, and about the way in which the circuit specifications are implemented. The general implementation rules used to summarize the design choices can be used to "replay" the Design Plan for similar circuit specifications, and thus allow for a straightforward kind of design by analogy.
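As a rough sketch of how such an implementation rule might be encoded (the node structure and names below are our invention, not REDESIGN's internals), the ROM rule can be written as a procedure that expands a Design Plan node and records the conflict-derived subgoal:

```python
from dataclasses import dataclass, field

@dataclass
class PlanNode:
    purpose: str                                   # abstract spec this node implements
    rule: str = ""                                 # implementation rule that expanded it
    children: list = field(default_factory=list)   # implementation hierarchy below
    subgoals: list = field(default_factory=list)   # conflict-derived design subgoals

def rom_rule(node):
    """IF the goal is a finite input->output mapping, THEN use a ROM.
    The parallel-output-vs-serial-spec conflict spawns a subgoal node."""
    if "finite mapping" in node.purpose:
        node.rule = "use-ROM"
        node.children.append(PlanNode("store mapping in ROM contents"))
        node.subgoals.append(PlanNode("convert parallel output to serial"))
        return True
    return False

cgm = PlanNode("finite mapping: Character + Slice-Index -> Character-Slice")
assert rom_rule(cgm) and cgm.subgoals[0].purpose.startswith("convert")
```

Storing the rule name on each expanded node is what makes the plan replayable: reinvoking the recorded rules against changed specifications is exactly the "design by analogy" step described above.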
IV Redesigning a Circuit

This section illustrates the use of both causal reasoning and reasoning about purpose in redesigning a circuit. It traces the actions of the REDESIGN program as it took part in a particular redesign of the Video Output Circuit (VOC) of a computer terminal. The Video Output Circuit (which contains the Character Generator Module discussed earlier) is shown in Figure IV-1. It is the part of the computer terminal that produces the composite video information to be displayed on the terminal screen. It produces this output from its combined inputs, which include the characters to be displayed, the cursor position, synchronization information for blanking the perimeter of the terminal screen, and special display commands (e.g., to blink a particular character). In this example, we consider redesigning the VOC to display characters in an italics font rather than its current font.

Given a redesign problem, REDESIGN guides the user through the following sequence of five subtasks: (1) focus on an appropriate portion of the circuit, (2) generate redesign options to the level of proposed specifications for individual modules, (3) rank the generated options, (4) implement the selected redesign option, and (5) detect and repair side effects resulting from the redesign. A more complete trace and discussion of this example is given in [5].

Focus attention on appropriate section(s) of the circuit. In many cases, the most difficult step in functional redesign is determining which portions of the circuit should be ignored. Focusing on relevant details in one locality of the circuit while ignoring irrelevant details in other localities can greatly simplify the complexity of redesign. In order to determine an appropriate focus, REDESIGN "replays" the Design Plan by reinvoking the recorded implementation rules with the changed circuit specifications. During this replay process, whenever an abstract circuit module is produced by some implementation step, its purpose is compared with the purpose of the corresponding module in the original Design Plan. If the purpose is unchanged, then the original implementation of this module will be reused without change in the new design**. If the new module has a different purpose than the corresponding module in the old Design Plan (e.g., the new CGM must implement a different character font), an attempt is still made to apply the same implementation rule as in the original design (e.g., still try to use a ROM). If this implementation rule is not useful in the new design (as with the rule that suggests using the specific ROM6574), then REDESIGN stops expanding this portion of the Design Plan, and marks the corresponding portion of the circuit as a portion to be focused on for further redesign.

[Figure III-2: Design Plan for the CGM. Figure IV-1: The Video Output Circuit.]

The use of the Design Plan as sketched above leads in the current example to a focus on redesigning the abstract ROM module within the CGM within the VOC circuit. This abstract ROM module is implemented in the current circuit by two components as shown in Figure III-2 (the ROM6574 and LATCH74175). A second method of focusing is possible, by using the Dependency Network produced by CRITTER. This method involves isolating those points in the circuit that possess specifications derived from the changed specification on the output data-stream. The resulting focus is generally broader than that determined from the Design Plan, because out of the many places in the circuit that can impact any given output specification, only a small proportion of these involve circuitry whose main purpose is to implement that specification.

Generate redesign options to the level of proposed specifications for individual circuit modules. Once an initial focus for the redesign has been determined, redesign options are generated which recommend either altering the specifications of individual modules, or adding new modules with stated specifications. In both cases, only the new functional specifications are determined at this point -- the circuitry to implement these specifications is determined later. The constraint propagation capabilities of CRITTER provide the basis for generating these redesign options. In the current example, once REDESIGN has focused on the section of the VOC including the ROM6574 and LATCH74175, it considers the new output specification for this circuit segment, and propagates it back through this segment. Before each propagation step, REDESIGN considers the option of breaking the wire at that point and inserting a module to transform the values on that wire to values satisfying the required specification. In addition, it considers the option of altering the module immediately upstream, so that it will provide the required signal at that point. For each of the generated options, the new functional specifications are defined in terms of (1) the new specification to be achieved, and (2) a list of unchanged specifications found in the original Dependency Network, which are to be maintained.
In the current example, the option generation process produces a list of five candidate redesign options. This list includes redesign options such as "replace the ROM6574 by a module which stores the new character font", and "introduce a new module at the output of the ROM6574, which will transform the output values into the desired font" (these options are described by the program in a formal notation, and the above are only English summaries).

Rank the generated redesign options. Heuristics for ranking redesign options can be based on a variety of concerns: (1) the estimated difficulty of implementing the redesign option (e.g., components with zero delay cannot be built), (2) the likely impact of the implemented redesign on global criteria such as power consumption and layout area, and (3) the likelihood and severity of side effects that might be associated with the redesign***. In the current example, the heuristic that selects the appropriate redesign option suggests "Favor those redesign options that replace existing modules whose purpose has changed." In this case, since the purpose of the ROM6574 has changed, the option of replacing this component is recommended.

The recorded Dependency Network and Design Plan also provide very useful information for estimating the relative severity of various changes to the circuit. Because the Design Plan shows the dependencies among implementation decisions (e.g., the purpose for the LATCH74175 is derived from the decision to use the specific ROM6574), it provides a basis for ordering the importance of components and associated constraints in the overall design (e.g., if the ROM6574 is removed, the LATCH74175 may no longer have a purpose for existing). This ordering of circuit modules, and of the data-stream constraints that they impose, provides an important basis for estimating the relative extent of side effects associated with their change.

Implement the selected redesign option. The above steps translate the original redesign request into some set of more local (and hopefully simpler) specification changes. While the implementation rules that REDESIGN possesses can be used for design****, we have not focused on automating this step. Thus, the user is left to implement the redesign option.

Detect and repair side effects arising from the redesign. Once the redesigned circuit is produced, REDESIGN checks the new circuit segment to try to determine (a) that it does achieve the desired new purpose, and (b) that it does not lead to undesirable side effects. Undesirable side effects are detected as violations of the Dependency Network specifications at the inputs and outputs of the altered circuit segment. If a specification is violated, the new circuitry might be redesigned, or the specification might itself be modified or removed by redesigning a different portion of the circuit. The Dependency Network can be examined to determine the source of the violated specification, and to determine the locus of circuit points at which the specification could be altered.

**One must still make certain that changes elsewhere in the design do not interact dangerously with the implementation of this module. In REDESIGN, this is accomplished without having to directly examine the implementation of the module. Instead, design changes elsewhere in the circuit are checked for consistency with the constraints recorded in the Dependency Network produced by CRITTER.

***The current REDESIGN system has only a primitive set of heuristics for ranking redesign options.
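A hedged sketch of this side-effect check (our own formulation; REDESIGN's Dependency Network records richer information, and the 700 ns bound below is invented): each recorded constraint carries its source, so a violation points back to the circuit points at which a repair could be made.

```python
# Each Dependency Network entry (invented structure) records a constraint,
# a check over the proposed segment behavior, and the point it derives from.
def violated(dependency_net, behavior):
    """Return (description, source) for every constraint the change violates."""
    return [(c["desc"], c["source"]) for c in dependency_net
            if not c["check"](behavior)]

net = [
    {"desc": "address stable >= 500 ns", "source": "ROM6574 input spec",
     "check": lambda b: b["stable_ns"] >= 500},
    {"desc": "output delay < 700 ns", "source": "CGM output spec",
     "check": lambda b: b["delay_ns"] < 700},
]
print(violated(net, {"stable_ns": 300, "delay_ns": 650}))
# -> [('address stable >= 500 ns', 'ROM6574 input spec')]
```

The returned sources are exactly the "locus of circuit points" mentioned above: either the new circuitry is revised to satisfy the constraint, or the portion of the circuit named as the source is redesigned so the constraint can be relaxed.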
V Summary

REDESIGN is a research prototype system that demonstrates the feasibility of providing intelligent aids for redesign and design of digital circuits. It provides aid in focusing attention on an appropriate portion of the circuit, in generating and ranking redesign options, and in monitoring and manipulating the many constraints involved in making a design work. While the current REDESIGN system has many limitations (e.g., in the size of circuits it can handle, its inability to help with certain classes of redesigns, shortcomings of its causal reasoning methods, incompleteness of its knowledge base of implementation rules, etc.) the basic representations and approaches to reasoning appear useful.

Several aspects of our approach have contributed to the success of REDESIGN. The most apparent of these is the combined use of reasoning about causality in the circuit, and reasoning about the purposes of parts of the circuit. There are also some important aspects to how REDESIGN reasons about causality and purpose. In reasoning about causality, REDESIGN describes both the behavior and the specifications for a data-stream, in a way that allows it to describe entire histories, not just data-stream values at particular time instants. REDESIGN can propagate these descriptions through the circuit, to build a Dependency Network showing how the specifications for each data-stream are derived from the behaviors of the modules and the specifications for the circuit as a whole. In reasoning about purposes, we have viewed the original design process essentially as a planning problem, with subgoals derived both from the decomposition of parent goals and from conflicts between other subgoals. The Design Plan provides REDESIGN with an explicit summary of this planning process, with detail enough to replay the process, and to examine the particular relationships among design goals and subgoals.

****We have recently begun an effort to build a VLSI Design Consultant system which uses similar rules for automated design.

References

[1] de Kleer, Johan. Causal and Teleological Reasoning in Circuit Recognition, Ph.D. dissertation, Massachusetts Institute of Technology, January 1979.

[2] Green, C., et al., "Research on Knowledge-Based Programming and Algorithm Design", Research Report KES.U.81.2, Kestrel Institute, September 1982.

[3] Kelly, V., Steinberg, L., "The CRITTER System: Analyzing Digital Circuits by Propagating Behaviors and Specifications," Proceedings of the National Conference on Artificial Intelligence, August 1982, pp. 284-289. Also Rutgers Computer Science Department Technical Report LCSR-TR-30, and Re-Design Project Working Paper #6.

[4] McDermott, J., "Domain Knowledge and the Design Process," Proceedings of the 18th Design Automation Conference, IEEE, Nashville, 1981.

[5] Mitchell, T., Steinberg, L., Kedar-Cabelli, S., Kelly, V., Shulman, J., and Weinrich, T., "REDESIGN: A Knowledge-Based System for Circuit Redesign", Technical Report DCS-TR, Rutgers Univ., April 1983.

[6] Mostow, D.J., and Lam, M., "Transformational VLSI Design: A Progress Report", Technical Report, USC-ISI, November 1982.
[7] Rich, Charles; Shrobe, Howard E.; Waters, Richard C., "Computer Aided Evolutionary Design for Software Engineering", AI Memo 506, Massachusetts Institute of Technology, January 1979.

[8] Stefik, Mark Jeffrey, Planning With Constraints, Ph.D. dissertation, Stanford University, January 1980.

[9] Sussman, Gerald Jay; Holloway, Jack; Knight, Thomas F., Jr., "Computer Aided Evolutionary Design for Digital Integrated Systems", AI Memo 526, Massachusetts Institute of Technology, May 1979.

[10] Wile, David S., "Program Developments as Formal Objects", Technical Report, Information Sciences Institute, July 1981.
A NEW INFERENCE METHOD FOR FRAME-BASED EXPERT SYSTEMS

James A. Reggia, Dana S. Nau, and Pearl Y. Wang
Department of Computer Science, University of Maryland, College Park, MD 20742

ABSTRACT

This paper introduces a new frame-based model of diagnostic reasoning which is based on a generalization of the classic set covering problem in mathematics. The model directly handles multiple simultaneous disorders, it can be formalized, it is intuitively plausible, it provides an approach to partial matching, and it is justifiable in terms of past empirical studies of human diagnostic reasoning. We are using this model as an inference method in diagnostic expert systems, and contrast it with the inference methods used in previous similar systems.

DIAGNOSTIC PROBLEM SOLVING

A diagnostic problem is a problem where one is given a set of abnormal findings (manifestations) for some system, and must explain why those findings are present. Diagnostic problems are common, occurring in medicine, software debugging, automotive repair, electronic circuit fault localization, etc. Search methods, statistical pattern classification, and rule-based deduction face significant limitations when applied to such problems [Reggia, 1982]. Recently a variety of inference methods which model the hypothesize-and-test process involved in human diagnostic reasoning have been proposed, especially in medicine (e.g., [Aikins, 1980; Mittal et al, 1979; Pauker, 1976; Miller et al, 1982; Pople, 1977; Patil et al, 1981]). While these models have produced impressive performance at times, they currently face a number of limitations when applied to real-world problems [Reggia, 1982]. For example, problems where multiple disorders are present simultaneously have proven very difficult to handle [Pople, 1977]. In addition, AI models of diagnostic reasoning are often criticized as being "ad hoc" by individuals outside of AI because of the absence of a formal, domain-independent theoretical foundation (e.g., [Ben-Bassat et al, 1980]).

Acknowledgement: Supported by NINCDS through grants 5 K07 NS 00348 and 1 PO1 NS 16332. Dr. Nau was supported in part by NSF grant MCS81-17391. Computer time was provided in part by the Computer Science Center of the University of Maryland.

This paper introduces a new description-based (frame-based) model of diagnostic reasoning which is founded on a generalization of the set covering problem. This model, which we call the "generalized set covering" or GSC model, is of interest for several reasons. It directly addresses the problem of multiple simultaneous disorders, it provides a basis for a formal theory of diagnostic inference, and it provides an approach to such issues as partial match and inference in the context of incomplete problem data. The GSC model is summarized here informally, and further details and example applications are available in [Reggia, 1981; Reggia et al, 1983]. We have already used this model to implement both medical and non-medical expert systems. We view our work as an effort to bring mathematical rigor to an area of AI where it has previously been relatively lacking, and as an attempt to create an abstraction of expert system implementations in the sense that Nilsson has recommended [Nilsson, 1980].
BASILAR MIGRAINE
[DESCRIPTION:
AGE = FROM 20 THRU 30 <H>, 30 THRU 50 <L>, 50 THRU 110 <N>;
DIZZINESS [TYPE = VERTIGO <H>, REST <L>;
COURSE = EPISODIC [EPISODE DURATION = MINUTES <L>, HOURS <H>, DAYS <L>], ACUTE AND PERSISTENT];
HEAD PAIN <A> [LOCATION = OCCIPITAL <H>, REST <L>];
NEUROLOGICAL SYMPTOMS = TINNITUS <M>, DIPLOPIA [DURATION = TRANSIENT DURING DIZZINESS <A>], SYNCOPE;
NEUROLOGICAL EXAM FINDINGS = HOMONYMOUS FIELD CUT [DURATION = TRANSIENT DURING DIZZINESS],
EEG FINDINGS [TYPE = NON-SPECIFIC <H>, REST <L>; DURATION = TRANSIENT DURING DIZZINESS] ]

Figure 1: A DESCRIPTION for BASILAR MIGRAINE.

KNOWLEDGE REPRESENTATION

The basic unit of associative knowledge used by the GSC model is the frame-like DESCRIPTION. For each possible causative disorder in the domain of a knowledge base there is a corresponding DESCRIPTION. Figure 1 illustrates a DESCRIPTION for the disorder BASILAR MIGRAINE from the knowledge base of a diagnostic expert system dealing with the problem of dizziness [War+, 1982]. Letters in angular brackets represent subjective indications of frequency (A = always, H = high, M = medium, L = low, N = never). Figure 1 means: "Basilar migraine usually occurs in individuals from 20 to 30 years old, but may occur up to age 50. If a person is over 50, basilar migraine can be categorically discarded as a possible etiological factor. Basilar migraine causes dizziness which is usually of a vertiginous nature and occurs either in an episodic or an acute and persistent fashion. When episodic, the dizziness usually lasts for hours but may last for minutes or days. Headache, usually in an occipital location, is always present. Neurological symptoms caused by basilar migraine are . . .". In the current dizziness knowledge base there are 50 disorders like basilar migraine. The key point is that each disorder has an associated DESCRIPTION that specifies, among other things, all manifestations caused by the disorder.

GENERALIZED SET COVERING AS A MODEL OF DIAGNOSTIC INFERENCE

The GSC model provides a useful method for making diagnostic inferences from DESCRIPTIONS without the use of production rules. In the GSC model the underlying knowledge for a diagnostic problem is viewed as pictured in Figure 2a. There are two disjoint finite sets which define the scope of diagnostic problems: D, representing all possible disorders di that can occur, and M, representing all possible manifestations mj that may occur when one or more disorders are present. For example, in medicine, D might represent all known diseases (or some relevant subset of all diseases), and M would then represent all possible symptoms, examination findings, and abnormal laboratory results that can be caused by diseases in D. To capture the intuitive notion of causation, we assume knowledge of a relation C ⊆ D × M, where <di, mj> ∈ C represents "di can cause mj." Note that <di, mj> ∈ C does not imply that mj necessarily occurs when di is present, but only that mj may be caused by di. Given D, M, and C, the following sets can be defined:

man(di) = {mj | <di, mj> ∈ C} for all di ∈ D, and
causes(mj) = {di | <di, mj> ∈ C} for all mj ∈ M.

Figure 2: Organization of diagnostic knowledge (a) and problems (b).
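In code, man and causes are simply the two projections of the relation C. A minimal sketch with a toy relation (not the dizziness knowledge base):

```python
# D, M, and the causation relation C as plain Python sets (toy values).
D = {"d1", "d2"}
M = {"m1", "m2", "m3"}
C = {("d1", "m1"), ("d1", "m2"), ("d2", "m2"), ("d2", "m3")}

def man(di):
    """man(di) = {mj | <di, mj> in C}: all manifestations di can cause."""
    return {mj for (d, mj) in C if d == di}

def causes(mj):
    """causes(mj) = {di | <di, mj> in C}: the differential diagnosis of mj."""
    return {di for (di, m) in C if m == mj}

assert man("d1") == {"m1", "m2"}
assert causes("m2") == {"d1", "d2"}
```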
These sets are depicted in Figure 2a, and represent all possible manifestations caused by di, and all possible disorders that cause mj, respectively. These concepts are intuitively familiar to the human diagnostician. For example, medical textbooks frequently have descriptions of diseases which include, among other facts, the set man(di) for each disease di. As noted earlier, the DESCRIPTION of BASILAR MIGRAINE in Figure 1 explicitly defined man(BASILAR MIGRAINE). In addition, physicians often refer to the "differential diagnosis" of a symptom, which corresponds to the set causes(mj). Clearly, if man(di) is known for every disorder di, then the causal relation C is completely determined. We will use man(D') = ∪ man(di) over all di ∈ D' to indicate all possible manifestations of a set of disorders D' ⊆ D, and causes(M') = ∪ causes(mj) over all mj ∈ M' to indicate all possible causes of any manifestation in M' ⊆ M. Finally, there is a distinguished set M+ ⊆ M which represents those manifestations which are known to be present (see Figure 2b). Whereas D, M, and C are general knowledge about a class of diagnostic problems, M+ represents the manifestations occurring in a specific case. Using this terminology, we define a diagnostic problem P to be a 4-tuple <D, M, C, M+> where these components are as described above. We assume that man(di) and causes(mj) are always non-empty sets.

We now turn to defining a solution to a diagnostic problem by first introducing the concept of explanation.

Definition: For any diagnostic problem P, E ⊆ D is an explanation for M+ if (i) M+ ⊆ man(E), or in words: E covers M+; and (ii) |E| ≤ |D'| for any other cover D' of M+, i.e., E is minimal.

This definition captures what one intuitively means by "explaining" the presence of a set of manifestations. Part (i) specifies the reasonable constraint that a set of disorders E must be able to cause all known manifestations M+ in order to be considered an explanation for those manifestations. Part (ii) specifies that E must also be one of the smallest sets to do so, reflecting the Principle of Parsimony or Ockham's Razor: the simplest explanation is the preferable one. This principle is generally accepted as valid by human diagnosticians. Here, we have equated "simplicity" with minimal cardinality, reflecting an underlying assumption that the occurrence of one disorder di is independent of the occurrence of another. An explanation is a generalization of the concept of a minimal set cover [Edwards, 1962]. One difference from the traditional set cover problem in mathematics is that when M+ ≠ M, man(E) may be a superset of M+. This difference reflects the fact that sometimes when a disorder is present not all of its manifestations occur.

With these concepts in mind, we can now define the solution to a diagnostic problem P, designated Sol(P), to be the set of all explanations for M+. Thus, solving a diagnostic problem in the GSC model involves a second generalization of the traditional set covering problem: we are interested in finding all explanations rather than a single minimal cover.

Example: Let P = <D, M, C, M+> where D = {d1,...,d9}, M = {m1,...,m6}, and the sets man(di) are as specified in Table 1. Note that Table 1 implicitly defines the relation C, because C = {<di, mj> | mj ∈ man(di) for some di}. Let M+ = {m1, m4, m5}. No single disorder can cover (account for) all of M+, but some pairs of disorders do cover M+. For instance, if E = {d1, d7} then M+ ⊆ man(E). Since there are no covers for M+ of smaller cardinality than E, it follows that E is an explanation for M+. Careful examination of Table 1 should convince the reader that

Sol(P) = { {d1 d7} {d1 d8} {d1 d9} {d2 d7} {d2 d8} {d2 d9} {d3 d8} {d4 d8} }

is the set of all explanations for M+.

di   man(di)      di   man(di)
d1   m1 m4        d6   m2 m3
d2   m1 m3 m4     d7   m3 m5
d3   m1 m2        d8   m4 m5 m6
d4   m1 m6        d9   m2 m5
d5   m2 m3 m4

Table 1: Knowledge about a class of diagnostic problems (C is implicitly defined by this table).

Rather than representing the solution to a diagnostic problem as an explicit list of all possible explanations for M+, it is advantageous to represent it as a collection of explanation generators. A generator is analogous to a Cartesian set product, the difference being that the generator produces unordered sets rather than ordered tuples. To illustrate this idea, consider the example diagnostic problem above. Two explanation generators are sufficient to represent the solution to that problem: {d1 d2} × {d7 d8 d9} and {d3 d4} × {d8}. The second generator represents two explanations {d3 d8} and {d4 d8}, while the first generator represents the other six explanations in the solution. Generators are usually a more compact form of the explanations present in the solution, they are a convenient representation for developing algorithms to process explanations sequentially (see below), and they are closer to the way the human diagnostician organizes the possibilities during problem solving (i.e., the "differential diagnosis").

In adapting the GSC model for use in a real-world expert system several issues were addressed and resolved. One of these issues is the fact that diagnostic problem solving is inherently sequential in nature. The human diagnostician usually begins knowing only that one or a few manifestations are present, and must actively seek further information about others. This sequential diagnostic process can be captured in terms of the GSC model, and represents a third generalization of the traditional set covering problem. The tentative hypothesis at any point during problem solving is defined to be the solution for those manifestations already known to be present, assuming, perhaps falsely, that no additional manifestations will be subsequently discovered. To construct and maintain a tentative hypothesis like this, three data structures prove useful:

MANIFS: the set of manifestations present so far;
SCOPE: causes(MANIFS), the set of all diseases di for which at least one manifestation is already known to be present; and
FOCUS: the tentative solution for just those manifestations already in MANIFS; FOCUS is represented as a collection of generators.

These data structures are manipulated as follows:
(1) Get the next manifestation mj.
(2) Retrieve causes(mj) from the knowledge base.
(3) MANIFS ← MANIFS ∪ {mj}.
(4) SCOPE ← SCOPE ∪ causes(mj).
(5) Adjust FOCUS to accommodate mj.
(6) Repeat this process until no further manifestations remain.

Thus, as each manifestation mj that is present is discovered, MANIFS is updated simply by adding mj to it. SCOPE is augmented to include any possible causes di of mj which are not already contained in it. Finally, FOCUS is adjusted to accommodate mj based partially on intersecting causes(mj) with the sets of disorders in the existing generators [Reggia et al, 1983]. These latter operations are done such that any explanations which can no longer account for the augmented MANIFS (which now includes mj) are eliminated.
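The incremental generator-intersection adjustment of FOCUS is detailed in [Reggia et al, 1983]; the sketch below is our own simplification that recomputes the set of explanations from scratch at each step. That is exponential in general, but it reproduces the trace shown in Figure 3 below on the running example (using the Table 1 values as reconstructed above):

```python
from itertools import combinations

# man(di) from Table 1.
MAN = {"d1": {"m1", "m4"}, "d2": {"m1", "m3", "m4"}, "d3": {"m1", "m2"},
       "d4": {"m1", "m6"}, "d5": {"m2", "m3", "m4"}, "d6": {"m2", "m3"},
       "d7": {"m3", "m5"}, "d8": {"m4", "m5", "m6"}, "d9": {"m2", "m5"}}

def causes(mj):
    return {d for d, ms in MAN.items() if mj in ms}

def all_explanations(manifs, scope):
    """All minimal covers of manifs drawn from scope (brute force)."""
    for size in range(1, len(scope) + 1):
        covers = [set(c) for c in combinations(sorted(scope), size)
                  if manifs <= set().union(*(MAN[d] for d in c))]
        if covers:
            return covers        # the first non-empty size is the minimal one
    return []

manifs, scope = set(), set()
for mj in ["m1", "m4", "m5"]:    # manifestations found present, in sequence
    manifs |= {mj}               # step (3)
    scope |= causes(mj)          # step (4)
    focus = all_explanations(manifs, scope)   # step (5), recomputed from scratch
    print(mj, sorted(sorted(e) for e in focus))
# m1 -> [['d1'], ['d2'], ['d3'], ['d4']]
# m4 -> [['d1'], ['d2']]
# m5 -> the eight two-disorder explanations of Sol(P)
```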
Figure 3 illustrates this algorithm with a "trace" based on the earlier example:

Events in order of discovery | MANIFS | SCOPE | FOCUS
Initially | ∅ | ∅ | ∅
m1 present | {m1} | {d1 d2 d3 d4} | {d1 d2 d3 d4}
m2 absent | " | " | "
m3 absent | " | " | "
m4 present | {m1 m4} | {d1 d2 d3 d4 d5 d8} | {d1 d2}
m5 present | {m1 m4 m5} | {d1 d2 d3 d4 d5 d7 d8 d9} | {d1 d2} × {d7 d8 d9} and {d8} × {d3 d4}
m6 absent | " | " | "

Figure 3: Sequential problem solving using the set covering model.

DISCUSSION

This paper has proposed the construction and maintenance of generalized minimal set covers ("explanations") as a model of diagnostic reasoning and as a method for diagnostic expert systems. The GSC model is attractive for several reasons: it directly handles multiple simultaneous disorders, it can be formalized, it is intuitively plausible, it provides an approach to partial matching, and it is justifiable in terms of past empirical studies of diagnostic reasoning (e.g., [Elstein et al, 1978; Kassirer et al, 1978]). To our knowledge the analogy between the classic set covering problem and general diagnostic reasoning has not previously been examined in detail, although some related work has been done (e.g., assignment of HLA specificities to antisera; see [Nau et al, 1978; Woodbury et al, 1979]). As noted earlier, other aspects of the GSC model relevant to expert systems, such as question generation, termination criteria, ranking of competing disorders, and problem decomposition are discussed elsewhere [Reggia et al, 1983 and 1984].

The GSC model provides a useful context in which to view past work on diagnostic expert systems. In contrast to the GSC model, most diagnostic expert systems that use hypothesize-and-test inference mechanisms or which might reasonably be considered as models of diagnostic reasoning depend heavily upon the use of production rules (e.g., [Aikins, 1980; Mittal et al, 1979; Pauker et al, 1976]). These systems use a hypothesis-driven approach to guide the invocation of rules which in turn modify the hypothesis. Rules have long been criticized as a representation of diagnostic knowledge [Reggia, 1982], and their invocation to make deductions or perform actions does not capture in a general sense such intuitively attractive concepts as coverage, minimality, or explanation.

Perhaps the previous diagnostic expert system whose inference method is closest to the GSC model is INTERNIST [Miller et al, 1982]. INTERNIST represents diagnostic knowledge in a DESCRIPTION-like fashion and does not rely on production rules to guide its hypothesize-and-test process. In contrast to the GSC model, however, it uses a heuristic scoring procedure to guide the construction and modification of its hypothesis. This process is essentially "depth first," unlike the "breadth first" approach implied in the GSC model. INTERNIST first tries to establish one disorder and then proceeds to establish others. This roughly corresponds to constructing and completing a single set in a generator in the GSC model, and then later returning to construct the additional sets for the generator. INTERNIST groups together competing disorders (i.e., a set of disorders in a generator) based on a simple but clever heuristic: "Two diseases are competitors if the items not explained by one disease are a subset of the items not explained by the other; otherwise, they are alternatives (and may possibly coexist in the patient)." [Miller et al, 1982]. In the terms of the GSC model, this corresponds to stating that d1 and d2 are competitors if M+ − man(d1) contains or is contained in M+ − man(d2). While this simple heuristic often works in constructing a differential diagnosis, we can produce examples in the context of the GSC model for which it will fail to correctly group competing disorders together.*

*For example, suppose M+ = {m1,...,m8} and only d1, d2, and d3 have been evoked, where M+ ∩ man(d1) = {m2 m4 m5 m6 m7 m8}, M+ ∩ man(d2) = {m3 m4 m5 m6 m7 m8}, and M+ ∩ man(d3) = {m1 m2 m3}. In the GSC model, Sol(P) = { {d1 d3} {d2 d3} }, which can be represented by the single generator {d1 d2} × {d3}, where d1 and d2 are grouped together as competitors. Suppose that d1 was ranked highest by the INTERNIST heuristic scoring procedure. Then M+ − man(d1) = {m1 m3} and M+ − man(d2) = {m1 m2}, so INTERNIST would apparently fail to group d1 and d2 together as competitors.
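The competitor test, and the footnoted counterexample, can be checked mechanically; a minimal sketch (ours, not INTERNIST's code):

```python
def internist_competitors(m_plus, man_d1, man_d2):
    """Miller et al's test: competitors iff one unexplained residue
    M+ - man(di) is a subset of the other."""
    r1, r2 = m_plus - man_d1, m_plus - man_d2
    return r1 <= r2 or r2 <= r1

# Footnoted counterexample: GSC's generator {d1 d2} x {d3} groups d1 and d2,
# but their residues {m1, m3} and {m1, m2} are incomparable sets.
m_plus = {"m1", "m2", "m3", "m4", "m5", "m6", "m7", "m8"}
man1 = {"m2", "m4", "m5", "m6", "m7", "m8"}
man2 = {"m3", "m4", "m5", "m6", "m7", "m8"}
assert not internist_competitors(m_plus, man1, man2)
```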
It is also unclear that the INTERNIST inference mechanism is guaranteed to always find all possible explanations for a set of manifestations. Reportedly, the "depth first" approach used in INTERNIST resulted in less than optimal performance [Miller et al, 1982]. Recent enhancements in INTERNIST's successor CADUCEUS attempt to overcome some of these limitations through the use of "constrictors" to delineate the top-level structure of a problem [Pople, 1977]. These changes are quite distinct from the approach taken in the GSC model, but they do add a "breadth first" component to hypothesis construction.

We are currently developing the GSC model in two ways: by studying its application in medical expert systems and by formally developing the mathematical theory. Currently, we have implemented two medical diagnostic expert systems based on the GSC model, one for dizziness (a difficult medical problem because of the many possible causes) and one for peroneal muscular atrophy [Reggia, 1981; Reggia et al, 1983]. While the GSC model forms the central mechanism of these expert systems, the basic model was augmented in a number of ways to make it more useful for real-world problem solving. For example, the "symbolic probabilities" illustrated in Figure 1 were introduced and are used to rank competing explanations after the final FOCUS is constructed. A heuristic approach to question generation and termination was adopted. When tested on prototype cases these expert systems functioned well, but modifications to the content of the knowledge bases (not the GSC model) would be necessary before more extensive evaluation in practice using a series of real patients could be done.

In parallel, we are developing the mathematical basis of the GSC model [Reggia et al, 1984]. This has involved defining a variety of operations on generators and expressing formal algorithms in terms of those operations. We are proving the correctness of these algorithms and have established criteria for decomposing diagnostic problems into independent subproblems that are easier to solve.
While the GSC model as it currently exists does not address all aspects of diagnostic problem solving, it does appear to provide a reasonable starting point from which to formalize the underlying abductive inference process that is involved.

REFERENCES

1. Aikins J: Prototypes and Production Rules - A Knowledge Representation for Computer Consultations, Memo HPP-80-17, Stanford Heuristic Programming Project, 1980.
2. Ben-Bassat M, et al: Pattern-Based Interactive Diagnosis of Multiple Disorders - the MEDAS System, IEEE Trans. Pat. Anal. Machine Intell., 2, 1980, 148-160.
3. Edwards J: Covers and Packings in a Family of Sets, Bull. Am. Math. Society, 68, 1962, 494-499.
4. Elstein A, Shulman L, and Sprafka S: Medical Problem Solving - An Analysis of Clinical Reasoning, Harvard University Press, 1978.
5. Karp R: Reducibility Among Combinatorial Problems, in R. Miller and J. Thatcher (eds.), Complexity of Computer Computations, Plenum Press, New York, 1972, 85-103.
6. Kassirer J and Gorry G: Clinical Problem Solving - A Behavioral Analysis, Ann. Int. Med., 89, 1978, 245-255.
7. Miller R, Pople H, Myers J: Internist-1: An Experimental Computer-Based Diagnostic Consultant for General Internal Medicine, NEJM, 307, 1982, 468-476.
8. Mittal S, Chandrasekaran B, and Smith J: Overview of MDX - A System for Medical Diagnosis, Proc. Third Symposium on Computer Applications in Medical Care, IEEE, 1979, 34-46.
9. Nau D, Markowsky G, Woodbury M & Amos D: A Mathematical Analysis of Human Leukocyte Antigen Serology, Math. Biosci., 40, 1978, 243-270.
10. Nilsson N: The Interplay Between Experimental and Theoretical Methods in Artificial Intelligence, Cognition and Brain Theory, 4, 1980, 69-74.
11. Patil R, Szolovits P, and Schwartz W: Causal Understanding of Patient Illness in Medical Diagnosis, Proc. Seventh IJCAI, Yale University, New Haven, CT, 1981.
12. Pauker S, et al: Towards the Simulation of Clinical Cognition, Am. J. Med., 60, 1976, 981-996.
13. Pople H: The Formation of Composite Hypotheses in Diagnostic Problem-Solving - An Exercise in Synthetic Reasoning, IJCAI, 5, 1977, 1030-1037.
14. Reggia J: Knowledge-Based Decision Support Systems - Development Through KMS, TR-1121, Department of Computer Science, University of Maryland, Oct. 1981.
15. Reggia J: Computer-Assisted Medical Decision Making, in Applications of Computers in Medicine, M. Schwartz, ed., IEEE Press, 1982, 198-213.
16. Reggia J, Nau D, and Wang P: Diagnostic Expert Systems Based on a Set Covering Model, Int. J. Man-Machine Studies, to appear late 1983.
17. Reggia J, Nau D, and Wang P: A Formal Model of Diagnostic Inference, 1984, manuscript in preparation.
18. Woodbury M, Ciftan E, & Amos D: HLA Serum Screening Based on an Heuristic Solution to the Set Cover Problem, Comp. Pgm. Biomed., 9, 1979, 263-273.
Expert System Consultation Control Strategy

James Slagle and Mdur.el lhynm*
Navy Center for Applied Research in Artificial Intelligence, Naval Research Laboratory, Washington, D.C. 20375

Abstract

User interfaces to expert systems represent a bottleneck since consultation time is proportional to the amount of information the system asks the user to supply. An efficient, rather than exhaustive, strategy to direct user questioning will reduce consultation time and effort. An intelligent strategy to minimize questioning, the merit system, has been successfully implemented in Battle, an expert consultant system developed for the Marine Corps. The merit strategy enables Battle to focus the consultation process on the most meritorious questions, allowing the military commander to respond quickly with the most pertinent information. The merit system, originally defined for logical functions in the Multiple program, has been extended to the Mycin style of propagation and to the method of subjective Bayesian assignments used by Prospector. A procedure for merit calculations with any differentiable, real-valued assignment function is presented. Our experience has shown that merit values provide an efficient flow of control for expert consultation.

I Introduction

This paper reports on the consultation control strategy of a computer based intelligent decision aid system called Battle [10], developed for the United States Marine Corps. The objective of Battle is to improve the Marine Integrated Fire and Air Support System (MIFASS) by providing timely recommendations for the allocation of a set of weapons to a set of targets.

In a time-critical expert consultant system, the consultation must be quick yet relevant to the decision being made. Many military expert consultant systems are time-critical, for example, systems such as Battle that allocate weapons to targets. Other time-critical military systems would include systems for classifying images, submarine combat systems, multisensor information integration systems, and operational planning systems. An expert system in mineral exploration, for example Prospector [2], is not time-critical since the mineral being sought has been in the ground millions of years and will not go away soon. Some expert systems in medicine are not time-critical, Mycin [4] for example, but an emergency system would be.

When a system such as Mycin or Prospector questions the user, it uses a depth-first (local) search strategy. It will persist with a line of questioning that has become seemingly irrelevant. However, when a time-critical expert consultant system is questioning a user it is essential that it asks questions that are highly relevant and quickly answered. Ideally it would use a best-first (global) strategy. The user of a time-critical system may know he has only five minutes to make a decision. He would become highly frustrated with a system asking seemingly irrelevant or time consuming questions when he knows there are better questions to be asked. When expert consultant systems have thousands rather than hundreds of rules, even non-time-critical systems will need some means of asking relevant and quickly answered questions. The merit system presented in this paper allows an expert consultant system of the Battle, Mycin, or Prospector type to ask questions in a best-first manner.

*Also of Bloomsburg University, Bloomsburg, PA
User interfaces present a bottleneck for most expert systems; consultation time is roughly proportional to the number of questions directed to the user. Older consultant systems generally follow an exhaustive depth-first network traversal to direct the consultation process that could ask hundreds of questions of which a handful would be really pertinent. A system that asks only pertinent questions, however, allows substantial savings of time by avoiding unnecessary questioning. The Battle decision aid [5] uses the merit system, a best-first strategy, to direct its consultation sessions efficiently. This is quite important for time-critical applications such as the U.S. Marine Corps commander using the Battle weapon assignment program in combat.

II Other Expert System Consultation Strategies

Expert systems such as Battle, Mycin, and Prospector represent knowledge as a set of propositions. Each proposition has a value representing its likelihood. A proposition may have antecedent propositions from which its value may be inferred, and may itself be an antecedent of consequent propositions. We call the numerical dependence of the value of a consequent proposition on the values of its antecedents the assignment function of the consequent. A proposition with no consequents, called a top proposition, represents the result of the inferencing process. The data from which the result is calculated are represented by askable propositions in the network, whose values may be supplied by the user. Other propositions in the network, whose values the user is unlikely to know, are unaskable. Typically top propositions are unaskable.

The distinction between askable and unaskable propositions is not always clear, since users differ in expertise and different information is available in different instances. An askable proposition may have antecedents and an assignment function for those cases when the user cannot supply its value.

Several techniques have been adopted to try to optimize the expert consultation process within the framework of a depth-first traversal. The simpler of these methods generally eliminate questioning about any node whose final value is established. The MARK IV control strategy of Prospector [2] first chooses a proposition for consideration whose antecedents are then evaluated by a function. The MARK IV control strategy apparently works well for the Prospector system. It suffers from several shortcomings:
1. Optimization is within the framework of a depth-first traversal. A node, once traversed by the depth-first mechanism, will never be reconsidered.
2. The node selected may not be the optimal proposition for consideration within the entire network.
3. The four criteria evaluated by the function do not identify the antecedent with the largest potential to produce changes in the consequent probability.

The Casnet (causal-associational network) system [14] provides a more extensive search of the inference network than does the Prospector MARK IV strategy and considers costs as well. The two control strategies used by Casnet are:
1. Selection of the node with the maximum weight-to-cost ratio, and
2. Selection of the node with the maximum weight subject to certain constraints on cost.
Casnet concentrates on nodes that seem to be most consistent with the remaining nodes in the network. Since the objective of expert consultation is to infer the value of a top proposition or propositions, a more appropriate estimate of the weight of a node can be determined by its potential influence on a top proposition. The merit control strategy of the Battle system assigns a weight to each node corresponding to its ability to alter the value of a top proposition.

III The Multiple Control Strategy

Multiple (MULTIpurpose Program that LEarns) has been implemented for the game of Kalah and for theorem proving [7]. Multiple selects the next proposition with a two-step algorithm using merit values:
1. The system sprouts from an untried proposition with the largest merit value on its proposition tree and calculates merit values for all its children.
2. At each level only the best merit value is backed up to the top proposition. At the top level, the untried proposition with the highest merit value is identified.

Assume that there exists a proposition tree with a top proposition G, and antecedents Gi. Each Gi may have antecedents designated Gij. Each subscript indicates an additional level down the tree. The values stored at G, Gi, and Gij are named P, Pi, and Pij. Each value P is given by the assignment function f(P1, P2, ..., Pn) applied to its antecedent values Pi. The merit of an untried proposition Gij...st is defined as:

Merit(Gij...st) = | ∂P / ∂Cij...st |     (3.1)

where ∂P is the change in the value (generally, but not restricted to, a probability) of the top proposition G, and ∂Cij...st is the cost of expanding the untried proposition Gij...st. Both positive and negative values are equally significant. The merit of proposition H is the expected ratio of two terms if H is expanded:
1. The absolute value of the change in value of the top proposition.
2. The cost of expanding H.
Thus expanding a proposition with maximum merit should lead to good results. A more useful form of the merit formula is obtained by applying the chain rule:

| ∂P / ∂Cij...st | = | ∂P/∂Pi | · | ∂Pi/∂Pij | ··· | ∂Pij...s/∂Pij...st | · | ∂Pij...st/∂Cij...st |     (3.2)

The last factor, called self-merit, introduces cost considerations. The self-merit of proposition Gij...st is a measure of the expected change in the proposition's value Pij...st with respect to the cost of considering that proposition, Cij...st. Each of the remaining factors in the chain rule expansion is called an edge-merit. It measures the change in the value of a consequent proposition due to the change in the value of an antecedent proposition. An edge-merit value for a specific antecedent/consequent pair may be calculated by evaluating the derivative of the assignment function associated with the edge linking that pair. Multiple algorithm merit calculations require time proportional to tree depth, since the merits of only the newly sprouted propositions need to be computed and backed up. Merit calculation is completely analogous to moving up a tree of winners.
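A minimal sketch of equations 3.1 and 3.2 (the tree, numbers, and names below are invented for illustration): a candidate's merit is the product of the absolute edge-merits on its path from the top proposition, times its self-merit, and the best-first strategy simply asks about the candidate with the largest product.

```python
def merit(edge_merits, self_merit):
    """|dP/dC| = |dP/dPi| * |dPi/dPij| * ... * self-merit (eq. 3.2)."""
    m = abs(self_merit)
    for e in edge_merits:
        m *= abs(e)
    return m

# Two untried askable propositions under the same top proposition G:
candidates = {
    "G_11": ([0.9, 0.4], 2.0),   # (edge-merits along its path, self-merit)
    "G_2":  ([0.3], 1.5),
}
best = max(candidates, key=lambda g: merit(*candidates[g]))
print(best)   # -> G_11, since 0.9 * 0.4 * 2.0 = 0.72 beats 0.3 * 1.5 = 0.45
```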
Since the objective of expert consultation is to infer the value of a top proposition or propositions, a more appropriate estimate of the weight of a node can be determined by its potential influence on a top proposition. The merit control strategy of the Battle system assigns a weight to each node corresponding to its ability to alter the value of a top proposition.

III The Multiple Control Strategy

Multiple (MULTIpurpose Program that LEarns) has been implemented for the game of Kalah and for theorem proving [7]. Multiple selects the next proposition to consider with a two-step algorithm:

1. The system sprouts from an untried proposition with the largest merit value on its proposition tree and calculates merit values for all its children.
2. At each level only the best merit value is backed up toward the top proposition. At the top level, the untried proposition with the highest merit value is identified.

Assume that there exists a proposition tree with a top proposition G and antecedents Gi. Each Gi may have antecedents designated Gij. Each subscript indicates an additional level down the tree. The values stored at G, Gi, and Gij are named P, Pi, and Pij. Each value P is given by the assignment function f(P1, P2, ..., Pn) applied to its antecedent values Pi. The merit of an untried proposition Gij...st is defined as:

    Merit Value of Gij...st = | dP / dCij...st |                      (3.1)

where dP is the change in the value (generally, but not restricted to, a probability) of the top proposition G, and dCij...st is the cost of expanding the untried proposition Gij...st. Both positive and negative values are equally significant. The merit of a proposition H is thus the expected ratio of two terms if H is expanded:

1. The absolute value of the change in value of the top proposition.
2. The cost of expanding H.

Thus expanding a proposition with maximum merit should lead to good results. A more useful form of the merit formula is obtained by applying the chain rule:

    | dP / dCij...st | = | (dP/dPi) (dPi/dPij) ... (dPij...s/dPij...st) (dPij...st/dCij...st) |    (3.2)

The last factor, called self-merit, introduces cost considerations. The self-merit of proposition Gij...st is a measure of the expected change in the proposition's value Pij...st with respect to the cost of considering that proposition, Cij...st. Each of the remaining factors in the chain rule expansion is called an edge-merit. It measures the change in the value of a consequent proposition due to a change in the value of an antecedent proposition. An edge-merit value for a specific antecedent/consequent pair may be calculated by evaluating the derivative of the assignment function associated with the edge linking that pair.

Multiple algorithm merit calculations require time proportional to tree depth, since the merits of only the newly sprouted propositions need to be computed and backed up. Merit calculation is completely analogous to moving up a tree of winners.
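To make the two-step selection concrete, the following is a minimal sketch in modern Common Lisp (the historical systems were written in earlier Lisp dialects); the NODE structure and the function names are our own illustrative choices, not taken from the Multiple or Battle implementations. Signed edge-merits are multiplied along the path, and only the single best value below each node is backed up, as in a tree of winners.

    ;; Illustrative only: a proposition-tree node with a designer-assigned
    ;; self-merit, its children, and the signed edge-merit on each child edge.
    (defstruct node name self-merit children edge-merits expanded-p)

    ;; Equation 3.2: merit = |product of edge-merits along the path| * self-merit.
    (defun frontier-merit (path-product node)
      (* (abs path-product) (node-self-merit node)))

    ;; Return (merit . node) for the most meritorious untried proposition
    ;; below NODE, backing up only the single best value at each level.
    (defun best-untried (node path-product)
      (if (not (node-expanded-p node))
          (cons (frontier-merit path-product node) node)
          (let ((best nil))
            (loop for child in (node-children node)
                  for em in (node-edge-merits node)
                  for candidate = (best-untried child (* path-product em))
                  when (and candidate
                            (or (null best) (> (car candidate) (car best))))
                    do (setf best candidate))
            best)))

A call such as (best-untried top-proposition 1.0) then identifies the proposition to sprout next.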
IV Merit in an Inference Network

The most meritorious propositions in a network are those propositions that are likely to have the most cost-effective influence on a top proposition. Using the Multiple algorithm, Battle explores the most meritorious propositions until it encounters an askable one. The user is prompted for a value for this proposition. After receiving this information, or discovering that the user cannot provide the information, the system proceeds to discover the next unasked, askable proposition of highest merit. Such a process is iterated until no more propositions remain with a merit greater than some cutoff value. The cutoff merit value is a user-defined parameter used to limit the number of questions asked. A cutoff value does not alter the order of questioning. The user may vary this value during the consultation process. Consultation continues only while a proposition whose merit value exceeds the cutoff can be found.

Merit values are calculated for a small set of nodes with a common parent in each sprouting operation. These values are maintained in a tree of winners, and each newly calculated value is compared to the best values from all previously traversed nodes. The merit system thus allows an unconstrained network traversal that moves to a most meritorious node wherever it may be in the network. We recognize this traversal may be disconcerting to a naive user, but then exhaustive questioning may be tedious or even dangerous to a military commander with a time-critical task to execute. It is a tradeoff between time to question and the apparent completeness of the questioning. When it is more desirable to question the user thoroughly on a specific topic before moving on to the next issue, the user should order a depth-first traversal.

V Self-Merits

How is a merit value determined? Two processes are involved: assignment of a self-merit value and calculation of the edge-merits. The product of the edge-merits and the final self-merit along a path from the top proposition to a node provides the merit value of the node (see equation 3.2).

Assignment of self-merit values to nodes is initially the responsibility of the network designer (domain expert). Large self-merit values should be assigned to nodes whose parameters are easily specified by a user (low cost), and whose value is likely to change a great deal. Self-merits of unaskable nodes should reflect the expected change in the node's associated value with respect to the cost of calculating that node's value or expanding the traversal to its antecedents. Several sets of self-merits may be needed to describe accurately the benefit/cost ratio of examining various nodes for different subsets of users. It is important that self-merits be assigned reasonable values relative to each other in the initial implementation. After a consultant system has been running for a reasonable period of time, empirical data may yield self-merit values.

Although the self-merits are generally assigned in an ad hoc fashion, our experience has shown that it is beneficial to use precise mathematical formulas to complete the merit value calculation. Some edge-merit formulas are derived in the following sections.

VI Logical Function Edge-Merits

A consequent whose truth is contingent on verification of all its antecedents is the logical AND of those antecedents. In a general probabilistic approach, assuming all antecedents are independent, the AND function may be described as:

    P(H) = P(E1) P(E2) ... P(En)                                      (6.1)

The probability assigned to a consequent H, given the current probabilities for each of its antecedents Ej (for j = 1, ..., n), is the product of those antecedent probabilities. Differentiating the consequent probability with respect to a single antecedent and substituting back from equation 6.1, the formula for the AND-edge-merit is derived as:

    dP(H)/dP(Ej) = P(H) / P(Ej)                                       (6.2)

Equation 6.2 depends on the values of only the antecedent/consequent pair of the edge in consideration. This is a most convenient form in which to express edge-merit calculations.
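As a hedged illustration, the AND assignment function of equation 6.1 and the edge-merit of equation 6.2 can be written in a few lines of Common Lisp. The finite-difference routine is our own addition, included to show how an edge-merit could be obtained numerically for any differentiable assignment function when no closed form has been derived.

    ;; Equation 6.1: P(H) is the product of the antecedent probabilities.
    (defun prob-and (ps)
      (reduce #'* ps :initial-value 1.0))

    ;; Equation 6.2: dP(H)/dP(Ej) = P(H)/P(Ej).
    (defun and-edge-merit (ps j)
      (/ (prob-and ps) (nth j ps)))

    ;; Central-difference approximation to the edge-merit of any
    ;; differentiable assignment function F.
    (defun numeric-edge-merit (f ps j &optional (h 1e-6))
      (let ((hi (copy-list ps))
            (lo (copy-list ps)))
        (incf (nth j hi) h)
        (decf (nth j lo) h)
        (/ (- (funcall f hi) (funcall f lo)) (* 2 h))))

    ;; (and-edge-merit '(0.5 0.8 0.9) 1)                  => 0.45
    ;; (numeric-edge-merit #'prob-and '(0.5 0.8 0.9) 1)   => approximately 0.45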
An OR function is logically true when any of its antecedents is true. Again assuming the independence of antecedent values, the probabilistic OR function may be written as:

    P(H) = 1 - [1 - P(E1)] ... [1 - P(En)]                            (6.3)

The consequent probability is the complement of the product of the complements of all current antecedent probabilities. Differentiating with respect to an individual antecedent, and substituting back from equation 6.3, the OR-edge-merit is found to be:

    dP(H)/dP(Ej) = [1 - P(H)] / [1 - P(Ej)]                           (6.4)

It may be shown that both the AND-edge-merit and the OR-edge-merit approach finite limits as P(Ej) approaches 0 or 1.

In a probabilistic scheme, the logical NOT may be defined as:

    P(H) = 1 - P(E)                                                   (6.5)

Although not required for choosing among antecedents (since such an edge has but one antecedent), the NOT-edge-merit becomes important in networks with multiple consequents of propositions (see Section VIII).

VII Subjective Bayesian Edge-Merits

Prospector uses a subjective Bayesian method of assignment [1], relating each antecedent to its consequent as an independent piece of evidence. When the set of top propositions is mutually exclusive and exhaustive, the subjective Bayesian method is not practical [3]. In general, however, subjective Bayesian assignments provide a useful method for the evaluation of evidence by an expert consultant system. A brief review of the subjective Bayesian method, as well as a derivation of the edge-merit for that assignment procedure, is now presented.

Assume that there exists a hypothesis H and n independent sources of evidence Ej (for j = 1, ..., n) that may either support or deny the hypothesis. The hypothesis H is called the consequent of each Ej, and each Ej is an antecedent of H. We suppose that for each antecedent Ej the current (probability) value P(Ej), the prior probability P0(Ej), and the prior probabilities P(H|Ej) and P(H|not Ej) of the consequent given the antecedent and its negation are known, and that the prior probability P0(H) of the consequent is known. We summarize the procedure derived in [1] for calculating the current probability P(H) of the consequent. The probability estimator Pj(H) of the consequent given the current value of Ej is calculated by linear interpolation between the known values:

    Pj(H) = P0(H) + Mj (P(Ej) - P0(Ej)),  where                       (7.1)

    Mj = [P0(H) - P(H|not Ej)] / P0(Ej)       if P(Ej) <= P0(Ej),
    Mj = [P0(H) - P(H|Ej)] / [P0(Ej) - 1]     if P(Ej) > P0(Ej).      (7.1a)

The probability estimators are combined more conveniently by transforming them to odds estimators. The combined odds O(H) of the consequent is then transformed back to an expression for P(H) in terms of the Pj(H) and P0(H).

The edge-merit for subjective Bayesian assignment may be expanded with the chain rule as:

    dP(H)/dP(Ej) = [dP(H)/dO(H)] [dO(H)/dOj(H)] [dOj(H)/dPj(H)] [dPj(H)/dP(Ej)]    (7.2)

The first and third factors of equation 7.2 may be simplified by differentiating the odds equation (see [11] for details). The second factor in the edge-merit expansion may be found by differentiation of the combined odds equation [11], since all factors except Oj(H) are constant with respect to Oj(H). The final factor of the edge-merit expansion corresponds to the slope Mj of the linear interpolation in equation 7.1. Substituting these yields the edge-merit for subjective Bayesian assignment:

    dP(H)/dP(Ej) = [1 - P(H)] P(H) Mj / ([1 - Pj(H)] Pj(H))           (7.3)

Some boundary conditions in this formulation are notable. If Pj(H) approaches zero or one, the value of the edge-merit will approach a finite limit, although it is undefined at the limit points. In practice, a small offset of Pj(H) away from these limit points simplifies the calculation. Also, the slope Mj is discontinuous at P(Ej) = P0(Ej); currently, a value intermediate between the two interpolant slopes is used.
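The following Common Lisp fragment restates these two pieces as a hedged sketch: the interpolation slope of equation 7.1a and the edge-merit of equation 7.3. The argument names are our own labels for the quantities defined in the text, and the second branch of the slope is written in the algebraically equivalent form [P(H|Ej) - P0(H)] / [1 - P0(Ej)].

    ;; Equation 7.1a: slope of the linear interpolation.
    (defun interpolation-slope (p-ej p0-ej p0-h ph-given-e ph-given-not-e)
      (if (<= p-ej p0-ej)
          (/ (- p0-h ph-given-not-e) p0-ej)
          (/ (- ph-given-e p0-h) (- 1.0 p0-ej))))

    ;; Equation 7.3: the subjective Bayesian edge-merit.
    (defun bayesian-edge-merit (p-h pj-h mj)
      (/ (* (- 1.0 p-h) p-h mj)
         (* (- 1.0 pj-h) pj-h)))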
VIII Multiple Consequents

In an inference network, individual nodes may have any number of consequents. Suppose that a top proposition G has two antecedents, G1 and G2, and that both G1 and G2 share a common antecedent G', as illustrated in figure 2. Assume that G' is independently chosen as the most meritorious antecedent of each of G1 and G2. The Multiple algorithm backs up merit values in a tree of winners, always selecting the maximum value. When the merit value backed up at G1 is compared to the value backed up at G2, both nodes will possess the backed-up merit of G'.

It would be inaccurate to simply back up to G the maximum of the merit values backed up at G1 and G2, since either choice represents the selection of proposition G'. The value backed up to G should represent the combined influence of G' on G. Since the effects of G' through its parents might be synergistic or antagonistic, the sum of the signed merit values for G', calculated independently through each of G1 and G2, should be backed up to G. When these values are of opposite signs, the antecedents of both G1 and G2 must be reexamined. A sibling of G' initially thought to have a lower merit value than G' might back up a larger merit value to G.

To see that adding the signed merit values is correct mathematically as well as intuitively, we may apply the chain rule for functions of several variables to see that the merit of node G' is

    dP(G)/dC(G') = [ (dP(G)/dP(G1)) (dP(G1)/dP(G')) + (dP(G)/dP(G2)) (dP(G2)/dP(G')) ] (dP(G')/dC(G')),

while the signed merit values of G' as calculated through G1 and G2 are

    (dP(G)/dP(G1)) (dP(G1)/dP(G')) (dP(G')/dC(G'))   and   (dP(G)/dP(G2)) (dP(G2)/dP(G')) (dP(G')/dC(G')).

Thus to select a most meritorious node below G we form the set of most meritorious nodes below the antecedents Gj of G, and sum the merits of nodes that appear more than once. As before, we select the merit that is largest in absolute value to get a most meritorious node below G.

Whenever a summation of signed merit values occurs, the merit values backed up the tree before the point of summation might not represent the most meritorious propositions. One possible solution is to back up the K best merit values at each level, in the hope that the most meritorious value is included among them. In our experience this has not been necessary. Even though Battle backs up only the single best merit value at each level, the merit calculation guides it to appropriate questions.

IX Concluding Remarks

Merit calculations may be performed for an inference network whose assignment functions are of many different types. We have extended merit to handle both Mycin and Prospector inference mechanisms, as described in reference [11]. A node whose value is assigned by a simple logical function, probabilistic AND for example, might assign a value to its consequent through the subjective Bayesian method. At each edge the edge-merit function appropriate to that edge is applied. The units of the various edge-merit values all cancel, leaving a final merit value expressed in the units used by the top proposition over cost. Differentiable, real-valued, expert-defined assignment functions are easily incorporated into the inference network with the merit control strategy.

We have compared merit with the J* algorithm used by Prospector by generating the values of two antecedents on a single node over a range of 26 probabilities.
In no case did J* choose the most meritorious antecedent; see [6] for more detail.

In a future consultant system, propositions and assignment functions supplied by an expert will be linked into an inference network. Commonly used assignment methods will be available as system-defined functions. The expert, however, will be able to introduce new assignment functions wherever necessary. The system will derive the form of the edge-merit function for these expert-defined assignment functions.

Merit values may be employed to order antecedents within a depth-first traversal of an inference network, or to guide a best-first strategy. The two-step Multiple algorithm for locating a most meritorious node was designed for implementation with large trees where an exhaustive search is not practical. In an inference network, however, an expert system might do an exhaustive merit analysis, examining each askable proposition on the network in search of the most appropriate one for investigation. Such a searching procedure requires more time to find a most meritorious proposition on the network, but it guarantees that consultation will focus on a most meritorious node.

A reduction of inconsequential propositional values requested from the user will increase the effectiveness of the consultation process, especially in time-critical applications such as the tasks faced by military commanders. The Battle system uses merit values to direct such an intelligent consultation session. Since merit, a function of both the cost and potential benefits of considering a proposition, is easily calculated by a computer, introduction of the merit value heuristic should result in a reduction of consultation time and effort.

Acknowledgement

This work was sponsored by the Office of Naval Research.

References

[1] Duda, R.O., Hart, P.E., and Nilsson, N.J. Subjective Bayesian Methods for Rule-based Inference Systems. Proc. National Computer Conference 45, AFIPS Press, 1976, pp. 1075-1082.
[2] Duda, R.O., Hart, P.E., Konolige, K., and Reboh, R. A Computer-based Consultant for Mineral Exploration. Artificial Intelligence Center, SRI Int., Menlo Park, CA, September 1979.
[3] Pednault, E.P.D., Zucker, S.W., and Muresan, L.V. On the Independence Assumption Underlying Subjective Bayesian Updating. Artificial Intelligence 16 (May 1981), pp. 213-222.
[4] Shortliffe, E.H. Computer-based Medical Consultations: MYCIN. American Elsevier, New York, 1976.
[5] Slagle, J.R., Cantone, R., and Halpern, E. Battle: An Expert Decision Aid for Fire Support Command and Control. NRL Memorandum Report 4847, July 8, 1982.
[6] Slagle, J.R., and Halpern, E. An Intelligent Control Strategy for Computer Consultation. NRL Memorandum Report 4789, April 8, 1982.
[7] Slagle, J.R., and Farell, C.D. Experiments in Automatic Learning for a Multipurpose Heuristic Program. Comm. of the ACM 14 (February 1971), pp. 91-99.
[8] Weiss, S.M., Kulikowski, C.A., Amarel, S., and Safir, A. A Model-based Method for Computer-aided Medical Decision Making. Artificial Intelligence 11 (August 1978), pp. 145-172.
A RULE-BASED APPROACH TO INFORMATION RETRIEVAL: SOME RESULTS AND COMMENTS

Richard M. Tong, Daniel G. Shapiro, Brian P. McCune and Jeffrey S. Dean
Advanced Information & Decision Systems
201 San Antonio Circle, Suite 286
Mountain View, CA 94040, USA.

ABSTRACT

This paper is a report of our early efforts to use a rule-based approach in the information retrieval task. We have developed a prototype system that allows the user to specify his or her retrieval concept as a hierarchy of sub-concepts which are then implemented as a set of production rules. The paper contains a brief description of the system and some of the preliminary testing we have done. In particular, we make some observations on the need for an appropriate language for expressing conceptual queries, and on the interactions between rule formulation and uncertainty representation.

I THE INFORMATION RETRIEVAL PROBLEM

Existing approaches to textual information retrieval suffer from problems of precision and recall, understandability, and scope of applicability. Boolean keyword retrieval systems (such as Lockheed's DIALOG) operate at a lexical level, and hence ignore much of the available information that is syntactic, semantic, or contextual. The underlying reasoning behind the responses of statistical retrieval systems [4] is difficult to explain to a user in an understandable and intuitive way, and systems that rely on a semantic understanding [5] must severely restrict the style and content of the natural language in the documents. In the near future, large on-line document repositories will be made available via computer networks to relatively naive computer users. In this context, it is important that future retrieval systems possess the following attributes:

(1) Queries should be posed at the user's own conceptual level, using his or her vocabulary of concepts, and without requiring complex programming.
(2) The system should be able to provide partial matching of queries to documents, thereby acknowledging the inherent imprecision in the concept of a relevant document.
(3) The number of documents retrieved should be dependent upon the needs of the user (e.g., uses for the documents, time constraints on reading them).
(4) A logical, understandable, and intuitive explanation of why each document was retrieved should be available.
(5) The user should be able to easily experiment with and revise the conceptual queries, in order to handle changing interests or disagreement with previous system performance.
(6) Conceptual queries should be easily stored for periodic use by their author and for sharing with other users.

II A RULE-BASED APPROACH

In our efforts to address the issues raised above, we have created a prototype knowledge-based information retrieval system called RUBRIC (RUle-Based Retrieval of Information by Computer), in which queries are represented as a set of logical production rules [2]. The rules define a hierarchy of retrieval topics (or concepts) and subtopics. By naming a single topic, the user automatically invokes a goal-oriented search of the tree defined by all of the subtopics that are used to define that topic (i.e., a search process similar to that used in MYCIN [7]). The lowest-level subtopics are defined in terms of pattern expressions in a Text Reference Language, which allows keywords, positional contexts, and simple syntactic and semantic notions. The context functions restrict the pattern matching to occur in some specified syntactic context.
So for example, one can specify that two patterns are of interest only if they occur in the same sentence or paragraph. Contexts can be made "fuzzy", giving RUBRIC the ability to find patterns that are "almost" within the same sentence or paragraph. Our current implementation supports a variety of features including a simple explanation facility, variable thresholding and clustering of documents, one-level thesauri, and stem extraction on stories and queries.

A. A Novel Rule Format

As in most other rule-based systems, each rule in the query definition may have a user-defined heuristic weight which represents the degree to which the occurrence of the antecedent supports the occurrence of the consequent. That is, the user can write rules of the form:

    IF "the story is about topic A"
    THEN "there is evidence to degree α that it is also about topic B"

However, in contrast to other systems we also provide an extended rule format, which enables the user to incorporate auxiliary (or contextual) evidence into the query. Auxiliary evidence is evidence that by itself neither confirms nor disconfirms our hypothesis, but which may decrease (or increase) our belief if seen in conjunction with some primary evidence. The syntax of such a rule is:

    if A then C to degree α, but if also B then C to degree β

where if α is greater than β then B is disconfirming auxiliary evidence, and if α is less than β then B is confirming auxiliary evidence. This has the effect of interpolating between α and β depending upon the truth of the auxiliary clause B. Thus we might have a rule of the kind:

    IF "the story contains the literal string 'bomb'"
    THEN "it is about an explosive device with degree 0.6"
    BUT IF "it also mentions a boxing match"
    THEN "reduce the strength of the conclusion to 0.3"

Here we see the concept of disconfirming evidence in operation; notice that by itself "being about a boxing match" is not evidence that can be used to support or deny the conclusion we are trying to establish (cf. MYCIN, which uses a concept of directly disconfirming evidence).

III SOME EXPERIMENTS

A methodological advantage of working with the information retrieval problem is that we always know independently of RUBRIC whether or not the stories in the database are of interest. This makes it possible to conduct a variety of interesting experiments. We report on just two that we performed as a preliminary investigation of the validity of the RUBRIC model of information retrieval. First, we look at the improvements that can be achieved over a conventional Boolean keyword approach; then second, we explore the effects of using different calculi for propagating the uncertainty values within the system.

A. Experimental Method

As an experimental database for testing the retrieval properties of RUBRIC we have used a selection of thirty stories taken from the Reuters News Service. Our basic experimental procedure is thus to rate the stories in the database by inspection (i.e., define a subjective ground truth), define a query, apply the query to the database, and then compare the rating produced by RUBRIC with the a priori rating. RUBRIC's basic task is to assign a weight to each story in the database. This weight is the truth of the statement "this story is relevant to the query", with its value being determined by propagating the uncertainty values through the tree defined by the rule-based query.
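To make the extended rule format and this propagation step concrete, here is a minimal Common Lisp sketch (RUBRIC itself is not written this way): the rule strength interpolates between α and β according to the truth value of the auxiliary clause, and the simple product used to combine it with the primary clause is our assumption, not RUBRIC's documented calculus.

    ;; Interpolate the rule strength between ALPHA and BETA by the truth
    ;; value of the auxiliary clause, then weight the primary evidence.
    (defun extended-rule-value (v-primary v-auxiliary alpha beta)
      (let ((strength (+ alpha (* v-auxiliary (- beta alpha)))))
        (* v-primary strength)))

    ;; With alpha = 0.6 and beta = 0.3, a story that surely contains "bomb"
    ;; (v-primary = 1.0) scores 0.6 when no boxing match is mentioned
    ;; (v-auxiliary = 0.0) and only 0.3 when one surely is (v-auxiliary = 1.0).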
This weight-assignment task makes the assessment of performance somewhat complicated, since we are interested in the properties of the ordering, both in absolute terms (i.e., the truth values returned) and with reference to the ordering that we determined beforehand. For the purposes of this discussion, however, we can concentrate on two basic measures. Both of these are based on the idea of using a selection threshold to partition the ordered stories, so that those above it are deemed "relevant" and those below it "irrelevant". In the first we lower the threshold until we include all a priori relevant stories, and then count the number of unwanted stories that are also selected (denoted NF). In the second we raise the threshold until we exclude all irrelevant stories, and then count the number of relevant ones that are not selected (denoted NM). The first definition therefore gives us an insight into the system's ability to reject unwanted stories (precision), whereas the second gives us insight into the system's ability to select relevant stories (recall).

B. A Comparison with Boolean Retrieval

First we selected as a retrieval concept "Violent acts of terrorism", and then constructed an appropriate rule-based query. This is summarized in Figure 1, where we make extensive use of our extended rule format (indicated in the figure by the use of "Modifier" sub-trees). Application of this query to the story database results in the story profile shown in Figure 2. Notice that for presentation purposes the stories are ordered such that those determined to be a priori relevant are to the left of the figure, and are further subdivided into definitely relevant and marginally relevant. In this case the ground truth defines nine definitely relevant stories, four marginally relevant ones, and seventeen that are not relevant. In Figure 2 each story rating assigned by RUBRIC is represented by a number in the interval [0,1]. A perfect profile would be one that gave the relevant stories a high rating, the marginal ones an intermediate rating, and the non-relevant stories a low rating. The performance scores for this output are:

    Precision: NF = 1 when we ensure that NM = 0, and
    Recall:    NM = 5 when we ensure that NF = 0.

This is excellent performance, being marred only by the selection of story (25) which, although it contains many of the elements of a terrorist article, is actually a description of an unsuccessful bomb disposal attempt. The lowest rated relevant story, (26), is one about the kidnapping and shooting of a minor political figure in Guatemala. In our ground truth this was given a marginal rating.

To compare RUBRIC against a more conventional approach we constructed two Boolean queries using the rule-based paradigm, one of which is shown in Figure 3 as an AND/OR tree of sub-concepts. The only difference between the two Boolean queries is that in the first we insist on the conjunction of ACTOR and TERRORIST-EVENT (as shown), whereas in the second we require the disjunction of these concepts.

[Figure 1. Rule Structure for "Acts of Terrorism"]

[Figure 2. Story Profile from RUBRIC]
When we compare the performance of these simulated Boolean queries to the query defined in the extended RUBRIC language, we find that the conjunctive form of the Boolean query misses five relevant stories and selects one unimportant story, whereas the disjunctive form selects all the relevant stories, but at the cost of also selecting seven of the irrelevant ones.

While these results represent only a preliminary test, we believe they indicate that the RUBRIC approach allows the user to be more flexible in the specification of his or her query, thereby increasing both precision and recall. A traditional Boolean query tends either to over-constrain or under-constrain the search procedure, giving poor recall or poor precision. We feel that, given equal amounts of effort, RUBRIC allows better models of human retrieval judgement than can be achieved with traditional Boolean mechanisms.

C. An Experiment with Uncertainty Calculi

Within the literature of expert systems, there has been a debate on the choice of "correct" calculus to represent and manipulate the uncertainty values. Indeed, there have been several attempts to construct a "calculus of uncertainty", some based on the concepts of probability and others on the more general formalisms of mathematical logic (see [6] and [9] for an introduction to some of these). In an attempt to clarify some of these issues, we have conducted a series of experiments in which we have adopted the view that the uncertainty values should be interpreted as representing the partial truth of the associated proposition. That being the case, we can use the formalism of multi-valued logic to define our calculus. Such logics have been studied extensively (see for example [3]), and lend themselves to efficient representation within the RUBRIC framework.

Our experiments consisted of fixing the query ("Acts of terrorism" as before) and changing the uncertainty calculus. There are, of course, a very large number of calculi, but we have concentrated on those in which the AND and OR connectives can be modelled as triangular norms and triangular co-norms respectively [1]. Prototypical examples are "min-max", viz:

    v(A and B) = min [v(A), v(B)]
    v(A or B)  = max [v(A), v(B)]

and "pseudo-bayesian", viz:

    v(A and B) = v(A) . v(B)
    v(A or B)  = v(A) + v(B) - v(A) . v(B)
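For reference, these two prototypical calculi are a one-line exercise in any language; the Common Lisp below is merely our restatement of the formulas above, with AND as a triangular norm and OR as the corresponding co-norm.

    (defun min-max-and (a b) (min a b))
    (defun min-max-or  (a b) (max a b))
    (defun pseudo-bayes-and (a b) (* a b))
    (defun pseudo-bayes-or  (a b) (+ a b (- (* a b))))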
However, in our search for an understanding of why some calculi did better than others we became aware of the fact that there is an inherent interdependency between rule writing and the uncertainty mechanisms. Thus although some calculi appear to be superior, this may not be because they are better in any absolute sense, but simply because we happened to construct a query that favored them. That is, if in developing a query we unconsciously expect the uncertainty values to behave in certain ways and write rules to exploit this, it would not be surprising to find that the calculus that most closely matched our expectations performed the best. This implies that we need to be more subtle in our investigation, and need to look more deeply into the role that the calculus plays. We address some of these issues in the next section. IV KNOWLEDGE AND UNCERTAINTY Our experience with the development of RUBRIC has given us some valuable insights into the information retrieval process. In particular, we have gained a deeper understanding of the nature of both the knowledge required to describe a query, and the meaning of uncertainty in this context. To introduce our comments, let us first observe that RUBRIC is not a story understanding system. Rather, it is a system which allows the user to construct a prototypical concept structure for the retrieval topic, and then returns a value that is a measure of the degree to which the story under consideration matches this structure. To make RUBRIC an effective tool we need to provide the user with a set of appropriate query building constructs. We have mentioned the Text Reference Language and our extension to the standard IF.. .TEEN.. . rule, but these are not sufficient and we need to consider others. Some of these will be determined by the nature of the particular application, but we believe that it is possible to develop a small set of primitives from which the user can build more elaborate forms. In our attempts to build experimental queries, we have been struck by the concept of “evidence” and the fact that it is used in several paradigmatic ways. The notion of “auxiliary- evidence” motivated our extended rule format; but we can identify some others . For example, the concept of “weight-of-evidence” applies when we want to express the notion that no single piece of evidence allows us to deduce the occurrence of a topic to any significant degree, but if we have several such pieces then we would like their effect to be cumulative. We can also conceive of a situation in which we need a “cases-of-evidence” construct. That is, we want to use the best of several alternative lines of reasoning, even though each individual path might provide a good indication of the relevance of the story. Yet another form would be “direct ly-disconf irming- evidence”, the occurrence of which reduces our belief in the relevance of a particular story at a global level. Clearly, the existence of such canonical evidence schemata has implications for the choice of uncertainty calculus that we use. For example, the min-max calculus cannot be used to model the weigh t-of-evidence form, and the pseudo- probabilistic calculus cannot model the cases-of- evidence form. Similarly, none of the calculi we explored in our second experiment have any direct mechanism for modelling absolute disconfirmation. 
This leads us to conclude that the choice of calculus is not a decision to be made independently of rule writing, and indeed, it seems to us that we probably need to allow several calculi to co-exist within a given query. An issue that quickly becomes apparent as more elaborate queries are constructed is the semantics of the relevance values. For example, all sea lar- valued ca lcul i, except min-max, have the property that very long chains of reasoning (i.e., in queries that have many levels in the hierarchy of sub-concepts) lead to relevance values which are very small. This is somewhat counter-intuitive; the user who constructs a very complex query to model his or her retrieval concept will in general get lower relevance values than those obtained from something more primitive. While this effect is a direct consequence of allowing non-Boolean reasoning, a user faced with story ratings the maximum of which is 0.2, say, will undoubtedly feel rather disconcerted. One way to overcome this is by normalization, two forms of which we might consider. First, we could normalize by scaling the retrieved stories so that the most relevant is assigned the value 1.0. This is merely a presentation device (in fact one we used in our first experiment) and is appropriate when the user is only concerned with the relative evaluations of sets of stories. Alternatively, we could compute the maximum value that any story could receive (i.e., the value obtained by setting all the terminal values of the query to 1.0) and divide the actual story values by it. This is more fundamental, since it amounts to a redefinition of the meaning of the uncertainty values attached to the rules. Finally, we have observed that as queries become more complex there seems to be a reduced sensitivity to both the absolute value of the heuristic weight assigned to rules and to the choice of uncertainty calculus. This suggests that we might not need the precision provided a numerical representation scheme, and we conjecture that a form of symbolic uncertainty calculus may be the way to proceed. Thus rather than evaluate the strength of support that some evidence gives to a hypothesis as 0.8, say, it is more natural, and appropriate, to label it by something like “very strong”. We conjecture that as we develop a more expressive language for query construction, there will be decreasing emphasis on uncertainty as a numerical adjunct to rule writing, and a realization that knowledge about uncertainty enters directly into the expression of retrieval concepts. V- FUTURE DIRECTIONS In the future we plan to conduct additional experiments to investigate the effectiveness of providing a richer language for writing queries. In particular, we will be looking at the impact of the cases-of-evidence and weight-of-evidence rule forms discussed in the previous section in terms of both their expressiveness and usability within a text reference language. As part of this effort, we will examine the effect of allowing several uncertainty calculi within the same rule-base, playing particular attention to the rule to rule interface questions that will arise. Finally, we will investigate the feasibility of using symbolic rather than numeric representations of uncertainty. REFERENCES [ll [21 [31 [41 [51 [61 [71 t 81 [91 Dubois, D. and H. Prade, “A Class of Fuzzy Measures based on Triangular Norms”, Int. J_. General Systems 8 (1982) 43-61. McCune, B.P., J.S. Dean, R.M. Tong and D.G. 
Shapiro, "RUBRIC: A System for Rule-Based Information Retrieval", Final Technical Report TR-1018-1, Advanced Information & Decision Systems, Mtn. View, CA, 1983.
[3] Rescher, N., Many-Valued Logic, McGraw-Hill, New York, 1969.
[4] Salton, G. and M.J. McGill, Introduction to Modern Information Retrieval, McGraw-Hill, New York, 1983.
[5] Schank, R.C. and G. DeJong, "Purposive Understanding", in J.E. Hayes, D. Michie and L.I. Mikulich (eds.), Machine Intelligence 9, 1979, chapter 24.
[6] Shafer, G., A Mathematical Theory of Evidence, Princeton Univ. Press, 1976.
[7] Shortliffe, E.H., Computer Based Medical Consultations: MYCIN, American Elsevier Publishing Co. Inc., 1976.
[8] Tong, R.M. and D.G. Shapiro, "An Experiment with Multiple-Valued Logics in an Expert System", in Proc. IFAC Symposium on Fuzzy Information, Knowledge Representation and Decision Analysis, Marseille, July 1983.
[9] Zadeh, L.A., "Approximate Reasoning Based on Fuzzy Logic", in Proc. IJCAI-79, Tokyo, August 1979, pp. 1004-1010.
IMPULSE: A Display Oriented Editor for STROBE

Eric Schoen
Reid G. Smith
Schlumberger-Doll Research
Old Quarry Road
Ridgefield, Connecticut 06877

ABSTRACT

In this paper, we discuss a display-oriented editor to aid in the construction of knowledge-based systems. We also report on our experiences concerning the utility of the editor.

1. Introduction

There is by now wide experience with construction of knowledge-based systems. Several authors have emphasized the importance of powerful tools for creation, modification, and maintenance of knowledge bases and related code (e.g., [Buchanan, 1982]). IMPULSE, a display oriented knowledge base editor, is one such tool. It provides a convenient user interface to the STROBE [Smith, 1983] structured object programming system running in Interlisp-D on the Xerox 1100 series scientific workstations.

In designing and implementing IMPULSE, we were concerned with meeting the following goals:

- Taking full advantage of the underlying knowledge representation language.
- Providing flexible tools for visualizing the potentially complex and varied hierarchical structurings of a knowledge base.
- Making full use of Interlisp-D's graphical facilities (windows, menus, bitmaps, data inspectors, etc.) to present information and commands in a form which both experts and novices can use comfortably, and to minimize keystrokes and other repetitive or unnecessary user operations.
- Preserving context while editing several parts of a knowledge base or knowledge bases simultaneously.
- Preventing the user from causing unintentional damage to a knowledge base.
- Furnishing tools for managing and editing multiple knowledge bases.

2. A Brief Overview of STROBE

STROBE is designed to be a data structuring and control tool built on top of Interlisp. Unlike LOOPS [Bobrow, 1982], which provides a syntax to suppress Lisp, STROBE supplies only a kernel of Lisp functions to manage inheritance, message-passing, and knowledge base construction for the Lisp programmer. The data-structuring components of STROBE are knowledge bases (KB's), objects, slots and facets.(1) Its inheritance mechanism, implemented at the facet level, supports multiple hierarchies. In addition to message-passing, STROBE allows procedure activation in conjunction with several types of data access and alteration (similar in concept to the Interlisp Advise facility).

(1) A knowledge base is made up of a number of interrelated objects. The objects encode packets of knowledge. The characteristics of an object and its links to other objects are encoded as a number of slots. The slots themselves have structure--facets--that can be used for annotation.
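As a hedged illustration of the layering described in the footnote--knowledge bases of objects, objects of slots, slots of facets, with inheritance at the facet level--the following Common Lisp fragment uses plain structures of our own devising rather than STROBE's actual Interlisp datatypes; holding generalizations as direct object references is likewise our simplification.

    (defstruct facet name datum)
    (defstruct slot name facets)
    (defstruct object name generalizations slots)

    ;; Find a facet locally within an object.
    (defun local-facet (object slot-name facet-name)
      (let ((s (find slot-name (object-slots object) :key #'slot-name)))
        (and s (find facet-name (slot-facets s) :key #'facet-name))))

    ;; Facet-level inheritance: look locally, then search the
    ;; generalizations (here held as direct object references).
    (defun inherited-facet (object slot-name facet-name)
      (or (local-facet object slot-name facet-name)
          (some (lambda (parent)
                  (inherited-facet parent slot-name facet-name))
                (object-generalizations object))))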
3. The IMPULSE User Interface

The naked STROBE interface is intended for program-level interaction with knowledge bases. We felt that IMPULSE needed to provide a rich set of support functions to simplify the task of the knowledge base builder. While several character-oriented knowledge base editors are available (e.g., UNITS [Smith, 1980]), a serious limitation of these editors is their inability to display multiple editing contexts concurrently. A display oriented editor allows the builder/maintainer to edit simultaneously in as many windows as can be displayed.

IMPULSE is implemented as four distinct levels: a top-level knowledge base manager, a knowledge base editor, an object/slot editor, and a facet editor. Each level consists of an editor window and one or more associated command menus. Any number of editor windows may be open simultaneously.

The knowledge base manager deals with KB's as indivisible entities. The manager presents a menu of loaded knowledge bases, which allows the user to select a specific knowledge base to be edited, stored, or copied. Additional commands in the associated menu allow new knowledge bases to be loaded or created. The Settings command permits the user to modify the values of several STROBE global flags.

The knowledge base editor allows the user to modify the structure of an individual knowledge base. The information window provides an overview of the KB. One associated menu provides a set of KB-oriented commands: creating and editing objects, deleting the knowledge base, etc. The other menu is a list of objects in the KB. The user selects an entry from the object menu, and then selects a command to perform an operation on the selected object.

Object and slot edit commands are issued from command menus associated with the object/slot editor. Its window contains a structured printout of an object's contents. The slot name captions in the bold font are active regions sensitive to picks with the mouse. Picking a slot caption inverts it and sets that slot as current for slot editing commands.

The IMPULSE facet editor provides access to the contents of each facet in a slot. Facets are the least structured components in STROBE, and thus the data structures most like basic Interlisp datatypes. For this reason, the IMPULSE facet editor serves mainly as a link to the Interlisp-D data inspector [Burton, 1982].

One additional major component has proved very useful. Using the Interlisp-D Grapher package [van Lehn, 1982], tree and graph hierarchies can be drawn to aid in visualizing KB structure. The Ancestry and Progeny commands in the object/slot editor draw generalization and specialization trees. The Display Slot Succession command traces a tree of objects containing the selected slot name and pointed to by those slots. The nodes of these trees are object names; selecting a node starts an object/slot editor on the corresponding object. The KB Struct. Graphs command allows a user to supply arbitrary tree generating functions (e.g., to display object relationships in parts hierarchies); the system associates these functions with the knowledge base for menu selection in later sessions.

Figure 1 shows IMPULSE running on the Xerox 1100. The knowledge base manager (the small window at the upper right entitled "STROBE") indicates two loaded knowledge bases. The Dipmeter Advisor KB is selected, and its corresponding KB editor appears on the left side of the screen. The graph at the lower left side of the screen is a tree of the Tectonic-Feature object's progeny. Bold face nodes are class objects; normal face nodes are individuals. Within Dipmeter Advisor, the Fault object has been selected for editing; its edit window appears at the lower right. The Picture slot of the Fault object is currently selected in the object editor window, and is expanded in the facet editor on the left. Note that the object/slot editor view of Fault is the bitmap, while the facet editor view is its Interlisp print name. The Lute-Fault object is displayed in an edit window above the Fault object. Slot names followed by uparrows (pointers) are slots whose values are obtained by STROBE via inheritance.

[Figure 1. Stratigraphic Analysis Interaction]

IMPULSE provides two means to aid the user in managing and conserving screen space.
Any IMPULSE window can be shrunk to its icon (such as the two small windows above the KB manager in Figure 1). Also, IMPULSE will not create a menu of object names which extends below the bottom of the window with which it is associated; instead, large object menus are limited in size and made scrollable.

4. User Assistance in IMPULSE

Work of this type would ordinarily involve a tremendous amount of (error-prone) typing. For the IMPULSE user, the need for typing has been vastly reduced by an extensive set of menu commands. For those operations which absolutely require typein, we have supplied "smart" typein routines with command completion, partial name recognition, and spelling correction.(2) For instance, when creating a new object, the user needs to supply a list of parent objects, whose names need be known only partially.

(2) The typein routines rely on the Interlisp TTYIN package [van Melle, 1982].
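The actual typein support comes from Interlisp's TTYIN package; purely to illustrate the idea of partial name recognition, here is a small Common Lisp fragment of our own devising.

    ;; Return the unique candidate that TYPED is a (case-insensitive)
    ;; prefix of, or NIL if the partial name is ambiguous or unknown.
    (defun recognize-partial-name (typed candidates)
      (let ((hits (remove-if-not
                   (lambda (candidate)
                     (let ((m (mismatch typed candidate :test #'char-equal)))
                       (or (null m) (= m (length typed)))))
                   candidates)))
        (and (= (length hits) 1) (first hits))))

    ;; (recognize-partial-name "Tecto" '("Tectonic-Feature" "Fault"))
    ;;   => "Tectonic-Feature"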
We felt it necessary to keep the user interface simple, and therefore reduced the size of command menus as much as possible. As a result, a number of menu entries have subcommands: when picked with the left mouse button, they cause the indicated command to be executed; when picked with the middle button, they bring up a submenu. For instance, the Create Object command normally prompts the user for the information necessary to create a STROBE object. Its subcommand menu gives the choice of creating an object of prespecified type--class, individual, or description--thus avoiding that question in the object creation dialogue. Other menu entries have more powerful subcommands: the Ancestry command has subcommands to allow interactive editing of an object's generalizations slot.

Another important feature is flexibility in selecting an object for editing. In IMPULSE, there are three such mechanisms:

- From the KB editor menu;
- From nodes in Grapher trees;
- From the direct progeny/ancestry menus associated with object/slot editors.

We have provided an on-line help mechanism for all editor levels. Each editor window title bar is sensitive to mouse button selections. Whenever the user buttons in this region, a pertinent help message is displayed in the IMPULSE help window, a scrollable window whose size and screen position is under user control.

Finally, IMPULSE tries to keep the user from causing inadvertent damage to a knowledge base. Requests to delete facets, slots, objects, or knowledge bases must be confirmed. When performing STROBE functions whose behavior is dependent upon the settings of argument flags (e.g., RENAMEOBJECT and RENAMESLOT), IMPULSE queries the user for the desired behavior.

5. Conclusion

We believe we have met the goals we set for IMPULSE. The editor has proved very simple to use. Although the user community is on the order of a dozen computer scientists, no user manual has been needed; instead, comprehensive on-line documentation guides the novice user. Interaction with the editor is rapid: keyboard input is required only where unavoidable, and is always backed up with spelling correction and partial name recognition. IMPULSE has now been used to build several large knowledge bases (containing up to 700 objects). In each case, its abilities to display the structure of the knowledge base in a variety of ways, and to provide flexible access to knowledge base components, have greatly reduced the magnitude of the task.

In addition to their utility to the builder/maintainer of knowledge bases, editors like IMPULSE can assist in the transfer of expertise from a domain expert to a program. This currently involves a computer scientist intermediary (knowledge engineer). One of the most useful roles played by the intermediary is to help provide a logical organization for the knowledge of the domain expert. This assistance is typically provided via many interactions. For each interaction, the intermediary gathers some understanding of a portion of the expert's knowledge, encodes it in a program, discusses the encoding and the results of its application with the expert, and refines the encoded knowledge. Discussion and refinement is facilitated when the knowledge is encoded in domain-specific terms, and when it is presented in forms familiar to the domain expert. Our early experience with IMPULSE is that its ability to simultaneously display different views of a knowledge base and its characteristic immediate feedback have enhanced interactions with our domain experts.

Acknowledgements

We greatly appreciate the contributions of Dave Barstow, Stephen Smoliar, Gilles Lafue, Stan Vestal, Tony Passera, and Scott Marks. Their valuable suggestions made in the course of testing early versions of IMPULSE guided much of our development effort. This paper has benefited from comments by Barstow and Brad Cox.

REFERENCES

D. G. Bobrow and M. J. Stefik, A Virtual Machine for Experiments in Knowledge Representation. Unpublished Memorandum, Xerox Palo Alto Research Center, April 1982.

B. G. Buchanan, New Research On Expert Systems. In J. E. Hayes, D. Michie, and Y-H Pao (Eds.), Machine Intelligence 10. New York: Wiley & Sons, 1982, pp. 269-299.

R. R. Burton, The Data Inspector. Interlisp-D Documentation Series, Xerox Palo Alto Research Center, March 1982.

R. G. Smith, STROBE: Support for Structured Object Knowledge Representation. Proceedings of the Eighth International Joint Conference on Artificial Intelligence, August 1983.

R. G. Smith and P. E. Friedland, Unit Package User's Guide. DREA Technical Memorandum 80/L, December 1980. (Also published as HPP-80-28, Heuristic Programming Project, Stanford University.)

W. Teitelman, Interlisp Reference Manual. Xerox Palo Alto Research Center, October 1978.

K. van Lehn, Grapher Documentation. Interlisp-D Documentation Series, Xerox Palo Alto Research Center, September 1982.

W. van Melle, TTYIN--A Display Typein Editor. Interlisp-D Documentation Series, Xerox Palo Alto Research Center, June 1982.
YAPS: A PRODUCTION RULE SYSTEM MEETS OBJECTS*

Elizabeth Allen
University of Maryland

* Funding for this project was provided by the Goddard Space Flight Center in Greenbelt, Maryland.

ABSTRACT

This paper describes an antecedent-driven production system, YAPS (Yet Another Production System), which encodes the left hand sides of production rules into a discrimination net in a manner similar to that used by Forgy ([Forgy 81], [Forgy 79]) in OPS5. YAPS, however, gives the user more flexibility in the structure of facts in the database, the kinds of tests that can appear on the left hand side of production rules, and the actions that can appear on the right hand side of the rules. This flexibility is realized without sacrificing the efficiency gained by OPS5 through its discrimination net implementation. The paper also discusses how YAPS can be used in conjunction with object oriented programming systems to yield a system in which rules can talk about objects and objects can have daemons attached to them. It discusses methods of dividing YAPS into independent rule sets sharing global facts.

Much of the cost of running a production system lies in the basic cycle, when facts in the data base are matched against the left hand sides of production rules to determine those rules ready to fire. This problem was addressed by Forgy [Forgy 79] in his thesis describing OPS. He observed that during a production rule cycle, only a few facts are added to or removed from the data base and, consequently, a production system could be much more efficient if it remembered between cycles what facts matched the patterns in the left hand sides of the production rules. Then, whenever a fact was added or deleted, matches would be made or deleted, and rules which had been completely matched would be the rules ready to fire. To compare new facts against the set of production rules, he encoded left hand side patterns in a discrimination net. This method of saving matches between cycles of the production system cut out much of the overhead and allowed larger rule systems to run without needing to swap rules in and out of active use as many expert systems currently do.

However, Forgy's OPS5 production system [Forgy 81] has some drawbacks which are fairly serious from a user's point of view. They are:

(1) Facts in the database are restricted to flat lists of atoms and numbers; nested sublists are not supported. This restricts facts to having only one arbitrarily long field of parameters, as well as preventing a user from structuring his facts conveniently.
(2) Tests that appear on the left hand side of a production rule can only use equality, inequality and arithmetic comparisons involving no more than two variables.
(3) Right hand side actions are restricted to a set of actions specified by OPS5, and though these actions cover some of the things a user might want to do, they do not allow a user to write arbitrary lisp bodies. This is an unwanted and unnecessary restriction.
(4) The syntax of OPS5 is difficult to deal with. Often, it is not at all obvious how to interpret the patterns on the left hand sides of production rules. While the syntax problem is not crucial to running a production system, it can be a problem when writing production rules and when reading production rules written in OPS5.
(5) Right hand sides of production rules are always interpreted by the OPS interpreter. There is no way to gain the speed up of compiling the rules.
(6) OPS5 is hard to run under program control.
It is designed mainly to be used as a top level controller of a system of just production rules.

This paper describes the production system YAPS (Yet Another Production System), which is designed to allow greater flexibility and readability of production rules while not giving up the efficiency gained by OPS5. YAPS has none of the above restrictions and has a clear, straightforward syntax, making productions much easier to read and modify. YAPS may be run conveniently under program control and can maintain and run multiple rule sets and data bases. This makes YAPS a much more general tool.

2. A User's View of YAPS

Facts in YAPS may be arbitrarily nested lisp lists of atoms and integers. Patterns may contain variables which match any constant lisp expression within a fact. Variables are atoms whose first character is a hyphen (-); a hyphen appearing alone will match anything. A sample production rule in YAPS is:

    (p climb-box
       (location monkey -x)
       (reach monkey -reach)
       (box -box -boxsize)
       (location -box -x)
       (size monkey -monkeysize)
       test
       (>= -reach -boxsize)
       (<= -height (+ -monkeysize -boxsize))
       -->
       (remove 2)
       (fact reach monkey (+ -monkeysize -boxsize)))

The keyword test separates the left hand side patterns from the left hand side tests. Note that the second test references three variables. Patterns which specify that particular facts must not be in the database are also allowed. For example,

    (p find-largest
       (data -x)
       (~ (data -y) with (> -y -x))
       -->
       (let ((-ans (calculation -x)))
         (remove 1)
         (fact calculate -x -ans)
         (fact data -ans)))

This rule guarantees that when it runs, the largest data in the data base will be bound to "-x". The keyword with separates the list of not patterns from the tests associated with them in the not clause. An arbitrary number of not clauses and tests may appear on the left hand side of a production rule. The right hand side, as can be seen in this rule, can contain arbitrary lisp bodies.

In addition, YAPS makes it easy to run production systems under program control. Whenever a goal fact (in the form of a fact whose car is "goal") is added to the YAPS data base and the production rules are not already running, the production system automatically runs until all goals are removed from the system or until there are no more productions ready to fire. In addition, if there are outstanding goals in the data base and a fact is added which allows one of the rules to fire, the system is run. This gives YAPS the desired daemon behavior.

YAPS also supports multiple rule sets and data bases which can be entered and exited by the controlling system. Any fact asserted while within a given data base will be asserted in that data base. Facts can also be asserted in a global data base, causing the fact to be asserted into all the YAPS data bases and rule sets. (For more information on YAPS, see the YAPS manual [Allen 82].)

3. Implementation of YAPS

YAPS is implemented in Franz Lisp [Foderaro 80] running under Berkeley UNIX* [Joy et al 81] and using the University of Maryland flavors package ([Wood 82], [Allen et al 82]). The most important structure in YAPS is the discrimination net, which encodes the left hand sides of production rules in the system. When facts are added to the data base, they are fed into the top of the discrimination net, where they are compared against patterns appearing on the left hand sides of the production rules.

* UNIX is a trademark of Bell Laboratories.
3. Implementation of YAPS

YAPS is implemented in Franz Lisp [Foderaro 80] running under Berkeley UNIX* [Joy et al 81] and using the University of Maryland flavors package ([Wood 82], [Allen et al 82]).

    * UNIX is a trademark of Bell Laboratories.

The most important structure in YAPS is the discrimination net which encodes left hand sides of production rules in the system. When facts are added to the data base, they are fed into the top of the discrimination net where they are compared against patterns appearing on the left hand sides of the production rules. Each node in the discrimination net has a path (say "car" or "cadr") specifying a position in the fact and an associative list of expected values and child nodes. When a fact matches all the constants of a pattern, it is unified with the pattern, and a binding is generated. All partial bindings are compared against other partial bindings for patterns in the same production rule, and new bindings are generated whenever there is a match. Like OPS5, left hand side tests are performed as soon as a potential binding has values for all the variables in the test, and a binding is only made if the test succeeds. Thus, the tests are performed as early as possible and false partial bindings are pruned early. Bindings which completely match the left hand side of some production rule are placed in the conflict set and, according to the conflict resolution algorithm, one is chosen.

When facts are removed from the data base, all the bindings in which they appear are removed. This is done by associating with each fact the list of bindings in which it appears and by mapping down the list removing bindings as the fact is removed. This differs from OPS5. In OPS5, facts are recompared against the discrimination net upon removal to find bindings in which they appear.

When a production is added to YAPS, a function is defined whose arguments are the left hand side variables and whose bodies are the right hand side bodies. Thus, left hand side variables are just local variables in the right hand side function. When a production rule is run, this function is applied to the list of values of the left hand side variables. These functions may be compiled if the file containing the YAPS productions is compiled. This speeds up the lisp code both by allowing the right hand sides to be compiled and by having macros such as fact expanded at compile time. OPS5 does not define such a function for the right hand side of a production, forcing the right hand sides of rules to always be interpreted.
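The removal bookkeeping just described can be sketched in a few lines of Python (an illustration of the idea only, not the actual YAPS code; all names are invented): each fact carries the list of bindings it appears in, so deletion walks that list instead of re-matching the fact against the net:

    # Sketch of incremental match maintenance: bindings persist between
    # cycles, and each fact remembers the bindings in which it appears.

    bindings = set()      # complete or partial matches, as frozensets of facts
    by_fact = {}          # fact -> bindings containing it

    def record_binding(facts):
        b = frozenset(facts)
        bindings.add(b)
        for f in b:
            by_fact.setdefault(f, []).append(b)

    def remove_fact(fact):
        # Map down the fact's binding list; no re-comparison against
        # the discrimination net is needed.
        for b in by_fact.pop(fact, []):
            bindings.discard(b)

    record_binding([('data', 3), ('data', 7)])
    record_binding([('data', 3), ('data', 5)])
    remove_fact(('data', 3))
    print(bindings)       # set() -- both bindings went away with the fact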
4. Production Systems and Flavor Objects

Object oriented programming in Artificial Intelligence using such systems as MIT's Lisp Machine Flavors [Weinreb & Moon 81] has become popular recently and with good reason. Steps taken to merge production systems with object oriented programming can yield quite useful systems in which facts and productions manipulate objects by viewing them as atomic entities. At the same time, daemons in the form of production rules can be attached to objects and can run when certain messages are sent to the object. These objects can have their own individual rule sets and data bases but with the ability to add specific facts considered global information for the composite data base.

YAPS also provides a flavor, the daemon-mix-in flavor, which can be mixed into other flavors giving objects of those flavors pointers to the desired YAPS rule set and data base. The daemon-mix-in flavor defines messages like "buildp", "goal" and "fact" which manipulate its rule set and data base. Then, when a goal message is sent to an object, the desired goal is asserted into its data base and any production rules thus enabled are fired. As the rules fire, they may add more goals to either its own data base or to some other object's data base by sending "goal" messages to other objects. (Of course it may also send other messages to various objects as it so desires since there are no restrictions as to what may appear on the right hand side of a rule.)

Another message defined by daemon-mix-in is "get-value". This message is used to get the value of an instance variable. If the value of the variable is "UNBOUND", then a goal is asserted into the object's data base to compute the value of the variable. This provides a mechanism for slots to be filled in as their values are needed.

As an example of using production rules together with flavors, consider the problem of monitoring the usage of files in an operating system. Suppose we want to provide users with the ability to attach daemons to their files and directories specifying actions to be taken any time the file is read from, written to, edited or executed. For example, there might be a file regularly modified by a group of people. A daemon attached to the file could warn anyone who wants to edit the file in case someone else is already editing the file. Also, a system maintainer might monitor a utility for the purpose of profiling its users. He could post a daemon on that utility that would write a message to a log file whenever someone ran the program. This message would give the name of the user and the form of the call. (An operating system which has these capabilities and more is described in [Israel 82].) YAPS in conjunction with flavors does this by defining a "file" flavor and mixing in the daemon-mix-in flavor to get daemons attached to the files. Then whenever a file request was made, a message could be sent to that object in the form of a goal and appropriate actions taken, including, most likely, filling the request.
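A rough Python analogue of this arrangement (the class and names below are invented for illustration; the real mechanism is a Franz Flavors mixin) shows the goal-driven control regime: asserting a goal into an object's data base runs its rules until the goal is satisfied or nothing more can fire:

    # Sketch (not YAPS) of the daemon-mix-in idea: an object owns a rule
    # set and a data base, and asserting a goal runs its rules.

    class DaemonMixin:
        def __init__(self, rules):
            self.rules = rules        # each rule: database -> bool (did it fire?)
            self.db = set()

        def fact(self, f):
            self.db.add(f)
            if any(g[0] == 'goal' for g in self.db):
                self.run()

        def goal(self, g):
            self.fact(('goal',) + g)

        def run(self):
            fired = True
            while fired and any(g[0] == 'goal' for g in self.db):
                fired = any(rule(self.db) for rule in self.rules)

    def compute_value(db):
        # Daemon: when asked for an unbound slot, fill it in.
        if ('goal', 'value', 'x') in db:
            db.discard(('goal', 'value', 'x'))
            db.add(('value', 'x', 42))
            return True
        return False

    obj = DaemonMixin([compute_value])
    obj.goal(('value', 'x'))
    print(obj.db)                     # {('value', 'x', 42)}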
5. Conclusion

YAPS is an alternative to OPS5 as an antecedent-driven production system. It is comparable to OPS5 in terms of efficiency but allows greater flexibility in facts in the data base and in writing production rules themselves. It is also particularly suitable as the basis for a production rule system which both manipulates objects and has objects which manipulate rules. YAPS does not make the mistake of forcing a system to be completely encoded using production rules or to be controlled at the top level by production rules living in the system. Instead, YAPS is a flexible tool, which combines with other lisp tools to build systems which can take advantage of using production rules in just those places where they are needed.

Acknowledgements

I would like to thank Randy Trigg for reading a prior draft of this paper and the Maryland AI Group for their support.

REFERENCES

[Allen et al 82] Allen, E., R. Trigg, and R. Wood, Maryland Artificial Intelligence Group Franz Lisp Environment, University of Maryland CS TR-1226, October 1982.

[Allen 82] Allen, E.M., YAPS: Yet Another Production System, University of Maryland CS TR-1146, February 1982.

[Foderaro 80] Foderaro, J.K., The Franz LISP Manual, Regents of the University of California, 1980.

[Forgy 79] Forgy, C.L., On the Efficient Implementation of Production Systems, Ph.D. Thesis, Dept. of Computer Science, Carnegie-Mellon Univ., Feb. 1979.

[Forgy 81] Forgy, C.L., OPS5 User's Manual, Carnegie-Mellon University CMU-CS-78-116, 1981.

[Israel 82] Israel, B., Customizing a Personal Computing Environment Through Object-Oriented Programming, University of Maryland CS TR-1158, March 1982.

[Joy et al 81] Joy, W.N., R.S. Fabry, and K. Sklower, UNIX Programmer's Manual, Dept. of Electrical Engineering and Computer Science, Univ. of California, Berkeley, CA, June 1981.

[Weinreb & Moon 81] Weinreb, D. and D. Moon, Objects, Message Passing, and Flavors, pp. 279-313 in Lisp Machine Manual, Massachusetts Institute of Technology, Cambridge, MA, March 1981.

[Wood 82] Wood, R.J., Franz Flavors: An Implementation of Abstract Data Types in an Applicative Language, Dept. of Computer Science, Univ. of Maryland, TR-1174, June 1982.
SPECIFICATION-BASED COMPUTING ENVIRONMENTS

Robert Balzer, David Dyer, Matthew Morgenstern, Robert Neches
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90291

Abstract

This paper considers the improvements that could result from basing future computing environments on specification languages rather than programming languages. Our goal is to identify those capabilities which will significantly enhance the user's ability to benefit from the computing environment. We have identified five such capabilities: Search, Coordination, Automation, Evolution, and Inter-User Interactions. They will be directly supported by the computing environment. Hence, each represents a "freedom" that users will enjoy without having to program them (i.e., be concerned with the details of how they are achieved). They form both the conceptual and the practical basis for this computing environment. A prototype computing environment has been built which supports the first three of these capabilities and which supports a simple but real service.

Introduction

This paper considers the improvements that could result from basing future computing environments on specification languages rather than programming languages. Our goal is to identify those capabilities which will significantly enhance the user's ability to benefit from the computing environment. We have identified five such capabilities: Search, Coordination, Automation, Evolution, and Inter-User Interactions. They will be directly supported by the computing environment (the first three have been implemented in a prototype). Hence, each represents a "freedom" that users will enjoy without having to program them (i.e., be concerned with the details of how they are achieved). They form both the conceptual and the practical basis for this computing environment, for to the extent that we are successful in providing them as freedoms (specifications rather than algorithms), and hence lower the "wizard" level of users, we must provide corresponding automatic compilation techniques to keep this environment responsive, and hence, useable. None of these freedoms is by itself new. Our contribution lies in their combination and use as the basis for a specification based computing environment.

The ideas presented here have evolved from the efforts and philosophy of the SAFE group at ISI, particularly the development of the formal specification language GIST and the ability to map it via transformations into efficient implementations. We are deeply appreciative of Neil Goldman's contributions to both the conceptual design and implementation of this effort.

    This research was supported by Defense Advanced Research Projects Agency (DARPA) contract MDA903-81-C-0335.

There are some obvious dependencies among these freedoms, and this decreases the number of mechanisms needed to support them. This mechanism sharing is described in the Implementation section following consideration of the freedoms themselves.

Computing Environment Freedoms

Search

The main activity in a computing environment is building and manipulating various types of objects. Many of these objects are persistent--their lifetime exceeds, and is independent of, the programs that build and manipulate them. For objects to be persistent, they must be stored somewhere so that they can be reaccessed later. Current storage and retrieval mechanisms are inadequate and require detailed programming. Files are neither appropriately sized nor adequately indexed to be used as containers for objects.
External databases have strong limitations on the types of objects that can be stored (and on the manipulations that can be performed on stored objects). Objects stored in a programming environment are idiosyncratically indexed and retrieved.

Consider instead an environment, based on the database viewpoint, which houses a universe of persistent objects within the environment itself and which provides descriptive access to those objects. That is, rather than using some predefined criteria, ANY combination of attributes, properties, and relations can be used to access an object (or set of objects if the request was not specific enough). Objects housed within the environment can be manipulated by the full power of that environment. Any modification causes them to be automatically reindexed for later descriptive reference. This, of course, describes a fully associative entity-relationship database [Chen79] integrated with a programming language that creates and manipulates the objects in that database. All objects in the environment are represented in the database (a one-level virtual store) in terms of their relationships (including entity-class) with other objects. The only changes that can occur in this universe of objects are the database operations of creating and destroying object instances, and asserting or denying relationships between objects.

By requiring all the objects of the environment to be housed in the database, by imposing a full associativity requirement on that database, and by expressing the services of the environment totally in terms of the object (i.e., database) manipulations they perform (that is, by integrating the processing with the database), users would be freed from having to predetermine how objects ought to be indexed so that they can be later retrieved, and from programming their retrieval from that predetermined structure. Much of the complexity and difficulty of using current environments arises from the care and feeding of such "access structures". In this new environment, any classification structure merely becomes additional properties of the object which can be used, like any others, as part of a descriptive reference to that object.
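A toy sketch of such descriptive access, written here in Python purely for illustration (a real implementation would index the relationship store adaptively rather than scanning it), shows retrieval by an arbitrary combination of relationships:

    # Objects are retrieved by ANY combination of relationships,
    # with no predeclared access paths.

    relationships = set()   # (relation, object, value) triples

    def assert_rel(rel, obj, val):
        relationships.add((rel, obj, val))

    def describe(**criteria):
        # Return every object whose relationships satisfy all the criteria.
        objs = {o for (_, o, _) in relationships}
        return {o for o in objs
                if all((r, o, v) in relationships for r, v in criteria.items())}

    assert_rel('office', 'smith', 'B-12')
    assert_rel('secretary', 'smith', 'jones')
    assert_rel('office', 'brown', 'B-12')
    print(describe(office='B-12'))                       # {'smith', 'brown'}
    print(describe(office='B-12', secretary='jones'))    # {'smith'}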
Coordination (Consistency)

Given the ability to create and manipulate persistent objects and to access them descriptively, the next most important capability is to coordinate sets of such objects -- that is, keep them consistent with one another. Whenever one object in such a coordinated set changes, the others must be appropriately updated. Currently, we attempt to realize such coordination through procedural embedding. That is, into each service that modifies such an object we insert code to update the others. Since the consistency criteria are not explicit, this currently is necessarily a manual task and is error prone, both in the placement and form of the required update. Such manual procedural embeddings are a key reason current systems are complex. This problem is exacerbated by the fact that the services, and the relationships among objects affected by these services, are evolving independently.

Consider instead making the coordination rules explicit so that coordinated objects are defined in terms of each other. Each definition is expressed in terms of a mapping (called a perspective) which generates a dependent object (called a view) from one or more objects with which it is coordinated. Whenever a coordinated object changes, the view can be updated automatically (a la Thinglab [Borning77] and VisiCalc [Wolverton81]). Views are first-class objects [Kay74, Ingalls78]: they can be accessed descriptively, and, if the back mapping is defined, they can be modified, causing the appropriate changes in the "defining" objects. (Some of these back-mappings can be inferred automatically [Novak83]; others are underdetermined and must be explicitly defined.)

Such coordination represents a major departure from existing systems. Coordinated objects are tightly coupled, so that changes in one are automatically reflected in the others. With such a mechanism, once the coordination criteria (mappings) are stated, the system could assume full responsibility for maintaining consistency among coordinated objects. Changes to existing services or addition of new ones could be accommodated automatically. Furthermore, the system could then employ lazy evaluation [Friedman76] to delay updating views until those updates were actually required.

The reason that the terms, perspective and view, were chosen, respectively, for the mapping and the object produced is that, in addition to its intended use as the mechanism to keep objects coordinated, perspectives will also be used as the mechanism by which a user displays and manipulates objects. Displays are just particular views (which like other views must be kept coordinated with the object being viewed) for which the system knows how to create a picture on the user display screen and how user gestures (whether by entering text, making selections, and/or graphical motion) change the display (and hence, both the picture on the screen and, via a back mapping, the object being viewed).

Coordination is thus an extremely powerful mechanism. It not only provides an explicit mechanism for maintaining consistency between objects, but also provides the mechanism by which manipulatable filtered (i.e., partial) views could be constructed for both internal and external (display) use. The user interface to this environment would therefore be a set of perspectives (mappings) used for display. Through them the user could observe objects, watch them change, invoke tools and services to manipulate them, or change them himself. This user interface would be fully programmable and extensible (see Evolution below).

As an example of the power of the coordination mechanism, justified text is just a view of text, and object code is just a view of source code. By defining justification and compilation as the perspectives which produce those views, these processes will be automatically invoked as needed. The maintenance task (coordinating the objects) will shift from the user to the system.
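The perspective/view coupling can be sketched as follows (an illustrative Python fragment with invented names; real perspectives would be declarative and support back-mappings): a view is generated by a mapping over a defining object and is recomputed whenever that object changes:

    # Sketch of coordination: a view is defined by a perspective (a mapping)
    # over defining objects and is updated automatically when they change.

    class Coordinated:
        def __init__(self, value):
            self._value = value
            self.views = []                     # (perspective, view) pairs

        def set(self, value):
            self._value = value
            for perspective, view in self.views:
                view.set(perspective(value))    # automatic, cascading update

        def get(self):
            return self._value

    def view_of(obj, perspective):
        v = Coordinated(perspective(obj.get()))
        obj.views.append((perspective, v))
        return v

    source = Coordinated("some source text")
    upper = view_of(source, str.upper)          # a trivial stand-in perspective
    source.set("new text")
    print(upper.get())                          # NEW TEXT -- view stayed current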
Automation

In interacting with a computing environment many repetitive sequences are employed. Programming language based environments provide the ability to bundle such repetitive sequences as macros and/or procedures. But such macros and/or procedures still have to be invoked explicitly. The user is required to remain in the loop, having to perform the pattern recognition function and determine when and upon which objects to invoke the macros and/or procedures.

By adding demons to the computing environment, users could be freed from being in-the-loop through automating the way that their environment reacts to specified situations. Those situations would become the firing pattern of the demons, and the responses become their bodies. This would allow users to define active "agents" operating on their behalf which autonomously monitor the computing environment for those situations for which a response has been defined. This freedom allows users to focus their attention on the more idiosyncratic aspects of the computing while their agents handle the more regularized ones. In particular, these agents could operate in the absence of the user, responding to interactions initiated from other users' environments (see Inter-User Interactions below).

This automation mechanism not only frees users from repetitive tasks, but also changes their perception of their environment. First, it emphasizes the data base orientation of the environment by basing responses on situations (the state of some set of objects) rather than on the processes (code) that produced those situations. As we will see in the next section, this data base orientation greatly facilitates evolution of the tools and services in the environment. Second, these responses convert the previously passive environment into an active one.

As an example of automation, consider an agent which responds to the arrival of a message by presorting it for the user into some predefined category on the basis of the sender, the topic, and/or the content of the message, and then decides whether to inform the user of its arrival based on the user's current activity.
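A minimal sketch of such demons (illustrative Python; the firing pattern and body here are invented examples) pairs a situation predicate over the data base with a response that runs whenever the state changes:

    # Sketch of automation demons: a firing pattern (predicate over the
    # object data base) paired with a body, checked on every state change.

    demons = []

    def demon(pattern, body):
        demons.append((pattern, body))

    def update(db, fact):
        db.add(fact)
        for pattern, body in demons:
            if pattern(db):
                body(db)

    # Example agent: presort an arriving message by sender.
    demon(lambda db: any(f[0] == 'new-message' for f in db),
          lambda db: print('filing message from',
                           next(f[1] for f in db if f[0] == 'new-message')))

    db = set()
    update(db, ('new-message', 'jones'))   # prints: filing message from jones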
Evolution (Perspicuity)

One of the key problems with traditional computing environments is the inability to modify the tools and services of those environments. Programming language based environments improve this situation by coding the tools and services in the language of the environment (with which the user is necessarily familiar) and by making the source code available to the user. To the extent that the user can understand the tools and services, he can modify them. Once the commitment has been made to provide accessible source code, evolvability is almost completely an understandability issue.

This is another way that adopting a specification-based approach has a big payoff. Besides alleviating implementation concerns, each of the specification freedoms improves understandability by allowing the code to more closely describe intent rather than implementation. As a prime example, consider the use of the "automation" demons, described in the previous section, to provide situation-based extensions. Rather than procedurally embedding the extension at each appropriate place in the existing tools or services, a single demon is created that specifies when, in terms of the objects in the environment (i.e., a situation), the extension is appropriate. By localizing the extension and specifying the situation to which it is to be applied, the understandability of the resulting service is greatly enhanced. We believe that such rule-based technology has much wider applicability than expert systems.

But tool and service understandability need not be based solely on the readability of the source code. These tools and services manipulate objects in the environment. That is, they have behavior, and that behavior provides a strong basis for understandability [Balzer69]. By making the behavior explicit in the form of a recorded history (as an object in the environment) the full power and extensibility of the viewing (coordination) mechanism could be used to understand the recorded behavior. The recorded history would include attribution so that the old debugging problem of determining how an object reached its current state and who was responsible for it will finally be resolved. Recording history is a major design commitment of our computing environment which provides the basis for its behavior based understandability. To the extent that we are successful in providing an evolvable, integrated, and automated computing environment, the need for such behavior based understanding will correspondingly increase.

The recorded history also provides the basis for an important habitability feature--the ability to undo operations [Teitelman72]. There are three reasons why such a capability is crucial. First, we are fallible--from lack of forethought or just plain carelessness. Second, no matter how consistent and well integrated the environment is, we will occasionally be unpleasantly surprised at the effect of an operation, or the situation in which it was invoked. Finally, users need a convenient way to experiment, to learn about unfamiliar services, to debug their own additions to the environment, and simply just to see the effects of some course of action. For all these reasons, an undo mechanism which can be invoked after the operation(s) to be undone is a crucial habitability feature (as shown by its popularity and use in the Interlisp [Teitelman78] environment). Such a facility can be easily constructed from the recorded history.
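One way such an undo facility could be built on the recorded history (a Python sketch under the assumption that every primitive operation is logged with its inverse; not the authors' implementation) is:

    # Sketch of undo from history: each primitive data base operation is
    # logged with its inverse, and undo replays the inverses backward.

    history = []   # inverse operations; a real system would add attribution

    def do(db, op, inverse):
        op(db)
        history.append(inverse)

    def undo(db, n=1):
        for _ in range(n):
            history.pop()(db)

    db = set()
    do(db, lambda d: d.add('office smith B-12'),
           lambda d: d.discard('office smith B-12'))
    do(db, lambda d: d.add('office smith C-3'),
           lambda d: d.discard('office smith C-3'))
    undo(db)
    print(db)   # {'office smith B-12'} -- the last operation was undone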
Inter-User Interaction

So far we have examined the freedoms of search, coordination, automation, and evolution. These four freedoms resolve the major difficulties encountered within a computing environment. But our future computing environments cannot be self-contained. They must interact with the environments of other users and with various shared services. As was the case when we considered persistent objects, files are an inappropriate mechanism (though they are the basis for existing inter-user interactions). Inter-user interactions require no less powerful nor rich a set of capabilities than those needed within a single environment. Objects need to be accessed, coordinated, and manipulated across environment boundaries. The boundary between environments has to be suppressed so that the full power of the computing environment can be applied to inter-user interactions.

One remaining issue must be addressed. Within someone else's environment, our rights and privileges are very different from those within our own. Within our own environment, we can do as we please--accessing any object, manipulating it, and defining the rules of consistency which it must obey. Within someone else's environment, we have no rights and privileges. We must ask permission for anything within someone else's environment. We do this by dividing the notion of an active object [Kay74, Hewitt77] into an active intermediary (programmed agent) and a (passive) object owned by that intermediary. If we are manipulating (including accessing) an object that we own, then the manipulation is performed directly. However, an attempt to manipulate someone else's object is treated as (i.e., translated to) a request to the owner of that object, which can be either honored or refused. This specification freedom enables object owners to define external access and manipulation rights that allow others to manipulate objects without respect to environment boundaries as long as they don't exceed those rights. Privacy and/or access can be programmed on a local object-by-object basis and can be both state and requestor dependent.

Beyond Freedoms: General Support

In addition to the specification freedoms described above, two other capabilities must be available within the computing environment to simplify service creation and improve the habitability of the environment.

First is a comprehensive set of general object manipulations. Since the main activity in any computing environment is building and manipulating objects, such a set of widely applicable object manipulations is essential [Goldstein80]. These manipulations include object definition (since the class of object types is not fixed), instantiation (since the set of objects of each type is not fixed), examination (often called browsing in interactive systems), modification, and destruction. To the extent that traditional services have employed idiosyncratic versions of these capabilities, providing a comprehensive set of widely applicable object manipulations will reduce service implementation effort while improving the consistency and coherency (and hence habitability) of the environment.

As an example of such a reduction, consider an electronic mail service. The only portions of this service which must be specially built are the definition of the object message and the mail service specific operations of sending a completed message (transferring a copy to each of its addressee attributes) and answering a message (partially constructing a message with the addressees and the beginning of the body ("In reply to your message of ...") filled in). All of the other capabilities normally associated with a mail service, such as comparing messages, examining them, editing them, filing them, retrieving them, deleting them, etc., are provided through the general object manipulation capabilities of the environment. Clearly, such reductions in the scope of service implementation greatly facilitate the creation of new services.

The second additional capability required within the computing environment is a suitable user interface. As previously discussed under the coordination freedom, the user interface will be a set of perspectives (mappings) used to display and manipulate objects. By defining a "service invocation" as an object, it can be instantiated, displayed, and manipulated by this interface, and by defining a service on such objects which invokes the named service on the specified objects (parameters), then this interface can be used as a "command interpreter" to specify the parameters needed for some service and to invoke it. In addition, since a wide variety of views will already be needed for user browsing, these same views can be used to display the effects of services. In fact, since all the effects of a service invocation are recorded in the history, a much more sophisticated display mechanism can eventually be created, external to the services, which examines the effects and determines what to display based not only on these effects, but also the current user context including what is currently displayed on the screen and on various user declarations of personal preference. By removing both input (service invocation) and output (how to display effects) from service definitions, their scope will be reduced to a kernel consisting of only the functional object manipulation effects of the service. This will greatly simplify service creation while simultaneously providing a more powerful comprehensive user interface.
Implementation

A working prototype of this computing environment exists. A small but real service has been constructed. This service maintains a portion of the ISI employee data base including such information as office, phones, secretary, directory name and electronic mail location. It uses coordination rules to ensure that a person's backup phone is the primary phone of his secretary and that the person's primary phone is the phone in his office. It uses an automation rule to send a message to the receptionist whenever someone's office is changed. It also includes a service specific view which generates an updated phone-list incorporating all of the above information in a predefined format. We hope to maintain this data base through our specification based computing environment once the prototype becomes sufficiently robust.

Three of the five freedoms (search, coordination, and automation) have been implemented. All three are based on existing AP3 [Goldman82] capabilities. The prototype currently "compiles" the service into the corresponding AP3 calls. Coordination and automation both translate into AP3 demons. The AP3 demon mechanism itself piggybacks on AP3's associative database retrieval mechanism. So all three implemented freedoms rely upon this single powerful facility. In addition, both aspects of the "General Support"--a comprehensive set of object manipulation facilities and a (primitive) interactive user interface--have been built. These object manipulation facilities enable one to interactively view, modify, and extend both instances of objects and object definitions themselves.

Our current efforts are focused on creation of a suitable language for expressing actions, coordinations, and automations and on recording history so that we can address the comprehension and modifiability requirements of the evolution freedom. Once we have completed the conceptual framework, a major effort will be focused on optimizing the specification freedoms introduced, especially coordination, to eliminate unneeded recalculations and to incrementally update those that are required.
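The rules of this prototype service can be paraphrased as follows (illustrative Python only; the prototype itself compiles such rules into AP3 demons, and the names and numbers here are invented):

    # Python paraphrase of the prototype's coordination and automation rules.

    people = {'smith': {'secretary': 'jones', 'office': 'B-12'},
              'jones': {'office': 'B-14'}}
    office_phone = {'B-12': 'x4821', 'B-14': 'x4822'}

    def primary_phone(person):
        # Coordination rule: a person's primary phone is the phone in his office.
        return office_phone[people[person]['office']]

    def backup_phone(person):
        # Coordination rule: the backup phone is the secretary's primary phone.
        return primary_phone(people[person]['secretary'])

    def change_office(person, office):
        people[person]['office'] = office
        # Automation rule: tell the receptionist whenever an office changes.
        print('to receptionist:', person, 'moved to', office)

    print(primary_phone('smith'), backup_phone('smith'))   # x4821 x4822
    change_office('smith', 'B-14')
    print(primary_phone('smith'))                          # x4822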
Conclusion

We have examined current computing environments and tried to understand the causes for their limitations, particularly in the areas of integration and habitability. Operating system based computing environments must be integrated at the subsystem level. The narrow communication channel imposed via files (whether real or in-core) appears to fundamentally preclude tight integration.

The situation is very different for programming language based computing environments. They appear structurally ideal for tight integration. Arbitrary objects can be defined and shared. The full range of control structures in the programming language can be used to tie tools and services together. While this programming-language basis is adequate for integration, it causes habitability problems. The mechanisms are simply too low level (detailed) for the computing environment task. Rather than describing what to do, users must program how to do it, precisely because they are dealing with a programming language.

The obvious solution is to augment the computing environment language with higher level specification constructs. Each such construct represents a freedom that users can enjoy (because they no longer have to program the construct) and a responsibility the system must accept to provide an efficient implementation of the construct to keep the environment responsive. We have identified five such freedoms. They are:

1. Search--the ability to locate objects via descriptive reference.

2. Coordination--the ability to state the consistency criteria among objects and to have them maintained as any of the objects are changed.

3. Automation--the ability to define the autonomous response to specified situations so that the user need not remain in the loop for repetitive operations.

4. Evolution--the ability to modify and extend existing services through increased perspicuity of those services and their behavior.

5. Inter-User Interaction--the ability to determine how others will be allowed to access your objects, as they determine.

None of these freedoms is, by itself, new. Our contribution lies in their combination and use as the basis for a specification based computing environment. We have no doubt that such freedoms, together with a comprehensive set of general object manipulations and user interface capabilities, will greatly facilitate service creation and markedly improve the habitability of future computing environments.

These freedoms must be supported with efficient mechanisms. Two mechanisms seem most crucial. The first is an adaptive associative entity-relationship database. This required integration of techniques developed in the database, programming language and artificial intelligence fields [Goldman82] and has been used to implement the first three freedoms. The second is view maintenance. It requires the integration of techniques for obsolescence detection, lazy (and opportunistic) evaluation, generation of back-mappings, and, most important, incremental update.

The open question is how long it will take to provide this underlying support technology. Our working prototype is merely a first step. All the hard optimization problems and many of the conceptual modeling ones are still ahead of us.

References

[Balzer 69] R. Balzer, "Exdams - Extensible Debugging and Monitoring Systems", Proceedings of the Spring Joint Computer Conference, 1969, pp. 567-580.

[Borning 77] A. Borning, "Thinglab -- An Object Oriented System for Building Simulation Using Constraints", Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Mass., Aug. 1977.

[Chen 79] P. P. Chen (ed.), Proceedings of the International Conference on Entity-Relationship Approach to Systems Analysis and Design, Los Angeles, Dec. 1979.

[Friedman 76] D. P. Friedman and D. S. Wise, "CONS Should Not Evaluate Its Arguments", in Michaelson and Milner (eds.), Automata, Languages, and Programming, Edinburgh University Press, 1976, pp. 257-284.

[Goldman 82] N. M. Goldman, "AP3 Reference Manual", USC/Information Sciences Institute, June 1982.

[Goldstein 80] I. Goldstein and D. Bobrow, "Descriptions for a Programming Environment", Proceedings of the First Annual Conference of the American Association for Artificial Intelligence, Stanford, Calif., 1980.

[Hewitt 77] C. E. Hewitt and H. Baker, "Laws for Communicating Parallel Processes", Proceedings of IFIP-77, Toronto, Aug. 1977.

[Ingalls 78] D. Ingalls, "The Smalltalk-76 Programming System: Design and Implementation", in 5th ACM Symposium on Principles of Programming Languages, ACM, 1978.

[Kay 74] A. Kay, "SMALLTALK, A Communication Medium for Children of All Ages", Xerox Palo Alto Research Center, Palo Alto, Calif., 1974.

[Novak 83] G. Novak, Jr., "Knowledge-based Programming Using Abstract Data Types", AAAI Proceedings, 3rd National Conference on Artificial Intelligence, Wash., D.C., 1983.

[Teitelman 72] Warren Teitelman, "Automated Programming: The Programmer's Assistant", Proceedings of the Fall Joint Computer Conference, Dec. 1972.
[Teitelman 78] Warren Teitelman, Interlisp Reference Manual, Xerox Palo Alto Research Center, Oct. 1978.

[Wolverton 81] Van Wolverton, VisiCalc, IBM Personal Computer Manual, Personal Software Inc., 1981.
MASSIVELY PARALLEL ARCHITECTURES FOR AI: NETL, THISTLE, AND BOLTZMANN MACHINES

Scott E. Fahlman & Geoffrey E. Hinton
Computer Science Department, Carnegie-Mellon University
Pittsburgh PA 15213

Terrence J. Sejnowski
Biophysics Department, The Johns Hopkins University
Baltimore MD 21218

ABSTRACT

It is becoming increasingly apparent that some aspects of intelligent behavior require enormous computational power and that some sort of massively parallel computing architecture is the most plausible way to deliver such power. Parallelism, rather than raw speed of the computing elements, seems to be the way that the brain gets such jobs done. But even if the need for massive parallelism is admitted, there is still the question of what kind of parallel architecture best fits the needs of various AI tasks. In this paper we will attempt to isolate a number of basic computational tasks that an intelligent system must perform. We will describe several families of massively parallel computing architectures, and we will see which of these computational tasks can be handled by each of these families. In particular, we will describe a new architecture, which we call the Boltzmann machine, whose abilities appear to include a number of tasks that are inefficient or impossible on the other architectures.

FAMILIES OF PARALLEL ARCHITECTURES

By "massively parallel" architectures, we mean machines with a very large number of processing elements (perhaps very simple ones) working on a single task. A massively parallel system may be complete and self-contained or it may be a special-purpose device, performing some particular task as part of a larger system that contains other modules of a different character. In this paper we will focus on the computation performed by a single parallel module, ignoring the issue of how to integrate a collection of modules into a complete system.

    * Scott Fahlman is supported by the Defense Advanced Research Projects Agency, Department of Defense, ARPA Order 3597, monitored by the Air Force Avionics Laboratory under contract F33615-81-K-1539. The other two authors are supported by grants from the System Development Foundation. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.

One useful way of classifying these massively parallel architectures is by the type of signal that is passed among the elements. Fahlman (1982) proposes a division of these systems into three classes: marker-passing, value-passing, and message-passing systems.

Message-passing systems are the most powerful family, and by far the most complex. They pass around messages of arbitrary complexity, and perform complex operations on these messages. Such generality has its price: the individual computing elements are complex, the communication costs are high, and there may be severe contention and traffic congestion problems in the network. Message passing does not seem plausible as a detailed model of processing in the brain. Such models are being actively studied elsewhere (Hillis, 1981; Hewitt, 1980) and we have nothing more to say about them here.

Marker-passing systems, of which NETL (Fahlman, 1979) is an example, are the simplest family and the most limited. In such systems, the communication among processing elements is in the form of single-bit markers.
Each "node" element has the capacity to store a few distinct marker bits (typically 16) and to perform simple Boolean operations on the stored bits and on marker bits arriving from other elements. These nodes are connected by hardware "links" that pass markers from node to node, under orders from an external control computer. The links are, in effect, dedicated private lines, so a lot of marker traffic can proceed in parallel. A node may be connected to any number of links, and it is the pattern of node-link connections that forms the system's long-term memory. In NETL, the elements are wired up to form the nodes and links of a semantic network that represents some body of knowledge. Certain common but computation-intensive searches and deductions are accomplished by passing markers from node to node through the links of this network. A key point about marker-passing systems is that there is never any contention due to message traffic. If many copies of the same marker arrive at a node at once, they are simply OR'ed together.

Value-passing systems pass around continuous quantities or numbers and perform simple arithmetic operations on these values. Traditional analog computers are simple value-passing systems. Like marker-passing systems, value-passing systems never suffer from contention. If several values arrive at a node via different links, they are combined arithmetically and only one combined value is received. Many of the iterative relaxation algorithms that have been proposed for solving low-level vision problems are ideally suited to value-passing architectures, and so are spreading-activation models of semantic processing (Davis and Rosenfeld, 1981; Anderson, 1983).

At CMU we have done some preliminary design work on a machine that we call Thistle. This system combines the marker-passing abilities of NETL with value-passing. Each element of the Thistle machine has storage for 16 single-bit markers and 4 eight-bit values. The values can be added, multiplied, scaled, and compared to one another. Links in the Thistle system pass a value from one node to another, perhaps gated by various markers and multiplied by a "weight" associated with the link. In Thistle, the values converging on a node can be summed or combined by MIN or MAX.

Both NETL and Thistle use a local representation for their knowledge: each concept or assertion resides in a particular processing element or connection. If a hardware element fails, the corresponding knowledge is lost. It has been suggested many times that a distributed representation, in which a concept is represented by some pattern of activation in a large number of units, would be more reliable and more consistent with what is known about the workings of the brain. Such systems are harder to analyze, since the behavior of the system depends on the combined action of a large number of elements, no one of which is critical. However, distributed systems offer certain computational advantages in addition to their inherent reliability. The Boltzmann architecture, described in the next section, is a variant of the value-passing architecture that uses distributed representations and probabilistic processing elements. The randomness is actually beneficial to the system, allowing it to escape from local minima during searches.
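The contrast between the two families can be reduced to how a node combines converging traffic, sketched here in Python for illustration (the marker masks and the choice of sum, MIN or MAX follow the descriptions above; the function names are invented):

    # A node combines converging traffic: markers are simply OR'ed
    # (no contention), values are combined arithmetically.

    def combine_markers(incoming):          # incoming: iterable of marker bit masks
        out = 0
        for m in incoming:
            out |= m                        # duplicate markers just merge
        return out

    def combine_values(incoming, how=sum):  # Thistle-style: sum, min, or max
        return how(incoming)

    print(bin(combine_markers([0b0011, 0b0101])))   # 0b111
    print(combine_values([1, 2, 3]))                # 6
    print(combine_values([1, 2, 3], max))           # 3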
THE BOLTZMANN MACHINE

The Boltzmann architecture is designed to allow efficient searches for combinations of "hypotheses" that maximally satisfy some input data and some stored constraints. Each hypothesis is represented by a binary unit whose two states represent the truth values of the hypothesis. Interactions between the units implement stored knowledge about the constraints between hypotheses, and external input to each unit represents the data for a specific case. A content-addressable memory can be implemented by using distributed patterns of activity (large combinations of hypotheses) to stand for the kinds of complex items for which we have words. New items are stored by modifying the interactions between units so as to create new stable patterns of activity, and they are retrieved by settling into the pattern of activity under the influence of an external input vector which acts as a partial description of the required item.

A good way to approach the best-fit problem is to define a measure of how badly the current pattern of activity in a module fits the external input and the internal constraints, and then to make the individual hardware units act so as to reduce this measure. Hopfield (1982) has shown that an "energy" measure can be associated with states of a binary network, and we generalize this measure to include sustained inputs from outside the network:

    E = -(1/2) \sum_{ij} w_{ij} s_i s_j - \sum_i (q_i - \theta_i) s_i        (1)

where q_i is the external input to the i-th unit, w_{ij} is the strength of connection (synaptic weight) from the j-th to the i-th unit, s_i is a boolean truth value (0 or 1), and \theta_i is a threshold.

A simple way to find a local energy minimum in this kind of network is to repeatedly switch each unit into whichever of its two states yields the lower total energy given the current states of the other units. If hardware units make their decisions at random, asynchronous moments and if transmission times are negligible so that each unit always "sees" the current states of the other units, this procedure can only decrease the energy, so the network must settle into an energy minimum. If all the connection strengths are symmetrical, which is typically the case for constraint satisfaction problems, each unit can compute its effect on the total energy from information that is locally available. The difference between the energy with the k-th unit false and with it true is just:

    \Delta E_k = \sum_i w_{ki} s_i + q_k - \theta_k        (2)

So the rule for minimizing the total energy is to adopt the true state if the combined external and internal input to the unit exceeds its threshold. This is just the familiar rule for binary threshold units.

It is possible to escape from poor local minima and find better ones by modifying the simple rule to allow occasional jumps to states of higher energy. At first sight this seems like a messy hack which can never guarantee that the global minimum will be found. However, the whole module will behave in a useful way that can be analyzed using statistical mechanics provided that each unit adopts the true state with a probability given by:

    p_k = 1 / (1 + e^{-\Delta E_k / T})        (3)

where T is a scaling parameter that acts like the temperature of a physical system. This rule, which resembles the input-output function for a cortical neuron (Hinton and Sejnowski, 1983a), ensures that when the system has reached "thermal equilibrium" the relative probability of finding it in two global states is a Boltzmann distribution and is therefore determined solely by their energy difference:

    P_\alpha / P_\beta = e^{-(E_\alpha - E_\beta)/T}        (4)
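A small Python sketch of this decision rule together with an annealing schedule (illustrative only: the three-unit network, weights and schedule below are arbitrary) shows the whole settling procedure:

    # Boltzmann settling: each unit adopts the true state with probability
    # 1/(1 + exp(-dE/T)), and T is gradually reduced (simulated annealing).

    import math, random

    random.seed(0)
    W = {(0, 1): 2.0, (1, 2): 2.0, (0, 2): -1.0}   # symmetric weights of a toy network
    q = [0.5, 0.0, 0.5]                            # external inputs
    theta = [1.0, 1.0, 1.0]                        # thresholds
    s = [random.randint(0, 1) for _ in range(3)]   # initial truth values

    def w(i, j):
        return W.get((i, j)) or W.get((j, i)) or 0.0

    def delta_E(k):
        # Energy gap for unit k, as in Eq. (2).
        return sum(w(k, i) * s[i] for i in range(3) if i != k) + q[k] - theta[k]

    for T in [4.0, 2.0, 1.0, 0.5, 0.25]:           # anneal: start hot, cool down
        for _ in range(20):                        # random asynchronous updates
            k = random.randrange(3)
            s[k] = 1 if random.random() < 1.0 / (1.0 + math.exp(-delta_E(k) / T)) else 0

    print(s)   # a low-energy combination of hypotheses under these constraints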
If T is large, equilibrium is reached rapidly but the bias in favor of the lower energy states is small. If T is small, the bias is favorable but the time required to reach equilibrium is long. One way to beat this trade-off is to start with T large and then reduce it (Kirkpatrick, Gelatt, & Vecchi, 1983).

An important consequence of achieving a Boltzmann distribution is that it allows several simple learning rules which modify the probability of a global state by modifying the individual connection strengths. At equilibrium, the probability of a state is a simple function of its energy (Eq. 4), and the energy is a linear function of the weights between pairs of units that are active in that state (Eq. 1). This allows us to compute the derivative of the probability of a global state with respect to each individual weight. Given this derivative, the weights can be changed so as to make the probabilities of global states approach any desired set of probabilities, and so it is possible to program a Boltzmann machine at the level of desired probabilities of states of whole modules, without ever mentioning the weights (Hinton & Sejnowski, 1983a). This kind of deliberate manipulation of probabilities requires a "programmer" who specifies what the probabilities should be. A more powerful learning procedure that does not require a "programmer" is also possible in these networks. The procedure modifies the weights so as to generate good internal models of the structure of an environment. There is not space here to describe this procedure (see Hinton & Sejnowski, 1983b for details).

COMPUTATIONAL PROBLEMS

One recurrent theme in the history of AI is the discovery that certain aspects of intelligence could be modeled in some elegant way, if only we had enough computing power. Once a task is understood in these terms, the search begins for ways to provide that power or to come up with tricks that reduce the amount of computation required. Massive parallelism provides us with a new tool for attacking some of these computational problems. In this section we will identify some fundamental computational abilities that any truly intelligent system will have to possess, and we will see how well the parallel architectures described above can handle each of these tasks.

In what follows, we will focus on tasks that have to do with recognition and search in a very large space of stored descriptions, but a key point is that these abilities are also important in planning and inference. For example, the various recognition processes described here may be used to select rules and actions in some sort of production system. In such systems, sequential behavior would be driven by a series of massively parallel recognition steps.

Set Intersection

Recognition can be viewed as the process of finding, in a very large set of stored descriptions, the one that best fits a set of observed features. In its simplest form, this can be viewed as a set-intersection problem. Each observable feature is associated with a set of items that exhibit that feature. Given a number of observed features, we want to find the item or items in memory that exhibit all of these features; that is, we must intersect the sets associated with the observed features to find the common members.
This set-intersection operation is discussed at length in Fahlman (1979). It is a well-defined operation that comes up very frequently in AI knowledge-base systems. On a serial machine, set-intersection takes time proportional to the size of the smallest of the sets being intersected, but frequently all of the sets are quite large. In a parallel marker-passing system such as NETL, such set intersections are done in a single operation, once the members of each set have been marked with a different marker. The system simply asks (in a single cycle) for elements that have collected all of the markers. Value-passing systems can do as well by marking the members of each set with one unit of activation and then looking for units whose activation is over some threshold.

The Boltzmann machine can also intersect sets in a single settling, at least in simple cases. Consider, for instance, a representational scheme in which each active hardware unit represents a very large set -- the set of all items whose patterns have that unit active. A more specific set is represented by a combination of active units, and the intersection of several specific sets is represented by the union of these combinations. The union of the active units acts as an intensional representation of the intersection -- it can be formed even if no known item lies in all the sets. Given this intensional description, the problem of finding the item that fits it is just the problem of activating the additional units in the pattern for that item. This is the kind of pattern completion task which the Boltzmann machine can solve in a single settling (Hinton, 1981a).
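The marker-passing version of the operation can be sketched serially in Python (an illustration of the idea; on a machine like NETL each sweep would be a single parallel operation rather than a loop):

    # Marker-passing set intersection: mark each feature's set with a
    # distinct marker bit, then select nodes holding all the markers.

    def intersect_by_markers(sets):
        markers = {}                          # node -> mask of collected marker bits
        for bit, members in enumerate(sets):
            for node in members:              # one marker sweep per set
                markers[node] = markers.get(node, 0) | (1 << bit)
        all_bits = (1 << len(sets)) - 1
        return {n for n, m in markers.items() if m == all_bits}

    has_wings = {'sparrow', 'bat', 'plane'}
    is_animal = {'sparrow', 'bat', 'lizard'}
    print(intersect_by_markers([has_wings, is_animal]))   # {'sparrow', 'bat'}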
Transitive closure

In knowledge-base systems it is frequently necessary to compute the closures of various transitive relations. For example, we might need to mark all of the animals in the data base, perhaps because we want to intersect this set with another. If the "is a" relation is transitive, a reptile is an animal, and a lizard is a reptile, then lizards are animals. We must therefore mark not only those items whose membership in the animal class is explicitly stated, but also those that inherit this membership through a chain of "is a" statements. The "is a" relation is the most important of the transitive relations in most data bases, but we might also want to compute closures over relations such as "part of", "bigger than", "later in time", etc.

In a serial machine, the computation of a transitive closure requires time proportional to the size of the answer set. In a marker-passing machine, it takes time proportional to the length of the longest chain of relations that has to be followed. If the relations form a single long chain these times are identical, but if they form a short bushy tree, the marker-passing system can be very much faster. Value-passing systems that use local representations can simulate marker-passing systems on this task, and so get the same sort of performance.

The Boltzmann architecture does not handle this task so cleanly. Closure over the "is a" relationship can be handled by making the pattern of active units for an item include the patterns for all items above it in the type hierarchy. By starting with a part of this pattern and completing it (that is, dropping into an energy minimum in which additional units are turned on) we can in effect compute the closure of "is a". However, it is not yet known whether this technique will work for data bases with very large, tangled type hierarchies, and it cannot be simply extended to handle additional transitive relations such as "part of". Hinton (1981b) describes an encoding of "part of" hierarchies in a Boltzmann-like system, but in that model the "part of" hierarchy must be traversed sequentially.

Contexts and partitions

Some information in a knowledge base is universal, but much of it is valid only in certain contexts: times, places, imaginary worlds or hypothetical states. At any given time, the system is working within some set of nested and overlapping contexts: it must have access to the bundle of information associated with each of those contexts and to the universal information, but not to information that is only valid in other contexts. Each context acts like a transparent overlay to the knowledge base, adding a bundle of new facts or occasionally covering something up. In the presence of multiple overlapping partitions, a serial machine must check each assertion for membership in one of the active partitions before that assertion can be used. This can be a time-consuming task.

Marker-passing systems handle this easily. The tree of active contexts is marked using the transitive closure machinery. This mark is then propagated to all of the assertions associated with these contexts, activating them: assertions without this mark are inactive in subsequent processing. In effect, we are using one set of markers to gate the passage of other markers: many simple Boolean operations are performed during each cycle. The value-passing and Boltzmann architectures have similar abilities: the state of some units can cause other units to behave normally or turn off. In these systems we can also fade contexts in and out gradually, if that is what the problem requires. (See Berliner, 1979.)

Best-match recognition

The set-intersection computation described above is sufficient if the features are discrete, noise-free, and if every member of a class exhibits all of the associated features. Few real-world recognition tasks approach this ideal. More often, the task is to find the stored description that best matches a set of features, even if the match is imperfect. Some of the features may be observed with high confidence, while others are weak. Some observations may fall on the boundary between two features or may be smoothly continuous.

Marker-passing systems are very poor at handling imperfect matches of this sort. Value-passing systems like Thistle are ideal for this: there can be a very large number of observations, each sending some amount of activation to a number of hypotheses; the size of this activation depends on the confidence level of the observation and the strength of the connection between the feature and the hypothesis. Hypotheses may also be given some extra activation on the basis of top-down expectations. After all of these votes have been collected, the system simply asks for the element with the most activation to identify itself -- this is our best match. The Boltzmann machine does almost as well as Thistle in cases like this: in clear-cut cases it finds the global energy minimum corresponding to the description that best fits the weighted combination of observed features and expectations. If there are several good descriptions it is biased towards the best.
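The value-passing scheme can be sketched as a weighted vote (illustrative Python; the features, weights and confidences below are invented):

    # Best-match recognition by value passing: each observation sends
    # weighted activation to the hypotheses it supports, and the most
    # active hypothesis wins even when no match is perfect.

    def best_match(observations, links):
        # links: (feature, hypothesis) -> connection strength
        activation = {}
        for feature, confidence in observations:
            for (f, hypothesis), strength in links.items():
                if f == feature:
                    activation[hypothesis] = (activation.get(hypothesis, 0.0)
                                              + confidence * strength)
        return max(activation, key=activation.get)

    links = {('wings', 'bird'): 1.0, ('wings', 'plane'): 1.0,
             ('feathers', 'bird'): 1.5, ('metal', 'plane'): 1.5}
    obs = [('wings', 0.9), ('feathers', 0.4), ('metal', 0.2)]
    print(best_match(obs, links))   # bird -- despite the weak 'metal' evidence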
Gestalt recognition

In the preceding paragraphs we looked only at bottom-up recognition, perhaps modified by a bit of top-down priming to help expected answers. Real-world recognition problems present a more complicated picture: the whole object can only be identified on the basis of its features, but the features can only be identified in relation to one another and to the emerging picture of the whole; if taken out of context, each feature is ambiguous (Palmer, 1975). There is usually a single answer -- a set of identities for the whole and for each of the parts -- that is much better than any other, but this cannot be found by pure bottom-up or pure top-down processing: instead, like the solution of a set of simultaneous equations, it must either emerge as a whole or be found by laborious iteration. There may be many levels of features and sub-features, with a complex network of inter-level constraints.

Here the Boltzmann machine is in its element. The observations and expectations provide the inputs to the network. The knowledge about the plausibility of each possible interpretation is stored in the weights within the network. The problem is to combine these sources of information rapidly and correctly. The inputs define one potential energy function over possible states of the network, and the weights define another. The statistically optimal solution can be found by adding the functions together and finding the global minimum (Hinton and Sejnowski, 1983b). This is exactly what the Boltzmann machine does.

On paper, then, the Boltzmann machine looks very promising for recognition tasks of this sort, but more analysis and some large-scale simulations are needed in order to determine whether this promise is realistic. A deterministic value-passing machine like Thistle might be able to get comparable results, but programming it to do so would be a very difficult task because there is no known learning procedure, and great care would have to be taken to avoid local minima that would trap a deterministic iterative search. Marker-passing systems exhibit the same limitations here that we saw in best-match recognition; they are inappropriate for this sort of task.

Recognition under transformation

Sometimes the problem is not just to recognize a whole object and its features at once, but to do this even though the object has undergone a complex transformation. In vision, for example, we must match the image against a set of stored, viewpoint-invariant shape descriptions and to do this we must apply transformations like translation, rotation, scaling, and perhaps other, non-rigid transformations (Hinton, 1981c). Once again, we are trying to make many choices at once in order to find a combination of choices that gives us the best match. Some of the choices are made over smooth continuous domains (the transformations) and some are discrete choices (the description chosen from memory). Once again, the Boltzmann machine should excel at this task, but must be tested; the Thistle machine might be able to do the job but would require tricky programming; the NETL machine is out of the game.

Many other computational tasks could be added to the list, but these are the ones that currently seem most important to us. None of the architectures we have explored can do a good job on all of these tasks.
This analysis suggests two goals for the immediate future: first, to explore more thoroughly the computational properties of the Boltzmann architecture, especially when applied to large real-world tasks; second, to try to find some way to combine, in a single system, the "gestalt recognition" of the Boltzmann machine, the precise set operations of NETL-style marker passing, and the flexible sequential behavior of the traditional von Neumann architecture.

Acknowledgements

We thank the members of the Parallel Models group at CMU and the Parallel Distributed Processing group at UCSD for helpful discussions.

References

Anderson, J. R. The Architecture of Cognition. Harvard University Press, 1983.

Berliner, H. J. On the construction of evaluation functions for large domains. In Proceedings of the 6th International Joint Conference on Artificial Intelligence. Tokyo, Japan, August 1979.

Davis, L. S. & Rosenfeld, A. Cooperating processes for low-level vision: A survey. Artificial Intelligence, 1981, 3, 245-264.

Fahlman, S. E. NETL: A System for Representing and Using Real-World Knowledge. Cambridge, Mass.: MIT Press, 1979.

Fahlman, S. E. Three flavors of parallelism. In Proceedings of the Fourth National Conference of the Canadian Society for Computational Studies of Intelligence. Saskatoon, Saskatchewan, May 1982.

Hewitt, C. E. The apiary network architecture for knowledgeable systems. In Proceedings of the Lisp Conference. Stanford, August 1980.

Hillis, W. D. The connection machine. T.R. 646, Cambridge, Mass.: MIT A.I. Lab, 1981.

Hinton, G. E. Implementing semantic networks in parallel hardware. In G. E. Hinton & J. A. Anderson (Eds.), Parallel Models of Associative Memory. Hillsdale, NJ: Erlbaum, 1981a.

Hinton, G. E. Shape representation in parallel systems. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vol. 2. Vancouver BC, Canada, August 1981b.

Hinton, G. E. A parallel computation that assigns canonical object-based frames of reference. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vol. 2. Vancouver BC, Canada, August 1981c.

Hinton, G. E. & Sejnowski, T. J. Analyzing cooperative computation. In Proceedings of the Fifth Annual Conference of the Cognitive Science Society, Rochester NY, May 1983a.

Hinton, G. E. & Sejnowski, T. J. Optimal perceptual inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington DC, June 1983b.

Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA, 1982, 79, 2554-2558.

Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. P. Optimization by simulated annealing. Science, 1983, 220, 671-680.

Palmer, S. E. Visual perception and world knowledge: Notes on a model of sensory-cognitive interaction. In D. A. Norman & D. E. Rumelhart (Eds.), Explorations in Cognition. San Francisco: Freeman, 1975.
An Object-Oriented Simulator For The Apiary

Henry Lieberman
Artificial Intelligence Laboratory
Laboratory for Computer Science
Massachusetts Institute of Technology
Cambridge, Mass. 02139 USA
Arpa Network Address: HENRY@MIT-AI

Abstract

This paper describes a simulator for the proposed Apiary, an object-oriented, message passing parallel machine for artificial intelligence applications, using the actor model of computation. The simulator implements an interpreter for the lowest level "virtual machine language" of the Apiary, specifying computations in terms of creating objects and sending messages rather than loading and storing registers. The simulator is itself programmed in the object-oriented style advocated by the actor philosophy, allowing experimentation with alternative implementation mechanisms without disturbing the behavior of the simulation. Technical details in the paper assume some familiarity with object-oriented programming and the actor formalism.

Paper category: Support Software and Hardware

1. Should a parallel machine for AI be like a parallel machine for physics?

What does it mean to build a machine optimized for artificial intelligence? Let's look at the process of building specialized machines in other domains. Mathematics and physics, like AI, are areas which have important problems where solutions are limited by constraints on computing power. In these areas, an accepted methodology for optimizing machines involves identifying the inner loop of some interesting problem, a small piece of code that takes a large percentage of computing resources. Then, this inner loop is implemented at as low a level as is feasible, preferably in microcode or directly in hardware. If what's taking the time in your problem is doing FFTs, build an FFT machine.

Can AI use this approach? Probably not. We conjecture that AI doesn't have a simple "inner loop", that an AI machine will have to be a fast "general purpose" problem solver, just as people are. The difference is that in physics problems the patterns of computation tend to be static and predictable, whereas in AI the patterns of computation are likely to be dynamic and therefore unpredictable. An AI program attempting to solve a problem may have no idea which one of a number of heuristics will be useful before it starts to work on the problem. It may even have to learn or invent new solution methods as it goes along. Some specialized algorithms will undoubtedly be useful, such as pattern matching, set intersection and searching, but probably no one algorithm will be so dominant as to warrant tuning an AI machine to just that algorithm.

So what can you do to optimize a machine for unpredictable computations? First, you optimize the machine to take advantage of large amounts of parallelism. It will soon be more important to take advantage of the potential parallelism in a computation than to minimize the number of machine cycles used by a computation. It is important to optimize for flexibility, avoiding any sort of centralized control which might become a bottleneck. A consequence is that all resources in the machine should be allocated dynamically, including both memory and processor resources. Work should be distributed among parts of the machine as evenly as possible, to take maximum advantage of parallelism. Rather than dedicating special purpose hardware to particular algorithms, it is preferable to have many general purpose processors able to run parts of algorithms as the need arises.
Computations should be able to move from processor to processor, even while they are running. Stored objects should be able to move from the memory of one processor to the memory of another processor without affecting programs that use the objects. The programmer should be able to program the machine pretending that an "infinite" number of processors are available, just as garbage collection and virtual memory let the programmer pretend an "infinite" number of memory cells are available. The system should time-share available physical processors, just as virtual memory systems time-share the use of physical memory. Simple allocation strategies with good average behavior [like the least-recently-used paging algorithm] should be used to manage resource allocation. These are the design principles that serve as our criteria for a parallel machine for AI. The actor model of computation, described in [1], [3], [4], [5], provides a basis for designing a machine which will meet these criteria.

2. A simulator helps us gain experience with unconventional machine architectures

The basic von Neumann machine architecture has been around for over thirty years. A tremendous amount of experience in hardware design, systems programming, debugging, and programming style has been built up over the years. Some of this experience will carry over to the new generation of parallel machines, but some of it will not. The construction of a simulator has gained us considerable experience in discovering how a parallel object-oriented machine will differ from the machines of today.

The fundamental components of a von Neumann machine are registers, the basic data structures bit strings, and the basic actions loads and stores of registers. An Apiary will be built of actors as the fundamental units, with message passing as the sole action and means of communication between actors. Accordingly, we have outfitted the simulator with an object-oriented instruction set, where instructions specify the creation of actors and sending of messages. We have arranged that even primitive operations like addition of numbers will obey the message passing protocol, so that new implementations of system data types can always be added to an existing system. The introduction of parallelism requires conceptual changes and provides challenges to our implementation. We have implemented mechanisms for migration of actors and load balancing. We have begun to explore the special problems of debugging programs in a parallel environment, an area long neglected. Details on these issues will follow later in the paper.

3. The simulator itself is an experiment in object-oriented programming

If we believe that object-oriented programming is a good general-purpose programming methodology, then it should be good for putting together a simulator for an object-oriented machine! We are fortunate, indeed, that the Lisp Machine, on which the simulator is implemented, has many features which support a kind of object-oriented programming style; it extends conventional Lisp by adding a new flavor data type. Regrettably, we cannot use the flavor SEND operation to model message passing between actors. Primitive Lisp functions like + do not operate on flavors, and the flavor implementation relies on Lisp stacks, making it unusable in the presence of parallelism. There is, of course,
a performance penalty in using objects down to a very low level in our machine, but the advantages are numerous. Foremost among them is the ability to experiment with implementation alternatives. As an example, transmission of actors across physical machines is done by sending messages to TRANSMIT and RECEIVE actors representing the connections between the machines. We have two completely different implementations of this: one uses the Chaosnet, a packet-switched local network, the other a dedicated hardware bus coupler. Switching between alternatives does not affect any other code in the simulator. The object-oriented philosophy also facilitates instrumentation of the simulator. Any object can be replaced by a new version with the same message passing behavior, but which also records activities for later display or analysis, without affecting the simulator's operation. The simulator can record the number of events, number of actors created, average size of actors, and other information for metering performance.

4. Parallel processing is simulated on a serial machine

The Apiary simulator runs on one or more Lisp Machines. A simulated Apiary with any number of processors can be run on a single Lisp Machine, or several machines, each physical machine simulating a subset of the Apiary processors. Since a single Lisp Machine is a sequential computer, we must simulate the effect of running several processors concurrently in software. We have opted not to use the Lisp Machine's PROCESS objects to implement parallelism among Apiary processors, primarily because Lisp Machine process switching is inefficient and because of the lack of debugging tools for parallel programs. Instead, parallelism is simulated by a TICK mechanism. A TICK is the smallest quantum of time in the Apiary, the "cycle time" of a processor. When the object representing a physical processor receives a TICK message, it performs one primitive event, causing an actor to receive a message. The Apiary distributes tick messages among simulated processors. We do not rely on the presence of a global clock, or synchronization between ticks on different processors.

5. The architecture of an Apiary worker

Each individual processor in the Apiary is called a WORKER, and the simulator contains a worker object to represent each processor. Each worker is connected to a list of NEIGHBORS, a small number of other workers in the Apiary. The internal structure of each worker involves several subprocessors: a COMMUNICATIONS PROCESSOR, which sends and receives messages between workers, and one or more WORK PROCESSORS which run programs. GARBAGE COLLECTION processors may perform steps of an incremental, real-time garbage collection in parallel with the work processors [6]. Instead of having registers as in a conventional machine, the "machine state" of a worker is represented by an object called a TASK. A task is the most fundamental unit of "work to be done" in the Apiary, representing the reception of a single message by a target actor. It is very important for a parallel machine that the machine state be encoded in objects of small size. Process switching time can be slowed if the machine must switch between states comprising large numbers of registers. Each worker has a WORK-QUEUE, a list of tasks representing all the computations that the worker may perform concurrently at a given moment. Work processors may take tasks from the queue for execution.
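A minimal sketch of the TICK discipline and the work queue, under invented structure and slot names (the real simulator's worker objects are much richer than this):

;;; Sketch of the TICK discipline: one primitive event per tick.

(defstruct task target message)          ; "work to be done": one delivery
(defstruct worker work-queue neighbors)

(defun deliver (target message)
  "Stub standing in for running TARGET's script on MESSAGE; it would
return a list of new tasks."
  (declare (ignore target message))
  '())

(defun tick (worker)
  "Perform at most one primitive event: deliver one queued message.
New tasks go back on this worker's queue (or, for load balancing,
to a neighbor -- omitted here)."
  (let ((task (pop (worker-work-queue worker))))
    (when task
      (dolist (new-task (deliver (task-target task) (task-message task)))
        (setf (worker-work-queue worker)
              (nconc (worker-work-queue worker) (list new-task)))))))

(defun run (workers n-ticks)
  "Distribute ticks round-robin; no global clock or synchronization
between ticks on different workers is assumed."
  (dotimes (i n-ticks)
    (tick (elt workers (mod i (length workers))))))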
The work queue must be synchronized to allow access from more than one work processor.

[Figure: Structure of an Apiary worker]

6. The Apiary instruction interpreter is based on objects rather than bit strings

Consider a conventional von Neumann machine. The behavior of the machine is usually defined in terms of an instruction interpreter, or virtual machine. This is an algorithm that takes a machine state, defined in terms of the contents of the relevant machine registers, and an instruction in the binary machine language, and yields a new state of the machine, perhaps by changing registers, the program counter, etc. The state of the machine is represented by an array of indexed memory locations, each one containing a fixed-length bit string. The instructions are represented by fixed- or variable-length bit strings, and cause the contents of various memory locations to be altered to obtain the next machine state.

The heart of the Apiary consists of an instruction interpreter or "virtual machine" for each work processor. In contrast to von Neumann machines, the memory of Apiary workers is considered to consist, at the virtual machine level, of objects rather than bit strings. Although at the lowest level objects must be encoded as bit strings, Apiary instructions do not treat them as such. For example, there are no instructions which load and store registers. The instructions themselves are also represented as objects, and the components of instruction objects replace "addressing modes" in conventional instructions. The execution of each instruction object is expected to produce zero or more new instruction objects. An instruction producing only one new instruction corresponds to the case of a traditional machine sequentially executing instructions. More than one new instruction indicates concurrency or "forking". Finally, an instruction generating no new instructions indicates the termination of a process. This method of implementing the instruction interpreter eliminates the troublesome program counters and side effects to internal registers of conventional machines. Each instruction object specifies a state transition function, from the task which represents the "old state" to a task representing the "new state" of the work processor. The instruction may also result in creating new actors for components of the new task. The new tasks produced by an instruction may either be placed back on the work queue of the worker from which they came, or sent to the work queues of neighboring workers for load balancing.

The lowest-level programs which control what happens when actors receive messages are called scripts, written in terms of these instruction objects. Scripts are the Apiary's "microcode", and are used as the target language for compiling very low level software. Given the "current state" of the work processor, as embodied in a task object, the script produces a list of one or more instructions, which then produce new tasks, and so on.

[Figure: The instruction interpreter of the Apiary]

7. The data architecture of the Apiary

In some sense, there is only one kind of data object in the Apiary, an actor. The implementation, however, distinguishes between rock-bottom actors and scripted actors. A scripted actor is made up of two parts: a procedural part and a data part. The procedural part is a program, the script, that tells the actor how to behave when it receives a message. Scripted actors have their scripts stored explicitly as the first component of the actor data structure.
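A minimal rendering of this two-part structure (representation invented; the FORWARD-TO and rock-bottom cases discussed below are omitted): the script is the procedural part, and the acquaintances -- the data part, described below -- are the local state the script may consult.

;;; Sketch of a scripted actor: an explicit script (procedural part)
;;; plus acquaintances (data part).

(defstruct actor script acquaintances)

(defun receive (actor message)
  "Run the actor's script on an incoming message; the script may
consult the acquaintances in forming its reply or new tasks."
  (funcall (actor-script actor) (actor-acquaintances actor) message))

;; A cell actor remembering one acquaintance, its contents:
(defun make-cell (initial)
  (make-actor :script (lambda (acq msg)
                        (if (eq msg 'contents) (first acq) 'unknown))
              :acquaintances (list initial)))

;; (receive (make-cell 42) 'contents)  =>  42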
What language are scripts written in? In the simulator, primitive scripts are written in the implementation language, Lisp. In a hardware Apiary, the most primitive scripts are written in the machine's "microcode", directly accessing hardware primitives. The user may create new objects to serve as scripts, executed by an interpreter written in the implementation language. The acquaintances are the "data part" of an actor. These are a list of actors which are remembered as the actor's local state. The script may access these actors in forming new tasks.

Actors also can migrate from worker to worker. To accomplish this, each actor has a FORWARD-TO component. If non-NIL, all messages intended for that actor are passed along instead to the actor named in the FORWARD-TO part. This may result in sending the message across workers.

But not all actors can be represented as a structure containing script and acquaintances. At some point, we must have rock-bottom objects like numbers that can be operated on directly by the hardware without going through the message passing protocol. If a rock-bottom actor appears in the target component of a task, the script for that kind of primitive object is retrieved from a table of such scripts, indexed by type. The "acquaintances" in this case are simply the underlying machine representation of the object. In the simulator, Lisp objects such as numbers, symbols, and lists are rock-bottom actors. In a hardware Apiary, these would be a set of data types distinguished by type codes. For example, the script for a rock-bottom number can receive a message asking it to add itself to another number. It checks its operand to see if it, too, is a rock-bottom number. If so, the machine operation for adding two numbers can be safely used. If the other number is a scripted actor, then it is given the responsibility of figuring out how to perform the add operation by sending the add message to it, passing in the original target number as an operand.

8. Scripter is a high-level "microcode compiler" for writing scripts of actors

What corresponds to the "microcode" on an Apiary machine are programs for a set of scripts for primitive actors. These perform the lowest level operations of the machine, like adding two numbers, constructing lists, extracting elements from lists, or changing the bits of the display screen. Programming directly in the language which drives the virtual machine of the simulator is, unfortunately, not very convenient. Because the object-oriented philosophy is pressed to such a low level of the machine, even small programs require code for creating large numbers of objects. A simple FACTORIAL takes about three pages, which is probably the world's record for the longest FACTORIAL program! A higher level "microcode compiler" called Scripter uses Lisp macros to compile more concise programs to code which drives the simulator [or eventually, hardware] directly. Unlike most current microcode compilers, the code looks more like an object-oriented variant of Lisp than an assembly language. Most function calls are replaced by the message sending primitive ASK. Here is the Scripter code for FACTORIAL, in its entirety:

(DEFSCRIPT FACT (N)
  (IF (ASK N (A ZERO?))
      1
      (ASK (ASK FACT (ASK N (A 1-)))
           (A * (WITH MULTIPLIER N)))))

Scripter still isn't a "user-level" language, however, since it doesn't have an interpreter, and is allowed to "cheat" and call Lisp [eventually, hardware] primitives directly without going through message passing protocol. However,
it is the responsibility of any Scripter-written scripts to make any "cheating" completely transparent to user-written code. Scripter scripts must always check before performing any primitive operations on objects, and revert to message sending if they encounter user-defined objects. This check will be performed by macros supplied by Scripter.

9. Scripter provides several services which aid the script writer

One of the services performed by Scripter is to convert code written in the usual functional style of Lisp to continuation style, automatically creating continuation actors as necessary. Ordinary function call/return control structure is bidirectional. Whenever a function is called, a return address is pushed on a stack, and popped upon return from the function. Message passing, by contrast, is unidirectional, and there are no stacks in a message passing machine. The functional style is achieved by continuation passing, where a request event, which corresponds to a function call, contains a customer. The customer is an actor which will receive the returned value of the function as a message in a reply event and will "continue" the computation. Scripter automatically figures out which actors need to be saved as acquaintances of customers, and provides syntax for accessing acquaintances of actors using simple variable references. An expression

(ASK (ASK TARGET-1 MESSAGE-1) (ASK TARGET-2 MESSAGE-2))

would be translated as:

Create a REQUEST-INSTRUCTION object,
sending MESSAGE-1 to TARGET-1 with a new customer CUSTOMER-1.
CUSTOMER-1 receives a message ANSWER-1
[ANSWER-1 is the result of (ASK TARGET-1 MESSAGE-1)]
and sends MESSAGE-2 to TARGET-2,
with a new customer CUSTOMER-2, which has an acquaintance ANSWER-1.
CUSTOMER-2 receives the result of (ASK TARGET-2 MESSAGE-2),
and sends it to ANSWER-1.

Scripter provides macros which abstract out common patterns of message passing. For example, to stick to our uniform actor protocol, conditionals must be done by message passing. Scripter provides an IF macro which replaces the traditional T-or-NIL test with sending an IF message containing the two alternatives to the result of the predicate part of the conditional. Scripter tries to make the translation between source code and simulator code reversible, to aid debugging. Each piece of translated code has a component which stores the source code which produced that target code. Customers created by Scripter remember the source code which produced the value which they receive. The correspondence between source and target code is not one-to-one, since some Scripter constructs may produce more than one instruction for the simulator. The ability to even partially reverse the transformation performed by Scripter has proven valuable in debugging Scripter's output.

10. The simulator incorporates a window-oriented "machine language" stepper

How do we tell if the simulator is performing correctly? One tool is a "machine language" stepper for the Apiary virtual machine, a parallel generalization of traditional machine language steppers such as the classic DDT for the PDP-10/20 machines. It is not intended to replace tools for debugging user programs, but rather to test whether the simulator works, test the output of higher level language compilers, and act as a debugger of last resort for particularly hard-to-find bugs. It gives a "worker's-eye view" of the Apiary, using a separate Lisp Machine window to display the state of each worker.
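The stepping controls described in the next paragraphs -- single events, a history that permits backward stepping, and running until an event satisfies a condition -- might be rendered schematically as follows (all structure and helper names here are invented, not the simulator's):

;;; Sketch of stepper-style control.  STEP-ONE-EVENT is a stub
;;; standing in for advancing some worker by one primitive event.

(defstruct apiary workers history)

(defun step-one-event (apiary)
  "Stub: advance one tick and return a description of the event."
  (declare (ignore apiary))
  (list :kind 'reply :value 120))

(defun step-until (apiary predicate &key (max-steps 10000))
  "Run until an event satisfies PREDICATE, keeping a history so the
simulation can also be stepped backward."
  (dotimes (i max-steps nil)
    (let ((event (step-one-event apiary)))
      (push event (apiary-history apiary))
      (when (funcall predicate event)
        (return event)))))

;; e.g. run until a REPLY event appears:
;; (step-until (make-apiary) (lambda (e) (eq (getf e :kind) 'reply)))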
For tasks created by Scripter code, the stepper may also display the source code which corresponds to that event. The next illustration shows a simulated Apiary with two workers about to work on (PARALLEL-FACTORIAL 5). Only one worker is actually busy at the moment.

[Figure: stepper display of a two-worker Apiary beginning (PARALLEL-FACTORIAL 5); one worker window shows a REQUEST event for an actor with a PARALLEL-FACTORIAL script, the other worker is idle, and a window at the bottom logs an event being transmitted to neighbor 1.]

The first line in each worker window indicates the kind of event taking place -- usually a REQUEST, REPLY or COMPLAINT event. The next few lines contain descriptions of the event. A window at the bottom right displays the communications traffic between workers. Each object in the Apiary simulator accepts messages to produce an English-like description of itself for display in the window. The following illustration shows the FACTORIAL computation at a later stage. The program has broken up the task of computing factorial of 5 into the tasks of computing the product of numbers from 1 to 3 and the product of numbers from 4 to 5, in parallel. Next, we will decompose the product from 1 to 3 into multiplying 3 by the product of 1 to 2. The load balancing algorithms have spread work from one worker to another, so now both workers are busy.

We can step the whole Apiary or individual workers, one event at a time. Each worker keeps a history of its states, so that workers can be stepped backward as well as forward. The Apiary can be run continuously, either with or without displaying events after each step. It is also useful to be able to specify a description of a certain event, and tell the stepper to run until an event satisfying the condition is encountered.

[Figure: stepper display at a later stage of the computation; both worker windows show REQUEST and REPLY events for RANGEPRODUCT actors and their customers, with the stepper commands (Step forward, Step back, Start/Stop running, Run quietly/Stop) along the bottom.]

11. Acknowledgments

Major support for this research was provided by the System Development Foundation. Additional support was provided in part by ARPA under ONR contract N00014-80-C-0505. I would like to thank Carl Hewitt for originating the actor and Apiary concepts, and support in their implementation.
I would like to thank Charles Smith of SDF for his help in obtaining support for this research. Jon Amsterdam, Dan Theriault, and Roy Nordblom contributed code to the Apiary simulator and Scripter language. I am grateful to Gene Ciccarelli, Al Davis, Peter deJong, Mike Farmwald, Peter Fiekowsky, Peter Hart, Kenneth Kahn, William Kornfeld, Carl Mikkelsen, Henry Sowizral, and Daniel Weld, for Apiary-related discussions.

References

1. Carl Hewitt. Viewing Control Structures As Patterns of Passing Messages. In R. Brown and P. H. Winston (Eds.), Artificial Intelligence, an MIT Perspective, MIT Press, 1979.
2. Carl Hewitt. The Apiary Network Architecture for Knowledgeable Systems. Proceedings of the First Lisp Conference, Stanford University, August 1980.
3. Henry Lieberman. A Preview of Act 1. AI Memo 625, MIT Artificial Intelligence Laboratory, April 1980.
4. Henry Lieberman. Thinking About Lots of Things At Once Without Getting Confused. AI Memo 626, MIT Artificial Intelligence Laboratory, April 1980.
5. Henry Lieberman. Machine Tongues IX: Object Oriented Programming. Computer Music Journal 6, 3 (Fall 1982).
6. Henry Lieberman and Carl Hewitt. A Real Time Garbage Collector Based on the Lifetimes of Objects. Communications of the ACM (June 1983).
Knowledge-based Programming Using Abstract Data Types

Gordon S. Novak Jr.
Heuristic Programming Project
Computer Science Department
Stanford University
Stanford, CA 94305

1. Abstract

Features of the GLISP programming system that support knowledge-based programming are described. These include compile-time expansion of object-centered programs, interpretation of messages and operations relative to data type, inheritance of properties and behavior from multiple superclasses, type inference and propagation, conditional compilation, symbolic optimization of compiled code, instantiation of generic programs for particular data types, combination of partial algorithms from separate sources, knowledge-based inspection and editing of data, menu-driven interactive programming, and transportability between Lisp dialects and machines. GLISP is fully implemented for the major dialects of Lisp and is available over the ARPANET.

2. Introduction

A compiler can be viewed as a program that, using knowledge embodied as data and procedures, converts a specification of a program into an executable program. Compilers for traditional programming languages have embodied a relatively small amount of knowledge and have not been easily extensible by the user. Restrictions imposed by traditional languages, for example, that a calling program and subroutine must be written in terms of identical data types, have inhibited the accumulation of programming knowledge in the form of reusable programs. The power of a programming system can be measured by the leverage it provides, that is, by its ability to convert abbreviated specifications into sizable programs. To increase the power of compilers, it is necessary to increase the knowledge they contain and to make user-specified knowledge effective during the compilation process. This paper describes the knowledge used by the GLISP compiler and its associated programming systems, focusing on features that permit reusability of programs and accumulation of programming knowledge. The GLISP compiler provides a high degree of leverage in converting GLISP programs into efficient Lisp code.

1 This research was supported in part by NSF Grant SED-7912803 in the Joint National Science Foundation - National Institute of Education Program of Research on Cognitive Processes and the Structure of Knowledge in Science and Mathematics, and in part by the Defense Advanced Research Projects Agency under Contract MDA-903-80-C-007.

2 Author's present address: Computer Science Department, University of Texas at Austin, Austin, TX 78712. Phone (512) 471-4353. Net address CS.NOVAK@UTEXAS-20.

3. GLISP

GLISP [5] [6] [7] is a high-level language that includes Lisp as a sublanguage and is compiled into Lisp. It provides a powerful abstract data-type mechanism that allows the structure and computed properties of objects to be described. Properties, predicate adjectives, and messages can be inherited from multiple superclasses. Compilation of properties is recursive at compile time and is performed relative to the types of the objects in question; this allows the same properties and behavior to be inherited by objects that are represented differently. GLISP provides an object-centered programming system that allows messages to be interpreted at run time.
A major advantage of GLISP compared to other object-centered programming systems is that when the type of an object is known, the GLISP compiler can determine the appropriate response function for a message to the object at compile time and can compile a direct call to that function or macro-expand it in-line. This provides the representational power of object-centered programming with no penalty in execution speed compared to ordinary Lisp.

4. Related Work

The use of messages and computed properties in GLISP is related to the use of messages in object-centered programming (OCP) systems [1] [3] [4]; GLISP contains a system for interpretation of run-time messages to objects and thus supports OCP. However, OCP has several inherent problems. The first problem is that object-centered programs tend to be slow compared to programs written in the underlying language (typically 20 to 50 times slower). One reason for this slowness is that messages must be interpreted at run time; when the response to a message is a small amount of code (e.g., for data access), this overhead becomes a large fraction of execution time. Even with special hardware support, the overhead of message lookup is a significant cost. A potentially more serious performance problem is caused by the referential opacity of OCP. Most program optimizations require some global knowledge about the program; that is, most optimizations are of the form "If both operations A and B are to be performed, there is a way to do A and B together that is cheaper than doing each separately." For example, if one wishes to make a list of the female "A" students in a class, it may be cheaper to compute the set of students who are both female and "A" students than to compute the two sets separately and intersect them. In an OCP system in which the female students and "A" students were found by sending messages, however, it would not be possible to perform this optimization because it would not be known how the two sets were computed.
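In plain Lisp, the two versions of the example look like this (FEMALE-P and A-STUDENT-P are invented predicates over an invented student representation); the point is that only a compiler that can see through both "messages" can produce the second, fused form:

;;; The female-"A"-student example, rendered in plain Lisp.

(defun female-p (s) (eq (getf s :sex) 'female))       ; assumed
(defun a-student-p (s) (eq (getf s :grade) 'a))       ; assumed

(defun two-pass-version (students)
  ;; What opaque message sends force: build both sets, then intersect.
  (intersection (remove-if-not #'female-p students)
                (remove-if-not #'a-student-p students)))

(defun fused-version (students)
  ;; What a compiler with global knowledge can produce: one pass,
  ;; one conjoined test, no intermediate sets.
  (remove-if-not (lambda (s) (and (female-p s) (a-student-p s)))
                 students))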
In this section, compiler features that are necessary for compilers to perform such specializations are discussed; these features make it possible for standard algorithms to be written as generic programs and reused for different applications. 5.1. Data-type Independence Most programming languages require that programs be written in terms of specific data types. Lisp is particularly bad in this regard because data access is performed by function calls; thus, dependencies on particular storage structures are built into program code. In GLISP, when a procedure is inherited as the response to a message or property reference, the static types of the actual arguments are substituted for the types of the formal arguments of the procedure; the inherited procedure can be specialized for the actual argument types either by open compilation (analogous to macro expansion) within the referencing program or by specializing it as a closed procedure. The GLISP object descriptions provide a level of indirection between the names of properties and the representation of the properties. Since substructures and computed properties are referenced in the same way in GLISP program code, a procedure can reference a property that is stored in one use and computed in another. Additional compiler features are needed to achieve data- representation independence. Programs often write data as well as reading it. GLISP can “invert” an algebraic expression that ultimately involves only a single occurrence of stored data; this allows a program to “store” into a computed property so long as that property has a single “equivalent” property that is stored. For example, given a CIRCLE object whose RADIUS is stored, it is legitimate to assign a value to the AREA of the CIRCLE; the compiler produces code to store a RADIUS value corresponding to the specified AREA value. Programs also create new data objects. Vector addition, for example, involves the creation of a new vector whose elements are formed by adding the components of the input vectors. To create a new vector that is like the input vectors, it is necessary to be able to specify a type that is the same as the type of another datum. GLISP provides a TYPEOF operator, which returns the compile-time type of the expression that is its argument. Thus, a generic vector-addition function can be used for different kinds of vectors: (VECTORPLUS (GLAMBDA (U,V:VECTOR) (A (TYPEOF U) WITH X q U:X + V:X Y = U:Y + V:Y))) A particular kind of vector, a FVECTOR, can be described as follows: (FVECTOR (CONS (Y BOOLEAN) (X STRING)) SUPERS (VECTOR)) Given an expression “F+G”, where F and G are FVECTORs, the compiler will produce the code: (CONS (OR (CAR F) (CAR G)) (CONCAT (CDR F) (CDR G))) The operator “+” is defined for VECTORS as the function VECTORPLUS; this definition is inherited by FVECTORs, so that the function VECTORPLUS is open-compiled with the type FVECTOR for its arguments. VECTORPLUS produces a new object whose type is the same as the type of its first argument; the “t” operators within VECTORPLUS are interpreted according to the types of the components of the FVECTOR type, so that the BOOLEAN components are ORed and the STRING components are concatenated. 5.2. Multiple Views of Data Real program data are often not of a single, simple type but may be viewed in several different ways. GLISP provides mechanisms whereby features of objects may be inherited from multiple views. The first such mechanism is inheritance from multiple superclasses. 
When a property is inherited from a superclass, it is compiled recursively in the context of the original object; the references made by the definition of the inherited property may then involve inheritance from different superclasses. This feature allows the user to have a number of shallow inheritance hierarchies rather than one deep hierarchy; the shallow hierarchies tend to be "cleaner" because each deals with only a limited facet of behavior. In some cases, it is desirable to view an object as an object of another type without actually materializing the data involved in the view type. GLISP provides a virtual view mechanism that permits such views. For example, in the GEV editor, an item of the displayed data is viewed as containing areas on the display screen (the area around the item's name and the area around its displayed value). Using virtual views of the item, these areas are defined in terms of computed quantities (e.g., the number of characters in the item's name determines the width of the name area); the procedure that tests whether a point is inside an area can then be inherited to test whether the mouse pointer is selecting the item. The code that is produced by the compiler is written in terms of the data that is actually stored, so that an "area" datum does not need to be constructed in order to use the inherited generic procedure.

5.3. Combination of Algorithms

Real programs are typically composed of a number of smaller component algorithms that are combined and specialized for their particular use. For example, a number of iterative programs can be viewed as being composed of the following components:

Iterator:  Collection -> Element*
Filter:    Element -> Boolean
Viewer:    Element -> View
Collector: Initialize: nil -> Aggregate
           Accumulate: Aggregate X View -> Aggregate
           Report:     Aggregate -> Result

The Iterator enumerates the elements of the collection in temporal order; the Filter selects the elements to be processed; the Viewer views each element in the desired way; and the Collector collects the views of the elements into some aggregate. For example, finding the average monthly salary of plumbers in a company might involve enumerating the employees of the company, selecting only the plumbers, viewing an employee record as "monthly salary", and collecting the monthly salary data for the average. GLISP allows the operation of such an iterative program to be expressed as a single generic function; this function can then be instantiated for a given set of component functions to produce a single Lisp function that performs the desired task. Instantiation of generic functions is somewhat similar to instantiation of program plans in the Programmer's Apprentice [8]; in GLISP, there is a single language for both generic and concrete programs, and instantiation occurs by recursive expansion of code at compile time. Such an approach allows programs to be constructed very quickly. The Iterator for a collection is determined by the type of the collection, and the element type is likewise determined. A library of standard Collectors (average, sum, maximum, etc.) is easily assembled; each Collector constrains the type of View that it can take as input. The only remaining items necessary are the Filter and Viewer; these can easily be acquired by menu selection using knowledge of the element type (as is done in GEV, described below).
6. Compiler Knowledge

The GLISP compiler runs within a variety of Lisp systems; it embodies knowledge about the underlying Lisp system that helps make GLISP programs transportable. Implementors of Lisp systems have unfortunately introduced many variations in the names, syntax, and semantics of even the basic system functions of Lisp. GLISP performs the mapping from operations on various data types into the corresponding function calls in the host Lisp system; it also defines basic data types with GLISP object descriptions, so that standard properties of these data types are available in a dialect-independent manner. The compiler performs type inference for Lisp system functions, so that the types of their results will be known without requiring explicit declarations. GLISP encourages the development of abstract data-type packages that mediate the interaction between user programs and idiosyncratic system features. For example, the GEV system uses Window/Menu data-type packages that allow the same code to run on Lisp machines with bitmap displays and on time-sharing systems with ordinary terminals.

The compiler performs symbolic simplification of the Lisp code it generates; this improves efficiency and allows the user to use the representational power of GLISP without paying a run-time penalty. Particular attention is paid to optimization of set operations and loops over sets to avoid unnecessary construction of intermediate sets. Symbolic simplification also provides conditional compilation in a clean form. The user may declare to the compiler that certain data have values that are considered to be compile-time constants. Compile-time execution of operations on constants can produce constant values for conditional tests; symbolic simplification of the resulting code causes unreachable program code to vanish. For large programs with many options, such as symbolic algebra packages, elimination of code for unwanted options can provide substantial savings in program space and execution time without changes to the original source code.

7. GEV: Knowledge-based Data Inspection

GEV (for GLISP Edit Value) is a program, written in GLISP, that interprets Lisp data according to its GLISP data-type descriptions and displays it in readable form in a window. The display contains three sections: the edit path that led to the current object, the data that are actually stored in the object, and computed properties of the object. Figure 7-1 shows an example. The user can "zoom in" on an item of interest, which will be displayed in greater detail according to its type description; this allows the user to browse quickly through a semantic network of related data. A data-type description can specify that certain computed properties should be displayed automatically whenever an object of that type is displayed; other computed properties can be requested by menu selection. The SHORTVALUE property of an object is used to display the object when "seen from afar"; for example, the SHORTVALUE for a PERSON object could be defined to be the person's initials. A tilde ("~") indicates that a SHORTVALUE is displayed rather than the actual Lisp value.

[Figure 7-1: A GEV window display, showing an HPP object with its edit path, its stored data (contracts, leader, birthdate, phone), and a command menu (QUIT PROP POP ADJ EDIT ISA PROGRAM MSG).]

GEV allows the user to write looping programs interactively by menu selection.
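The looping programs GEV writes are instances of the iterative decomposition of Section 5.3. As a rough illustration -- plain run-time Lisp with invented names, whereas GLISP actually instantiates a generic function and expands it at compile time -- such a generated averaging loop amounts to:

;;; Run-time rendering of the Iterator/Filter/Viewer/Collector
;;; decomposition.  All names and the plist representation of an
;;; employee record are invented for illustration.

(defun run-iteration (collection iterator filter viewer
                      initialize accumulate report)
  (let ((aggregate (funcall initialize)))
    (dolist (element (funcall iterator collection))
      (when (funcall filter element)
        (setf aggregate
              (funcall accumulate aggregate (funcall viewer element)))))
    (funcall report aggregate)))

;; Average monthly salary of plumbers: the Collector's aggregate is
;; a (count . sum) pair, reported as sum/count.
(defun average-plumber-salary (employees)
  (run-iteration employees
                 #'identity                               ; Iterator
                 (lambda (e) (eq (getf e :job) 'plumber)) ; Filter
                 (lambda (e) (getf e :salary))            ; Viewer
                 (lambda () (cons 0 0))                   ; Initialize
                 (lambda (agg v)                          ; Accumulate
                   (cons (1+ (car agg)) (+ (cdr agg) v)))
                 (lambda (agg)                            ; Report
                   (if (zerop (car agg))
                       0
                       (/ (cdr agg) (car agg))))))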
When the "program" command is selected, GEV first displays a menu of operations that can be performed. Next, a menu of possible sets over which the program could iterate is displayed. Finally, successive menus are presented to allow the desired property of the object to be selected; this process terminates when the user gives a "done" command or when a terminal value is reached. From these selections, a GLISP program to perform the specified operations is written, compiled, and run; this process normally takes less than a second. The result is printed and added to the GEV window; Figure 7-2 illustrates this process. The "program" feature allows the user to write significant programs rapidly without knowing the format of the data and without knowing any programming language. Since GEV interprets data according to GLISP object descriptions, it can be used for inspection of any Lisp data for which such descriptions are supplied.

[Figure 7-2: Menu programming in GEV. The window shows an HPP object with its stored data (title, abbreviation, administrator, contracts, executives, budget) and the computed result "AVERAGE BUDGET LABOR OF HPP CONTRACTS = 54000.28".]

8. Summary

GLISP is an integrated programming system that uses declarative knowledge of the implementations of objects to generate code for operations on the objects. Recursive compilation relative to object types provides code efficiency comparable to ordinary Lisp with the representational power of object-centered programming. The GEV system interprets GLISP object descriptions to provide intelligent inspection and editing of data and menu-driven interactive program generation. GLISP and GEV are fully implemented and are being used by a number of university and industrial research labs for implementation of AI systems.

9. How to Obtain GLISP

GLISP and GEV are available without charge over the ARPANET. GLISP files are stored in the directory <GLISP> on the host computer SUMEX-AIM.3 At the time of writing, GLISP is available for Interlisp, Maclisp, Franz Lisp, UCI Lisp, ELISP, and Portable Standard Lisp; Zetalisp and Common Lisp are planned. The manual is available as GLUSER.MSS (Scribe source form) and GLUSER.LPT, and it tells how to obtain the files for the different Lisp dialects. The file GLISP.NEWS contains news on recent developments.

3 The login "anonymous guest" may be used for FTP transfers.

References

1. Bobrow, D. G., and Stefik, M. The LOOPS Manual. Tech. Rept. KB-VLSI-81-13, Xerox Palo Alto Research Center, 1981.
2. Borning, A., and Ingalls, D. A Type Declaration and Inference System for Smalltalk. Proc. 9th Conf. on Principles of Programming Languages, ACM, 1982.
3. Cannon, H. I. Flavors: A Non-Hierarchical Approach to Object-Oriented Programming. Tech. Rept. Working Paper, A.I. Lab, Massachusetts Institute of Technology, October 1981.
4. Ingalls, D. The Smalltalk-76 Programming System: Design and Implementation. 5th ACM Symposium on Principles of Programming Languages, 1978.
5. Novak, G. S. GLISP Reference Manual. Tech. Rept. HPP-82-1, Heuristic Programming Project, Computer Science Dept., Stanford University, February 1983.
6. Novak, G. S. GLISP: A High-Level Language for A.I. Programming. Proc. 2nd National Conference on Artificial Intelligence, Carnegie-Mellon University, 1982.
7. Novak, G. S. "GLISP: A Lisp-based Programming System with Data Abstraction." A.I. Magazine 4, 3 (August 1983).
8. Waters, Richard C.
"The Programmer's Apprentice: Knowledge Based Program Editing." IEEE Transactions on Software Engineering SE-8, 1 (January 1982).
THE ADVANTAGES OF ABSTRACT CONTROL KNOWLEDGE IN EXPERT SYSTEM DESIGN

William J. Clancey
Heuristic Programming Project
Computer Science Department
Stanford University
Stanford, CA 94305

ABSTRACT

A poorly designed knowledge base can be as cryptic as an arbitrary program and just as difficult to maintain. Representing control knowledge abstractly, separately from domain facts and relations, makes the design more transparent and explainable. A body of abstract control knowledge provides a generic framework for constructing knowledge bases for related problems in other domains and also provides a useful starting point for studying the nature of strategies.*

I INTRODUCTION

The quality of a knowledge base depends not only on how well it solves problems, but also on how easily its design allows it to be maintained. Easy maintenance--the capability to reliably modify a knowledge base without extensive reprogramming--is important for several reasons:

o Knowledge-based programs are built incrementally, based on many trials, so modification is continually required, including updates based on improved expertise;
o A knowledge base is a repository that other researchers and users may wish to build upon years later;
o A client receiving a knowledge base constructed for him may wish to correct and extend it without the assistance of the original designers.

A knowledge base is like a traditional program in that maintaining it requires having a good understanding of the underlying design. That is, you need to know how the parts of the knowledge base are expected to interact in problem solving. Depending on the representation, this includes knowing how default and judgmental knowledge interact, whether rule clauses can be reordered, when attached procedures are applied, how constraints are inherited and ordered, etc. One way to provide this understanding is to have the program explain its reasoning, using an internal description of its own design (Davis, 1976), (Swartout, 1977). However, problems encountered in understanding traditional programs--poorly-structured code, implicit side-effects, and inadequate documentation--carry over to knowledge-based programming and naturally limit the capabilities of explanation programs. For example, a knowledge base might arbitrarily combine reasoning strategies with facts about the domain. Implicit, procedurally-embedded knowledge cannot be articulated by an explanation system (Swartout, 1981), (Clancey, 1983) and is not visible to guide the program maintainer (see (Ennis, 1982) for an entertaining study of this problem).

This paper argues that an important design principle for building expert systems is to represent all control knowledge abstractly, separate from the domain knowledge it operates upon. This idea is illustrated with examples from the NEOMYCIN system (Clancey, 1981). There are many scientific, engineering, and practical benefits. The difficulty of attaining this ideal design is also considered.

II WHAT IS ABSTRACT CONTROL KNOWLEDGE?

"Control knowledge" specifies when and how a program is to carry out its operations, such as pursuing a goal, focusing, acquiring data, and making inferences. A basic distinction can be made between the facts and relations of a knowledge base and the program operations that act upon it.
For example, facts and relations in a medical knowledge base might include (expressed in a predicate calculus formulation):

(SUBTYPE INFECTION MENINGITIS)      -- "meningitis is a kind of infection"
(CAUSES INFECTION FEVER)            -- "infection causes fever"
(CAUSES INFECTION SHAKING-CHILLS)   -- "infection causes shaking chills"
(DISORDER MENINGITIS)               -- "meningitis is a disorder"
(FINDING FEVER)                     -- "fever is a finding"

Such a knowledge base might be used to provide consultative advice to a user, in a way typical of expert systems (Duda and Shortliffe, 1983). Consider, for example, a consultation system for diagnosing some faulty device. One typical program operation is to select a finding that causes a disorder and ask the user to indicate whether the device being diagnosed exhibits that symptom. Specifically, a medical diagnostic system might ask the user whether the patient is suffering from shaking chills, in order to determine whether he has an infection. The first description of the program's operation is abstract, referring only to domain-independent relations like "finding" and "causes"; the second description is concrete, referring to domain-dependent terms like "shaking-chills" and "infection". ("Domain-independent" doesn't mean that it applies to every domain, just that the term is not specific to any one domain.) The operation described here can be characterized abstractly as "attempting to confirm a diagnostic hypothesis" or concretely as "attempting to determine whether the patient has an infection." Either description indicates the strategy that motivates the question the program is asking of the user. So in this example we see how a strategy, or control knowledge, can be stated either abstractly or concretely. The following two examples illustrate how both forms of control knowledge might be represented in a knowledge base.

A. An Implicit Refinement Strategy

In MYCIN (Shortliffe, 1976), most knowledge is represented as domain-specific rules. For example, the rule "If the patient has an infection and his CSF cell count is less than 10, then it is unlikely that he has meningitis," might be represented as:

PREMISE: ($AND (SAME CNTXT INFECTION)
               (ILESSP (VAL1 CNTXT CSFCELLCOUNT) 10))
ACTION:  (CONCLUDE CNTXT INFECTION-TYPE MENINGITIS TALLY -700)

The order of clauses is important here, for the program should not consider the "CSF cell count" if the patient does not have an infection. Such clause ordering in all rules ensures that the program proceeds by top-down refinement from infection to meningitis to subtypes of meningitis. The disease hierarchy cannot be stated explicitly in the MYCIN rule language: it is implicit in the design of the rules. (See (Clancey, 1983) for further analysis of the limitations of MYCIN's representation.)

CENTAUR (Aikins, 1980) is a system in which disease hierarchies are explicit. In its representation language, MYCIN's meningitis knowledge might be encoded as follows (using a LISP property list notation):

INFECTION
  MORE-SPECIFIC  ((disease MENINGITIS) (disease BACTEREMIA)...)
  IF-CONFIRMED   (DETERMINE disease of INFECTION)

MENINGITIS
  MORE-SPECIFIC  ((subtype BACTERIAL) (subtype VIRAL)...)
  IF-CONFIRMED   (DETERMINE subtype of MENINGITIS)
IF-CONFfRMED (DETk!!MINE subtype of MCNINGITIS) III C EN’I’AUII, hierarchical rclat~ons among disorders arc explicit (meningitis is a specific kind of infection), and the stratcgics for using the knowledge XC domain-specific (after confirming that the patient 1~1s an infection, determine what more specific discasc he has). This design enables CXNTAUII to articulate its operations better than MYCIN, uhosc hierarchical relations and strategy arc procedurally cmbcddcd in rules. I-Towcvcr, obscrvc that each node of CEN’I’AUll’s hierarchy csscntially rcpcnts il Siilgk smkgy--try to confirm the prcscncc of a child disorder--and the overall strategy of top-down rclincmcnt is not explicit. Aikins has lcrbrlctl CHVI’AUI~‘s strategies, 5ut has not stated them abstractly. By rcprcscnting stratcgics abstractly, it is possible to have a more explicit and non-redundant design. This is what is done in NEOMYCIN. In NEOMYCIN domain relations and strategy arc represented srpara~~ly and strategy is rcprcscntcd abstractly. A typical rule that accomplishes. in part, the abstract task of attempting to confirm a diagnostic hypothesis and its subtypes is shown below. ~Domain Knotvledge> INFECTION CAUSAL-SUBTYPES (MENINGITIS BACTEREMIA . ..) MCNINGITIS CAUSAL-SUBTYPES (BACTERIAL VIRAL . ..) (Abstmct Control Knowledge> TASK: EXPLORE-AND-RFFINE ARGUMENT: CURRENT-HYPOTHESIS _Mirnllul EOOl IF the hypothesis being focused upon has a child that has not been pursued, THFN pursue that child. (IF (AND (CURRCNT-ARGUMFNT %CURFOCUS) (CHIlDOF $CURFOCUS .$CHIlD) (TIINOT (PURSUED $CHILD))) (NEXTACTION (PURSUE-HYPOTHESIS WIILD))) NI’OMYCIN LIZ, ‘I dclib~ratior:/action loop for deducing what it sllould Jo next. nlc~/t/;/~/~s, like the one shown above. rccommcnd what t;i<k +ould bc done IICYI, nhat domain rule applied, or what domain finding rcqt!cstcd frown the user (dct:lils arc given in (Clanccy, 1381) n11r1 (Clnnccy and Hock, 1982) atld arc not important here). ‘I’hc important thing to notice is that rhis mctarulc will be applied for refining any disorder, obviating the need to “compile” rcdundnntly into the dom,iin hierarchy of disorders 1701~ it should bc searched. When a new domain relation is declared (e.g., a new kind of infection is added to rhc hierarchy) the abstract control knowlcdgc will USC it approprintcly. That is, we ,sclJnmle out whar /he dormtitl knowledge is f7om how if sl~ould be used. Mct,lruIcs wcrc first introduced for IISC in cxpcrt systems by IXtvis (LLivis, 197G), but hc conccivcd of‘ them as being domain-specific. In t.h,\t form, principles arc cncodcd reduntl,lnll~, just like CENTAUR’s control knowlcdgc. F’or cxamplc, the principle of pInsuing common causes hcforc unusual L~IISCS appears as specific metarulcs for ordering the domain rules of c,lch disorder. The bcncfits of stating mctarules abstractly <lrc illuntratcd further by a second cxamplc. 6. An Implicit Question-Askinq Strategy Another reason for ordering clauses in a system like MYCIN is to prevent unncccssary requests for data. A finding might bc dcduccd or ruled out from other facts available to the program. For cxamplc, the rule “If the patient has undergone surgery and ncurosurgcry, then c,)nsidcr diplococcus as a cause of the meningitis” might bc reprcscntcd as follows. PREMISE: ($AND (SAME CNTXT SURGERY) (SAME CNTXT NIUROSURGERY)) ACTION: (CONCLUDE CNTXT COVERFOR DIPLOCOCCUS TALLY 400) We say that the surgery clause “screens” for the relevance of asking about neurosurgery. 
Observe that neither the relation between these two findings (that neurosurgery is a type of surgery) nor the strategy of considering a general finding in order to rule out one of its subtypes is explicit. An alternative way used in MYCIN for encoding this knowledge is to have a separate "screening" rule that at least makes clear that these two findings are related: "If the patient has not undergone surgery, then he has not undergone neurosurgery."

PREMISE: ($AND (NOTSAME CNTXT SURGERY))
ACTION: (CONCLUDE CNTXT NEUROSURGERY YES TALLY -1000)

Such a rule obviates the need for a "surgery" clause in every rule that mentions neurosurgery, so this design is more elegant and less prone to error. However, the question-ordering strategy and the abstract relation between the findings are still not explicit. Consequently, the program's explanation system cannot help a system maintainer understand the underlying design.

In NEOMYCIN, the above rule is represented abstractly by a metarule for the task of finding out new data.

<Domain Knowledge>
(SUBSUMES SURGERY NEUROSURGERY)
(SUBSUMES SURGERY CARDIACSURGERY)

<Abstract Control Knowledge>
TASK: FINDOUT
ARGUMENT: DESIRED-FINDING

METARULE002
IF the desired finding is a subtype of a class of findings
   and the class of findings is not present in this case
THEN conclude that the desired finding is not present.
(IF (AND (CURRENT-ARGUMENT $SUBTYPE)
         (SUBSUMES $CLASS $SUBTYPE)
         (THNOT (SAMEP CNTXT $CLASS)))
    (NEXTACTION (CONCLUDE CNTXT $SUBTYPE 'YES TALLY -1000)))

This metarule is really an abstract generalization of all screening rules. Factoring out the statement of relations among findings from how those relations are to be used produces an elegant and economical representation. Besides enabling more detailed explanation, such a design makes the system easier to construct and more robust. Consider the multiple ways in which a single relation between findings can be used. If we are told that the patient has had neurosurgery, we can use the subsumption link (or its inverse) to conclude that the patient has undergone surgery. Or if we know that the patient has not undergone any kind of surgery we know about, we can use the "closed world assumption" and conclude that the patient has not undergone surgery. These inferences are controlled by abstract metarules in NEOMYCIN. The knowledge base is easier to construct because the expert needn't specify every situation in which a given fact or relation should be used. New facts and relations can be added in a simple way: the abstract metarules explicitly state how the relations will be used. The same generality makes the knowledge base more robust. The system is capable of making use of facts and relations for different purposes, perhaps in combinations that would be difficult to anticipate or enumerate.

III STUDYING ABSTRACT STRATEGIES AND STRUCTURAL RELATIONS

In NEOMYCIN, domain findings and disorders are related in the way shown above, and there are approximately 75 metarules that constitute a procedure for doing diagnosis. Besides abstract domain relations, such as SUBSUMES, NEOMYCIN's metarules reference:

o Knowledge about metarules and tasks: (static) the argument of a task, whether metarules are to be applied iteratively, the condition under which a task should be aborted; (dynamic) whether a task completed successfully, whether a metarule succeeded or failed, etc.

o Domain problem-solving history: the active hypotheses, whether a hypothesis was pursued, cumulative belief for a hypothesis, rules using a
finding that are "in focus", a strong competitor to a given hypothesis, etc.

These concepts form the vocabulary for a model of diagnosis, the terms in which expert behavior is interpreted and strategies are expressed. An unexpected effect is that there is no more backward chaining at the domain level. That is, the only reason MYCIN does backward chaining during its diagnostic (history and physical) phase is to accomplish top-down refinement and to apply screening rules; in NEOMYCIN these functions are accomplished by the metarules.

There are two specific products: a body of abstract control knowledge that can itself be studied, as well as applied in other problem domains, and a language for representing knowledge about disorders (in terms of causality, subtype, etc.). We call these abstract relations structural relations. Structural relations are a means for indexing domain-specific knowledge: They select hypotheses to focus upon, findings to request, and domain inferences that might be made. As such, structural relations constitute the organization, the access paths, by which strategies bring domain-specific knowledge into play. For example, the metarules given above mention the CHILDOF and SUBSUMES relations. METARULE001 looks for the children of the current hypothesis in order to pursue them; METARULE002 looks for a more general finding in order to ask for it first. These relations constitute the language by which the primitive domain concepts (particular findings and disorder hypotheses) are related in a network.

Adding a new strategy often requires adding a new kind of structural relation to the network. For example, suppose we desire to pursue common causes of a disorder before serious, but unusual causes. We must partition the causes of any disorder according to this distinction, adding new relations to our language--COMMON-CAUSES and SERIOUS-CAUSES. Similarly, the applicability of a strategy depends on the presence of given structural relations in the domain. For example, a strategy might give preference to low-cost findings, but in a particular problem domain all findings might be equally easy to attain. Or a given set of strategies might deal with how to search a deep hierarchy of disorders, but in a given domain the hierarchy might be shallow, making the strategies inapplicable. By stating strategies abstractly, we are forced to explicate structural relations. On this basis we can compare domains with respect to the applicability of strategies, referring to structural properties of the search space.

Lenat has found a similar relationship between heuristics (strategies) and slots (structural relations) in his program for discovering new heuristics (Lenat, 1982). In particular, the ability to reason about heuristics in EURISKO depends on breaking down complex conditions and actions into many smaller slots that the program can inspect and modify selectively. The same observation holds for domain concepts whose representation is refined by the synthesis of new slots (e.g., adding a PRIME-FACTORS slot to every number). The program even reasons about relations by creating a new slot that collects relations among entries of an important slot.

IV GIVEN THE BENEFITS, CAN IT BE DONE?

An initial reaction might be that for some domains there are no patterns for using knowledge--no abstract strategies--all facts and relations are inseparable from how they will be used.
For example, the procedure for confirming any given disorder (more generally, interpreting signals or configuring some device) might be completely situation-specific, so there are no general principles to apply. This would appear to be an unusual kind of domain. We are more familiar with problems in which simple principles can be applied over and over again in many situations. Teaching and learning are made incredibly difficult if there is no carry-over of procedures from one problem to another. Domains with a strong perceptual component, such as signal interpretation, might be like this. Perceptual skills rely on pattern matching, rather than selective, controlled analysis of data; they might be poor candidates for representing procedures abstractly.

We also know that in many domains, for efficiency at runtime, procedures have been compiled for solving routine problems. These procedures are written down in the familiar "procedures manuals" for organization management, equipment operation, configuration design, troubleshooting, etc. It is important to recognize that these procedures are based upon domain facts, constraints imposed by causal, temporal, and spatial interactions, problem-solving goals, abstract principles of design, diagnosis, etc. Except where a procedure is arbitrary, there must be some underlying rationale for the selection and ordering of its steps. Knowing this rationale is certainly important for reliably modifying the procedure; such procedures are often just prepared plans that an expert (or a user following a program's advice) may need to adapt to unusual circumstances. At one level, the rationale can be made explicit in terms of an abstract plan with its attendant domain structural relations; a redundant, compiled form can be used for efficient routine problem solving.

In theory, if the rationale for a procedure or prepared plan can be made explicit, a program can reconstruct the procedure from first principles. This approach has two basic difficulties. First, the procedure might have been learned incrementally from case experience. It simply handles problems well: there is no compiled-out theory that can be articulated. This problem arises particularly for skills in which behavior has been shaped over time, or for any problem in which the trace of "lessons" has been poorly recorded. The second difficulty is that constructing a procedure from first principles can involve a great deal of search. Stefik's (Stefik, 1980) multi-leveled planning regime for constructing MOLGEN experiments testifies to the complexity of the task and the limited capabilities of current programs. In contrast, Friedland's (Friedland, 1979) approach of constructing experiment plans from skeletal, abstract plans trades flexibility for efficiency and resemblance to human solutions. While skeletal plans may sometimes use domain-specific terms, as precompiled abstract procedures they are analogous to NEOMYCIN's tasks. Importantly, the rationale for the abstract plan itself is not explicit in any of these programs. For example, NEOMYCIN's metarules for a given task might be ordered by preference (alternative methods to accomplish the same operation) or as steps in a procedure. Since the constraints that suggest the given ordering are not explicit, part of the design of the program is still not explicit.
For example, the abstract steps of top-down refinement are now stated, but the sense in which they constitute this procedure is not represented. (Why should pursuing siblings of a hypothesis be done before pursuing children?) As another example, the task of "establishing the hypothesis space" by expanding the set of possibilities beyond common, expected causes and then narrowing down in a refinement phase has mathematical, set-theoretic underpinnings that are not explicit in the program. Similarly, Stefik's abstract planning procedure of "least-commitment" is implicit in numeric priorities assigned to plan design operators (Clancey, 1983). Automatically constructing procedures at this high level of abstraction, as opposed to implicitly building them into a program, has been explored very little.

Even within the practical bounds of what we make explicit, it might be argued that representing procedures abstractly is much more difficult than stating individual situation-specific rules. This might differ from person to person; certainly in medicine some physicians are better than others at stating how they reason abstractly. A good heuristic might be to work with good teachers, for they are most likely to have extracted the principles so they can be taught to students. There is certainly an initial cost whose benefit is unlikely to be realized if no explanation facility is desired, only the original designers maintain or modify the knowledge base, or there is no desire to build a generic system. But even this argument is dubitable: a knowledge base with embedded strategies can appear cryptic to even the original designers after it has been left aside for a few months. Also, anyone intending to build more than one system will benefit from expressing knowledge as generally as possible so that lessons about structure and strategy can speed up the building of new systems.

The cost aside, it appears that there is no way to get strategic explanations without making domain relations explicit and stating strategies separately. This was the conclusion of Swartout, who was led to conclude that an automatic programming approach, as difficult as it first seemed, was a natural, direct way to ensure that the program had knowledge of its own design (Swartout, 1981). That is, providing complete explanations means understanding the design well enough to derive the procedures yourself. NEOMYCIN's factoring of knowledge into domain and strategic knowledge bases is comparable to the input requirements of Swartout's automatic programming system. However, NEOMYCIN interprets its domain knowledge, rather than instantiating its abstract strategies in a compiled program. (Maintaining the separation is important so the metarules can be used in student modeling (London and Clancey, 1982).) Moreover, NEOMYCIN's strategies are abstract, unlike the domain-specific "principles" used in Swartout's program. This design decision was originally motivated by our desire to replicate the kind of explanations given by teachers (Hasling, 1983). However, we now realize that representing control knowledge abstractly has engineering and scientific benefits as well.

V ADVANTAGES OF THE APPROACH

The advantages of representing control knowledge abstractly can be summarized according to engineering, scientific, and practical benefits:

• Engineering.

o The explicit design is easier to debug and modify.
Hierarchical relations among findings and hypotheses and search strategies are no longer procedurally embedded in rules.

o Knowledge is represented more generally, so we get more performance from less system-building effort. We don't need to specify every situation in which a given fact should be used.

o The body of abstract control knowledge can be applied to other problems, constituting the basis of a generic system, for example, a tool for building consultation programs that do diagnosis.

• Science. Factoring out control knowledge from domain knowledge provides a basis for studying the nature of strategies. Patterns become clear, revealing, for example, the underlying structural bases for backward chaining. Comparisons between domains can be made according to whether a given relation exists or a strategy can be applied.

• Practice.

o A considerable savings in storage is achieved if abstract strategies are available for solving problems. Domain-specific procedures for dealing with all possible situations needn't be compiled in advance.

o Explanations can be more detailed, down to the level of abstract relations and strategies, so the program can be evaluated more thoroughly and used more responsibly.

o Because strategies are stated abstractly, the program can recognize the application of a particular strategy in different situations. This provides a basis for explanation by analogy, as well as recognizing plans during knowledge acquisition or student modelling.

Representing control knowledge abstractly moves us closer to our ideal of specifying to a program WHAT problem to solve versus HOW to solve the problem (Feigenbaum, 1977). Constructing a knowledge base becomes a matter of declaring knowledge relations. HOW the knowledge will be used needn't be simultaneously and redundantly specified.

An analogy can be made with GUIDON (Clancey, 1979) (Clancey, 1982), whose body of abstract teaching rules makes the program usable with multiple domains. Traditional CAI programs are specific to particular problems (not just problem domains) and have both subject matter expertise and teaching strategies embedded within them. The separation of these in GUIDON, and now the abstract representation of strategies in NEOMYCIN, is part of the logical progression of expert systems research that began with separation of the interpreter from the knowledge base in MYCIN. The trend throughout has been to state domain-specific knowledge more declaratively and to generalize the procedures that control its application.

Another analogy can be made with database systems that combine relational networks with logic programming (e.g., see (Nicolas, 1977)). To conserve space, it is not practical to explicitly store every relation among entities in a database. For example, a database about a population of a country might record just the parents of each person (e.g., (MOTHEROF $CHILD $MOTHER) and (FATHEROF $CHILD $FATHER)). A separate body of general derivation axioms is used to retrieve other relations (the intensional database). For example, siblings can be computed by the rule:

(IF (AND (PERSON $PERSON)
         (MOTHEROF $PERSON $MOTHER)
         (PERSON $PERSON2)
         (MOTHEROF $PERSON2 $MOTHER))
    (SIBLING $PERSON $PERSON2))

Such a rule is quite similar to the abstract metarules that NEOMYCIN uses for deducing the presence or absence of findings. NEOMYCIN differs from database systems in that its rules are grouped and controlled to accomplish abstract tasks.
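For concreteness, here is how the derivation-axiom idea might look in a minimal Python sketch (a hypothetical rendering, not the notation of any of the systems cited): the SIBLING relation is computed on demand from stored MOTHEROF facts rather than being stored itself.

    # Extensional database: only parent links are stored.
    MOTHEROF = {"ALICE": "MARIE", "BOB": "MARIE", "CAROL": "JEANNE"}

    def siblings(person):
        """Derivation axiom: two distinct persons with the same
        recorded mother are siblings (computed, never stored)."""
        mother = MOTHEROF.get(person)
        return [p for p, m in MOTHEROF.items()
                if mother is not None and m == mother and p != person]

    # siblings("ALICE") => ["BOB"]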
Only a few of NEOMYCIN's metarules make inferences about database relations; most invoke other tasks, such as "ask a general question" and "group and differentiate hypotheses." Moreover, the knowledge base contains judgmental rules of evidence for the disorder hypotheses. These differences aside, the analogy is stimulating. It suggests that treating a knowledge base as an object to be inspected, reasoned about, and manipulated by abstract procedures--as a database is checked for integrity, queried, and extended by general axioms--is a powerful design principle for building expert systems.

References

Aikins, J. S. Representation of control knowledge in expert systems, in Proceedings of the First AAAI, pages 121-123, 1980.

Clancey, W. J. Tutoring rules for guiding a case method dialogue. The International Journal of Man-Machine Studies, 1979, 11, 25-49. (Also in Sleeman and Brown (editors), Intelligent Tutoring Systems, Academic Press, 1982).

Clancey, W. J. and Letsinger, R. NEOMYCIN: Reconfiguring a rule-based expert system for application to teaching, in Proceedings of the Seventh IJCAI, pages 829-836, 1981. (Revised version to appear in Clancey and Shortliffe (editors), Readings in medical artificial intelligence: The first decade, Addison-Wesley, 1983).

Clancey, W. J. GUIDON. In Barr and Feigenbaum (editors), The Handbook of Artificial Intelligence, chapter Applications-oriented AI research: Education. William Kaufmann, Inc., Los Altos, 1982. (Revised version to appear in the Journal of Computer Based Instruction, 1983).

Clancey, W. J. The epistemology of a rule-based expert system: A framework for explanation. Artificial Intelligence, 1983, 20(3), 215-251. (Also to appear in Buchanan and Shortliffe (editors), Rule-based expert systems: The MYCIN experiments of the Stanford Heuristic Programming Project, Addison-Wesley, 1983).

Clancey, W. J. and Bock, C. MRS/NEOMYCIN: Representing metacontrol in predicate calculus. HPP Memo 82-31, Stanford University, November 1982.

Davis, R. Applications of meta-level knowledge to the construction, maintenance, and use of large knowledge bases. HPP Memo 76-7 and AI Memo 283, Stanford University, July 1976.

Duda, R. O. and Shortliffe, E. H. Expert systems research. Science, 1983, 220, 261-268.

Ennis, S. P. Expert systems: A user's perspective of some current tools, in Proceedings of the Second AAAI, pages 319-321, August, 1982.

Feigenbaum, E. A. The art of artificial intelligence: I. Themes and case studies of knowledge engineering, in Proceedings of the Fifth IJCAI, pages 1014-1029, August, 1977.

Friedland, P. Knowledge-based experiment design in molecular genetics, in Proceedings of the Sixth IJCAI, pages 255-257, 1979.

Hasling, D. W. Abstract explanations of strategy in a diagnostic consultation system. (To appear in the Proceedings of AAAI-83).

Lenat, D. B. The nature of heuristics. Artificial Intelligence, 1982, 19(2), 189-249.

London, B. and Clancey, W. J. Plan recognition strategies in student modeling: prediction and description, in Proceedings of the Second AAAI, pages 335-338, 1982.

Nicolas, J. M. and Gallaire, H. Data base: Theory vs. interpretation. In H. Gallaire and J. Minker (editors), Logic and data bases, pages 33-54. Plenum Press, New York, 1977.

Shortliffe, E. H. Computer-based medical consultations: MYCIN. New York: Elsevier, 1976.

Stefik, M. J. Planning with constraints. PhD thesis, Computer Science Department, Stanford University, 1980.

Swartout, W. R.
A digitalis therapy advisor with explanations. Technical report 176, MIT Laboratory for Computer Science, February 1977.

Swartout, W. R. Explaining and justifying in expert consulting programs, in Proceedings of the Seventh IJCAI, August, 1981. (Also to appear in Clancey and Shortliffe (editors), Readings in medical artificial intelligence: The first decade, Addison-Wesley, 1983).
1983
2
211
AN OVERVIEW OF THE PENMAN TEXT GENERATION SYSTEM

William C. Mann
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90291

ABSTRACT

The problem of programming computers to produce natural language explanations and other texts on demand is an active research area in artificial intelligence. In the past, research systems designed for this purpose have been limited by the weakness of their linguistic bases, especially their grammars, and their techniques often cannot be transferred to new knowledge domains. A new text generation system, Penman, is designed to overcome these problems and produce fluent multiparagraph text in English in response to a goal presented to the system. Penman consists of four major modules: a knowledge acquisition module which can perform domain-specific searches for knowledge relevant to a given communication goal; a text planning module which can organize the relevant information, decide what portion to present, and decide how to lead the reader's attention and knowledge through the content; a sentence generation module based on a large systemic grammar of English; and an evaluation and plan-perturbation module which revises text plans based on evaluation of text produced.

Development of Penman has included implementation of the largest systemic grammar of English in a single notation. A new semantic notation has been added to the systemic framework, and the semantics of nearly the entire grammar has been defined. The semantics is designed to be independent of the system's knowledge notation, so that it is usable with widely differing knowledge representations, including both frame-based and predicate-calculus-based approaches.

1. TEXT GENERATION AS A PROBLEM

AI research in text generation has a long history, but it has had a much lower level of activity than research in language comprehension. It has recently become clear that text generation capabilities (far beyond what can be done with canned text) will be needed, because AI systems of the future will have to justify their actions to users. Text generation capabilities are also being developed as parts of instruction systems [Swartout 83a], data base systems [McKeown 82], program specification systems [Swartout 82, Swartout 83b], expert consulting systems [Swartout 81] and others.

This research was supported by the Air Force Office of Scientific Research contract No. F49620-79-C-0181. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Office of Scientific Research of the U.S. Government.

A group who recently assessed the state of the art in text generation ([Mann 81a]) concluded that there are four critical technologies which will largely determine the pace of text generation progress in this decade:

1. Knowledge Representation
2. Linguistically Justified Grammars
3. Models of Text Readers
4. Models of Structures and Functions in Discourse

The Penman system has distinct roles for each of these, contributing particularly to 2 and 4.

Penman is intended as a portable, reusable text generation facility which can be embedded in many kinds of systems. By design, it is not tied to a single knowledge domain, to avoid the potential waste of effort inherent in developing single-domain systems whose domain-independent, reusable knowledge is not retained. Penman's techniques are adequate to cover the data base domain of McKeown's text generator [McKeown 82], Davey's game transcripts domain [Davey 79], the crisis instructional domain of Mann and Moore [Moore & Mann 79] and others.

2. SYSTEM OVERVIEW

Figure 2-1 shows the principal data flows in Penman. The given goal controls both the search for relevant information (Acquisition) and the organization of that information (Text Planning). Plans are hierarchic, with plans for clauses or sentences at the finest level of detail. These plans include both the logical content to be expressed and how each unit leads the reader's attention through the material. The sentence generator module (Sentence Generation) executes the most detailed level of the plan, thus producing a draft text. The evaluation and revision module (Improvement) evaluates the text, applying measures of quality and comparing the text with the plan for producing it. The module then produces perturbations in the plan to attempt to improve the text. A text is complete when the perturbations suggested by Improvement do not improve the value level identified by the Improvement module.

The major knowledge resources of these modules are also indicated in the figure. In addition to the knowledge notation itself, there is a knowledge base for generic and concrete knowledge of the subject matter and its relation to the world in
Penman’s techniques are adequate to cover the data base domain of McKeown’s text generator [McKeown 821. Davey’s game transcripts domain [Davey 791. the crisis instructional domain of Mann and Moore [Moore & Mann 791 and others. 2. SYSTEh’I OVERVIEW Figure 2-l shows the principal data flows in Penman. The given goal controls both the search for relevant information (Acquisition) and the organization of that information (Text Planning). Plans are hierarchic, with plans for clauses or sentences at the finest level of detail. These plans include both the logical content to be expressed and how each unit leads the reader’s attention through the material. The sentence generator module (Sentence Generation) executes the most detailed level of the plan, thus producing a draft text. The evaluation and revision module (Improvement) evaluates the text, applying measures of quality and comparing the text with the plan for producing it. The module then produces perturbations in the plan to attempt to improve the text. A text is complete when the perturbations suggested by Improvement do not improve the value level identified by the Improvement module. The major knowledge resources of these modules are also indicated In the figure. In addition to the knowledge notation itself, there is a knowledge base for generic and concrete knowledge of the subject matter and its relation to the world in 261 From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. general, a model of discourse, represented as a collection of patterns and rules which guide text planning. and a model of the reader. We describe the modules (in order of degree of development rather than in the data flow order described above) in the topical sections below. Glwn Goal > > Figure 2- 1: Major data flow paths in Penman PrOC.SSIC Knowledge Resources 3. SENTENCE GENERATION MODULE The most obvious weakness of text generation systems has been their weak and ad hoc treatment of grammar [Mann 81b]. The one notable exception, Davey’s Proteus, produced text in 1973 which remains unsurpassed [Davey 791. Penman’s grammar is called Nigel. named after the child learning English in Halliday’s well-known studies [Halliday 751.’ Nigel is a systemic grammar, like the one in Winograd’s SHRDLU, but far more extensive and with an entirely different semantics. Systemic linguistics has greatly influenced many text generation efforts. justifiably, since it is admirably suited for work whose concerns extend beyond the sentence and its structure. The natural unit of size in a systemic grammar is not a production rule, since systemic grammar does not use production rules. Instead, the unit is the system. A system is a collection of alternatives called grammatical features among which one must be chosen if the system is entered. The grammatical features are not like the category labels found in more conventional grammars. They are not non-terminal symbols representing phrases, and systems do not have to specify the order of constituents in order to have effects, The grammar is a network of systems. Each system has an entry condition. which is a boolean expression of grammatical features. The entry condition specifies whether the system will be entered, based entirely on choices of grammatical features in other systems. All of the optionality and control of the grammar’s generation is in the choices of grammatical features: there is no other kind of optionality or variability. 
Each pass through the grammar produces a collection of features as its result. There is a separate specification of how syntactic structures are constructed in response to features. This specification, called the realization component of the grammar, tells how each feature corresponds to a set of operations on a structure-building space which will contain the final result. Constituent order is specified here rather than in the systems. A collection of chosen grammatical features determines the structure of a syntactic unit, such as a sentence or prepositional phrase. Systemic notation makes it easy to build up such a set of features in parts out of separately developed sub-collections of features. This fact, that syntactic units can be specified cumulatively rather than by category differentiation, turns out to be crucial to the success of the whole framework.

Systemic grammars develop their units, especially at the clause and clause-combination levels, by combining several independent lines of development, corresponding to several kinds of functional reasoning. M. A. K. Halliday, the founder of systemic linguistics, divides the functions of language, i.e., all of its controlled effects, into three metafunctions, which are collections of relatively closely related functions:

1. Ideational: These functions are concerned with the logical and experiential content of language.

2. Interpersonal: These functions are concerned with the stance which the speaker (writer) takes relative to the hearer (reader) and the ideational content. It includes the usual range of speech act theory, but also the speaker's attitudes and representations of his status.

3. Textual: These functions are concerned with the smooth flow, emphasis, and ease of comprehension of running text; they become particularly important beyond the level of single sentences.

In Nigel we have extended the notation for choices by associating with each system a choice expert, an explicit process which is able to decide which choice of a feature is correct for any particular set of circumstances (knowledge and text plan). All variability (except word selection) is in the choices, so these choice experts completely determine the resulting language structures. Choice experts are defined in a notation which makes them independent of the prevailing knowledge representation. This independence is achieved by putting a tight interface around the choice experts, requiring them to obtain all of the information about circumstances by asking questions at the interface, never by searching it out for themselves. The outbound symbolic expressions at the interface are called inquiries, and the semantic approach is called inquiry semantics. Inquiry responses are atoms, not structures or pointers.² The portion of the system outside of the interface is collectively called the environment.

¹We gratefully acknowledge the past work and present participation of Michael A. K. Halliday, without whom this work would not be possible, as well as the other systemic linguists who have developed the framework and especially the systemic accounts of English.

²Inquiry semantics has special methods for dealing with the lexicon; lack of space prevents showing them.
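A choice expert can thus be pictured as a function that poses inquiries across the interface and receives atoms back. The sketch below is a hypothetical Python rendering of that shape (the inquiry names follow the worked example in the next section; the mapping from responses to features is invented for illustration):

    def make_chooser(inquiry, answers_to_features):
        """A choice expert: poses one inquiry about a grammatical
        function symbol and maps the atomic response to a feature."""
        def chooser(ask, function_symbol):
            response = ask(inquiry, function_symbol)  # an atom, never a structure
            return answers_to_features[response]
        return chooser

    # Environment side of the interface: all circumstance information
    # comes from here, reduced to a lookup for this sketch.
    def ask(inquiry, symbol):
        table = {("IdentifiabilityQ", "THING"): "identifiable",
                 ("MultiplicityQ", "DEICTIC"): "unitary"}
        return table[(inquiry, symbol)]

    identifiability = make_chooser(
        "IdentifiabilityQ", {"identifiable": "definite", "novel": "indefinite"})

    # identifiability(ask, "THING") => "definite"

The point of this shape is that the chooser never inspects the environment's data structures; it sees only inquiry operators and closed-set atomic responses, which is what makes the grammar portable across knowledge representations.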
An Example of the Use of Inquiry Semantics

As an example, consider the inquiry and response activity at the interface for generating a determiner. In this example, we will treat all of the choosers as an undifferentiated source of inquiries. The example will show how Nigel can obtain the relevant information from its environment without knowing the knowledge notation of the environment. The example is appropriate for selecting the determiner "her" in the sentence "She cancelled her appointment." The example is part of generating a phrase which refers to the appointment.

One of the choosers presents the inquiry (IdentifiabilityQ THING) at the interface, relying on a previously established association of an environment symbol, APPT, with THING. This inquiry says, in effect,

   Does APPT (THING) represent a concept which the speaker expects the hearer to find novel, not previously mentioned or evoked, and thus does not expect the hearer to identify uniquely by reason of the attention which it currently holds, its inherent uniqueness in culture or its association with an identifiable entity?³

The environment's response is the atom 'identifiable'. This establishes that the phrase being developed is expressing something definite which the reader is expected to be able to identify, or possibly create, in his knowledge. A definite determiner will signal this fact to the reader. Notice that the environment, in this case the part of the system which maintains a model of the hearer, must make an estimate about the hearer's state. The basis for requesting this particular estimate is in linguistic studies of the function of determiners.

After some processing, another chooser asks:

   Is there a specification of proximity within APPT (THING)?

The environment's response is the atom 'noproximity'. At this point, determiners such as "this" and "those," which compete with the possessive determiners, have been ruled out.

The chooser then presents the inquiry (PossessorModificationQ THING), which can be expressed as

   Is there a specification of possessor within APPT (THING)?

The environment responds with the atom 'possessor'. This leads to reserving the determiner slot for some possessive determiner such as "their" or "her," provisionally ruling out the default definite determiner "the."

Having discovered that there is a possessor, it is safe for the grammar to try to evoke a symbol for the possessor. The grammar presents the inquiry (PossessorModID THING). This is a different sort of inquiry, asking for an arbitrary symbol to represent a locus of knowledge in the environment. (It need not be a symbol actually in use in the environment. The grammar will only use the symbol in forming further inquiries.) The environment responds with the atom 'APPLICANT,' which the grammar associates with the grammatical function DEICTIC in a symbol table it is keeping, the same one which had an association for THING. (We will be assuming below that the table also has associations for SPEAKER and HEARER.)

The grammar then starts to ask questions about the possessor. It presents the inquiry (QuestionVariableQ DEICTIC), which is a question about APPLICANT, expressible as follows:

   Is APPLICANT (DEICTIC) a variable which represents the unspecified portion of a predication?

The environment responds with the atom 'nonvariable'. This rules out determiners such as "whose," as in "You gave him whose money?"

A chooser then presents the inquiry (MultiplicityQ DEICTIC), expressible as

   Is APPLICANT (DEICTIC) inherently multiple, i.e., a set or collection of things, or unitary?

The environment's response is the atom 'unitary'. This rules out "their."

The next inquiry is (MemberSetQ SPEAKER DEICTIC), which can be expressed as

   Is SELF (SPEAKER) the same as or included in APPLICANT (DEICTIC)?

The response is 'notincluded,' ruling out "my" and one meaning of "our." The next inquiry is (MemberSetQ HEARER DEICTIC), again with response 'notincluded,' this time ruling out "your."

The next inquiry is (KnownGenderQ DEICTIC), which can be expressed as

   Is the gender, masculine, feminine or neutral, of APPLICANT (DEICTIC) known?

The response is 'known'. This finally rules out "the," because the possessor can definitely be expressed in the determiner, so no other expression (such as a prepositional phrase) is needed to express it. Having established that gender is known, the grammar can ask its value with (GenderQ DEICTIC), the response being 'female.' The grammar selects "her" as the lexical item for the determiner. This selection is the first use of a lexical item in this account.

³Inquiries in Nigel are maintained in an English form as well as a formal form. In these examples, the grammatical function symbol (such as THING) is shown in parentheses, and the corresponding conceptual symbol from the environment (such as APPT) is shown to the left of the parentheses.

The knowledge representation side of the interface must implement the domain-relevant subset of the possible inquiries. The only symbols of the grammar which enter into this implementation are the inquiry operators and the atoms such as 'identifiable' which represent closed-set responses. Modifying the knowledge representation may make it necessary to modify the inquiry operator implementations, but will never make it necessary to change the grammar or its semantics. The implementation task is described in [Mann & Swartout 83], and the semantics is described in more detail in [Mann 83, Mann 82]. Given the collection of inquiries which the grammar can ask, it is possible to give a precise answer to the question "What knowledge can be expressed grammatically in English?" without presuming a particular knowledge representation or logical formalism.

Nigel is the largest functional grammar of English in a single notation. At the beginning of 1983 it had over 200 systems, each system being roughly comparable to one or a few production rules in other formalisms. There are some gaps in its syntactic capabilities, but nonetheless Nigel is adequate for many text generation tasks. Its diversity can be judged in part by examples: all of the sentence and clause structures of section 4.2 are within Nigel's syntactic range. Nigel is programmed in Interlisp. The inquiry semantics of Nigel is only partly defined at this writing, but we expect that all systems will have choosers before August 1983.

4. TEXT PLANNING MODULE

The text planning module organizes the relevant information into a pattern suitable for presentation. Its operation is based mainly on two regions of Penman's memory: a stored theory of discourse and a model of the reader.

4.1. A Theory of Discourse

Penman's text planning is based on a theory of discourse which regards the text as being organized into regions which act on the reader's state in predictable ways. Regions are composed of other regions, recursively down to the clause level. The composition patterns are stored for the use of the text planner, which is a successive-refinement planner based on Sacerdoti's [Sacerdoti 77]. The planner itself is well-precedented, but this theory and method of organizing the text are new. The composition patterns roughly resemble McKeown's rhetorical schemas [McKeown 82] and are related to Grimes' rhetorical relations [Grimes 75].

Each composition pattern can be seen as an action having preconditions, one or more methods for performing the action, and a set of effects. The entire text is an action, and its parts combine just as actions do into larger actions. Each action is taken because it tends to satisfy a particular set of goals. Decisions on what to include, what sequence of presentation to use, and how to lead the reader are based on the effects (on the reader's state) which the system expects, based on its knowledge of the composition patterns and the reader's state.

The resulting text still contains recognizable recurrent configurations, but--in contrast to sentence structure--these recurrent configurations are not so much recognized by cooccurrence relations among the elements. To a much greater degree, the patterns are functional, and are recognized by recognizing joint contribution to a communicative purpose. The patternfulness arises out of reoccurrence of generic purposes, together with recurring selection of ways to satisfy those purposes in communication. Of course, patterns of desired effects reoccur over long periods of time, and so patterns in text also reoccur. Traditions and habits can be based on these reoccurrences, eventually becoming conventional patterns in language. These conventional patterns never form an adequate (or even suitable) basis for planning text, because as the text pattern becomes a fixed structure it becomes separated from the goals which motivated use of its parts. (So, for example, religious blessings turn into "Adios" and "Goodbye," losing their function as blessings.) While it may eventually become necessary to incorporate fixed patterns of text arrangement in the design of a text generator, today's technology is not compromised by ignoring them. Instead, the technology can be based on direct original reasoning about how purposes may be satisfied by performing some of the available communicative acts.

4.2. A Model of the Reader

To communicate effectively is to cause conformity of the receiver's state to a description of the desired state. In Penman the governing state descriptions refer to the end state of the reader, the state reached just after the text has been read. (In other kinds of communication, such as entertainment, the transient states may be more important.) So, for example, we may bring the reader into a state of being able to identify prime numbers, or of knowing the name of the king of France. (Entertaining text might be designed to produce a continuous state of amusement.)

We can describe almost the entire reader's state in terms of the elements of knowledge which he holds. In the usual case, the reader's collective knowledge before and after the text is read is a subset of the knowledge which the system holds. We can think of Penman's model of the reader conveniently as a set of independent "colorings" on the knowledge of the system. If we use Red to represent the knowledge of the reader before the text is presented, and we use Green to represent the knowledge he should hold after the text has been read, with other colors which represent intermediate states reached while reading the text, we get a visual analogue of Penman's technique. We recognize that this technique is not adequate for some problems, but it is adequate for many applications. Future systems need to deal with conflict of belief, mutual knowledge and other communication phenomena which the coloring model does not cover [Moore 80, Cohen 77].

The Text Planning Module has not been implemented.
The key to implementation is completion of the theory of discourse described in section 4.1, which has been developed extensively but is still in a precomputational stage.

5. ACQUISITION MODULE

Acquisition of the initial stock of information potentially to be expressed is a very domain dependent process: we expect to reimplement it for each application of Penman. Although some selectivity of search can be derived from the given goal, the techniques seem to always be very specific to the application. On the positive side, information acquisition is relatively easy if the knowledge representation in use represents all of the important kinds of knowledge of the host system. The Acquisition Module of Penman has not yet been implemented; experimentation is in a stage which is not committed to a particular expressive task.

6. IMPROVEMENT MODULE

It is surprising how much progress has been made in text generation based on generators which do no more than produce "first draft" text. Neither Davey's generator nor McKeown's attempts to rework the text after generation. The KDS system [Mann & Moore 81] seems to be the only one which has relied heavily on text evaluation and hill-climbing to improve the quality of the text, using methods which do not require the generator to anticipate the quality of the resulting text.

Penman does not try to anticipate the major determinants of readability in the text it is producing. Sentence length, levels of clause embedding and the like are difficult to anticipate but trivial to measure after the text has been generated. Very simple measures of text quality, including these and also some comparative measures (to see whether the intended content was delivered), seem to be quite adequate as a basis for suggesting helpful revisions in text plans. The Improvement Module has not been implemented; its design includes particular critic processes, repair proposing processes and repair introduction processes.

7. SUMMARY

The Penman text generation system is designed to be a high quality, portable, multi-domain, multi-representation embeddable module for text generation. It extends the systemic framework, providing a new semantic boundary for grammar, and makes text generation independent of knowledge notations in a new way. Viewed relative to the four critical technologies for text generation research, Penman contributes principally to the form and content of linguistically justified grammars and to models of discourse.

REFERENCES

[Cohen 77] Cohen, P. R., and C. R. Perrault, "Overview of 'planning speech acts'," in Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Massachusetts Institute of Technology, August 1977.

[Davey 79] Davey, A., Discourse Production, Edinburgh University Press, Edinburgh, 1979.

[Grimes 75] Grimes, J. E., The Thread of Discourse, Mouton, The Hague, 1975.

[Halliday 75] Halliday, M. A. K., Learning How to Mean, Arnold, 1975.

[Mann 81a] Mann, William C., et al., Text Generation: The State of the Art and the Literature, USC/Information Sciences Institute, RR-81-101, December 1981. Appeared as Text Generation in April-June 1982 AJCL.

[Mann 81b] Mann, W. C., "Two discourse generators," in The Nineteenth Annual Meeting of the Association for Computational Linguistics, Sperry Univac, 1981.

[Mann 82] Mann, W. C., The Anatomy of a Systemic Choice, USC/Information Sciences Institute, Marina del Rey, CA, RR-82-104, October 1982.

[Mann 83] Mann, W. C., and C. M. I. M. Matthiessen,
Nigel: A Systemic Grammar for Text Generation, USC/Information Sciences Institute, RR-83-105, February 1983. The papers in this report will also appear in a forthcoming volume of the Advances in Discourse Processes Series, R. Freedle (ed.): Systemic Perspectives on Discourse: Selected Theoretical Papers from the 9th International Systemic Workshop, to be published by Ablex.

[Mann & Moore 81] Mann, W. C., and J. A. Moore, "Computer generation of multiparagraph English text," American Journal of Computational Linguistics 7, (1), January-March 1981.

[Mann & Swartout 83] Mann, W. C., and W. R. Swartout, Knowledge Representation and Grammar: The Case of OWL and Nigel, USC/Information Sciences Institute, Marina del Rey, CA 90291, Technical Report, 1983 (report in preparation).

[McKeown 82] McKeown, K. R., Generating Natural Language Text in Response to Questions about Database Structure, Ph.D. thesis, University of Pennsylvania, 1982.

[Moore 80] Moore, R., Reasoning about Knowledge and Action, SRI International, Artificial Intelligence Center, Technical Note 191, 1980.

[Moore & Mann 79] Moore, J. A., and W. C. Mann, "A snapshot of KDS, a knowledge delivery system," in Proceedings of the Conference, 17th Annual Meeting of the Association for Computational Linguistics, pp. 51-52, August 1979.

[Sacerdoti 77] Sacerdoti, E., A Structure for Plans and Behavior, Elsevier North-Holland, Amsterdam, 1977.

[Swartout 81] Swartout, W. R., Producing Explanations and Justifications of Expert Consulting Programs, Massachusetts Institute of Technology, Technical Report MIT/LCS/TR-251, January 1981.

[Swartout 82] Swartout, W., "Gist English generator," in Proceedings of the National Conference on Artificial Intelligence, pp. 404-409, AAAI, August 1982.

[Swartout 83a] Swartout, W. R., et al., "Workshop on automated explanation production," in ACM SIGART, ACM, 1983. To appear.

[Swartout 83b] Swartout, W., The Gist behavior explainer, 1983. (Submitted to AAAI-83.)
1983
20
212
RECURSION IN TEXT AND ITS USE IN LANGUAGE GENERATION¹

Kathleen R. McKeown
Department of Computer Science
Columbia University
New York, NY 10027

ABSTRACT

In this paper, I show how textual structure is recursive in nature; that is, the same rhetorical strategies that are available for constructing the text's macro-structure are available for constructing its sub-sequences as well, resulting in a hierarchically structured text. The recursive formalism presented can be used by a generation system to vary the amount of detail it presents for the same discourse goal in different situations.

1 Introduction

Texts and dialogues often contain embedded units which serve a sub-function of the text or dialogue as a whole. This has been noted both by Grosz [GROSZ 77] in her observations on task dialogues and by Reichman [REICHMAN 81] in analyses of informal conversations. In this paper, I show how textual structure is recursive in nature; that is, the same rhetorical strategies that are available for constructing the text's macro-structure are available for constructing its sub-sequences as well, resulting in a hierarchically structured text. This complements Grosz's view of hierarchical text structure as a mirror of hierarchical task structure. A generation system can use recursion to generate a variety of different length texts from a limited number of discourse plans which specify appropriate textual structures. In the following sections, I present a formulation of recursive text structure, an example of its use in the fully implemented TEXT generation system [MCKEOWN 82A], and finally, a description of some recent work on the application of this mechanism to automatically generating the appropriate level of detail for a user.

2 What is Textual Recursion?

Rhetorical predicates (also termed coherence relations) have been discussed (e.g., [GRIMES 75], [HIRST 81], [HOBBS 79]) as a means for describing the predicating acts available to a speaker. They delineate the structural relations between propositions in a text. Some examples are "identification" (identify an item as member of a generic class), "analogy" (compare with a familiar object), and "particular-illustration" (exemplify a point). In earlier work ([MCKEOWN 80], [MCKEOWN 82B]) I showed how such predicates could be combined to form a longer textual sequence serving a single discourse purpose (for example, definition). These combinations were formalized as schemata which embody text structures commonly used in naturally occurring texts, as determined by empirical analysis. This analysis also indicated that the predicates may be applied recursively to describe the structure of a text at many levels. A predicate may characterize the structural relation of a single sentence or of a longer sequence of text, such as a paragraph, to preceding text. Schemata merely indicate how predicates may be combined to form longer sequences of text having specific functions. Thus, they describe combinations of predicates which serve the function of a single predicate. Textual recursion is achieved by allowing each predicate in a schema to expand to either a single proposition (e.g., a clause or a sentence) or to its associated schema (e.g., a text sequence).

As an example, consider the sequences of text shown in Examples 1 and 2 below. The structure of both of these texts is captured by the identification schema, a schema which describes the combination of predicates that are commonly used to provide definitions.² In the first text, sentence 1 identifies the hobie cat, 2 describes characteristic attributes, and 3 provides an example. The second text contains the same basic structure, except that the identification of the hobie cat is achieved by a textual sequence instead of a single sentence. This textual sequence (sentences 1-4) is also described by an instantiation of the identification schema.

¹This work was partially supported by NSF grant MCS81-07299, awarded to the Dept. of Computer and Information Science of the University of Pennsylvania.

²The schema itself is not shown here. That schemata allow for optional predicates accounts for the variations in the instantiations of the identification schema shown here. See [MCKEOWN 82A] for a full description of the schemata themselves.

Note that any of the other predicates of either the higher level identification schema or the embedded definition could have been expanded by their associated schemata if the
The structure of both of these texts is captured by the identification schema, a schema which describes the combination of predicates that are commonly used to provide definitions.’ In the first t,ext, sentence 1 identifies the hobie cat, 2 describes characteristic attributes, and 3 provides an example. The second text contains the same basic structure, except that the identification of the hobie cat is achieved by a textual sequence instead of a single sentence. This textual sequence (sentences 1-4) is also described by an instantiation of the identification schema. Note that any of the other predicates of either the higher level identification schema or the embedded definition could have been expandcrl by their associated schemata if the 1 This by NSF grant ff work w;l:: partially sup )ortcd hlC%l-07299, award(ld to the 1 cpt. of C:omputcr and f nformation Sc~cncc of the University of Pennsylvania. 2Th e schema itst>lf is not s1ww1~ here. That schcmatn allow for optional predicates accounts for the variations in the irlstantitrfions of the identification schema shown here. See ~MCl\;I~X>\-lrN 821 for a full dcscrintion of the schemata thcmscIvc~s. 270 From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. author/speaker preferred to provide more detail. 3 ---------- Example 1 __________ Identification Schema 1. Identification 2. Attributive 3. Particular illustration 1. A hobie cat is a brand of catamaran, manufactured by the Hobie Company. 2. Its main attraction is that it’s cheap. 3. A new one goes for about $5000. ---------- Example 2 __________ Identification Schema Identification Schema 1. Identification 2. Identification 3. Analogy 4. Particular illustration 5. Attributive 6. Particular illustration 1. A hobie cat is a brand of ca.tamara.n, manufactured by the IIobie Company. 2. Catamarans are sailboats with two hulls instead of the usual one. 3. A catamaran is typically much faster than a sailboat. 4. Hobie cats, tiger cats, and pacific cats are a.11 catamarans. 5. As for the hobie cat, its main attraction is that it’s cheap. 6. A new one goes for about $5000. A question raised by the above two examples is that of when recursion is necessary. Clearly, there are situations where a simple sent,ence is sufficient for fulfilling a communicative goal, while in other cases, it may be necessary to provide a more detailed explanation. One test for recursion hinges on an assessment of a user’s knowledge. In the above example, a more detailed identification of the hobie cat might be provided if the speaker assumed the listener knew very little about sailing. An investigation of the possible tests for recursion is currently being undertaken. 3 Use of Recursion for Generation Recursion is a mechanism that can be used to allow a generation system to uniformly provide varying amounts of detail. In the TEXT system, which genera& 3As another example, note that the structure of the last three para raphs is also captured by the identification schema. 9 -Iere, schemata are identified in the first t’ arngraph on p.2. sccontl their recursive attrihufe spccificd Ii arngrnph, p.2), and an cxamplc given (third paragrnp , p.2). paragraph length responses to questions about database structure, some limited use has been made of recursion. In certain cases, the user’s question alone indicates that the user has a lack of knowledge and requires more detail. 
For example, when asking a question about the difference between two very different objects, the user indicates a total lack of knowledge about the items in question. In this case, lack of knowledge triggers the need to expand the identification of each item, using the identification schema to provide more detail. The system's response to the question "What is the difference between a destroyer and a bomb?"⁴ illustrates this feature. In this example, sequence 1-2 results from application of the identification schema for destroyer, sequence 3-4 from the identification schema for bomb, and the entire sequence (1-5) from application of a different schema (compare and contrast) which accesses the identification schema (see [MCKEOWN 82A] for more details). The destroyer and the bomb are each defined by providing two identifications (the second a result of recursion). No additional predicates (such as attributive or particular-illustration) from the identification schema are included for this response because the system has determined by other mechanisms that only generic class information is relevant [MCKEOWN 80].

---------- Example 3 ----------

(difference DESTROYER BOMB)⁵
; What is the difference between a destroyer and a bomb?
; 1. Identification destroyer
; 2. Identification ship
; 3. Identification bomb
; 4. Identification free-falling projectile
; 5. Inference

1. A destroyer is a surface ship with a draft between 15 and 222. 2. A ship is a vehicle. 3. A bomb is a free-falling projectile that has a surface target location. 4. A free-falling projectile is a lethal destructive device. 5. The bomb and the destroyer, therefore, are very different kinds of entities.

⁴The TEXT system was implemented on an ONR database containing information about military vehicles and weapons. The example is taken from this domain.

⁵TEXT generates the paragraph as shown (but without sentential numbers) in response to the question in functional notation (TEXT has no facility for parsing English questions). Comments show the English version of the question and the predicates used in the response.
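The expansion mechanism itself is easy to state. The following minimal sketch (hypothetical Python, not the TEXT system's implementation; the schema, test, and realization names are invented for illustration) expands each predicate of a schema either to a single proposition or, when a test for needed detail succeeds, recursively to its associated schema:

    SCHEMAS = {
        "identification": ["identification", "attributive",
                           "particular-illustration"],
    }

    def expand(predicate, topic, needs_detail, realize, depth=0):
        """Expand one predicate: recurse into its associated schema
        when more detail is judged necessary, else emit one proposition.
        The depth argument lets a test bound self-referential schemas."""
        schema = SCHEMAS.get(predicate)
        if schema and needs_detail(predicate, topic, depth):
            return [prop
                    for p in schema
                    for prop in expand(p, topic, needs_detail, realize,
                                       depth + 1)]
        return [realize(predicate, topic)]

    # expand("identification", "hobie cat",
    #        lambda p, t, d: d == 0,          # recurse once at the top
    #        lambda p, t: f"<{p} of {t}>")
    # yields three propositions instead of one.

Any of the triggers discussed in this paper, such as the user's apparent lack of knowledge or dissatisfaction with an earlier response, could serve as the needs_detail test.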
A single schema will consistently produce paragraph length text if its predicates are always expanded as single propositions. To generate longer texts, the system must either be capable of combining schemata appropriately (requiring further theoretical work on legal combinations of schemata) or new schemata must be developed which will generate longer sequences of text. If, on the other hand, recursion is allowed, then a limited number of schemata can be used to generate an infinite number of different length texts. A single schema produces infinitely many texts if its different predicates are expanded to their associated schemata instead of single propositions and this expansion occurs at all levels of the text. The use of unlimited recursion, therefore, allows for less work to be done in determining possible text orderings and, in theory, for the generation of arbitrarily long texts from a small number of schemata. Currently, schemata for 4 predicates have been developed for the TEXT system, which uses a total of 10 predicates.

In the written texts that were analyzed, writers did return in reverse order to higher level texts from which a push was taken, with the exception of cases where a push was taken on the last predicate in a schema. I would speculate that whether a speaker does return to every dialogue from which a push was taken may be affected by his/her memory for the past discourse. That memory is not perfect may cause higher level unfinished discourses to be skipped when finishing a sub-dialogue. If memory is the cause, then well-planned writing should exhibit the phenomenon of imperfect recursion less often, since planning, re-reading, and re-writing is possible. This hypothesis could be empirically tested.
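To make the recursive mechanism concrete, the following sketch shows one way a schema's predicates could be expanded either as single propositions or by a recursive push into an associated schema, with the push governed by a test of the listener's need for detail. This is an illustrative Python sketch, not the TEXT implementation; the schema table, the needs_detail test, and the omission of optional predicates such as analogy are all simplifying assumptions introduced here.

# Illustrative sketch of recursive schema expansion (not the TEXT system).
SCHEMATA = {
    # Optional predicates are omitted here; in TEXT they account for the
    # differently shaped instantiations seen in Examples 1 and 2.
    "identification": ["identification", "attributive", "particular-illustration"],
}

def fill(schema_name, needs_detail, depth=0):
    """Instantiate a schema as a flat predicate sequence, recursing where needed."""
    out = []
    for pred in SCHEMATA[schema_name]:
        if pred in SCHEMATA and needs_detail(pred, depth):
            out.extend(fill(pred, needs_detail, depth + 1))  # recursive push
        else:
            out.append(pred)  # realize the predicate as a single proposition
    return out

expert = lambda pred, depth: False        # no pushes: a short, Example-1-shaped text
novice = lambda pred, depth: depth == 0   # one push: an embedded definition first
print(fill("identification", expert))
print(fill("identification", novice))

Because the test is consulted at every level, nothing in the mechanism itself bounds the depth of recursion; any bounds come only from the needs_detail judgment, which is the point made in the discussion of limits above.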
5 Current Directions

The recursive mechanism can be used to allow a generation system to provide either a detailed or succinct response to the same question under different circumstances. Clearly, an analysis of the factors that trigger or inhibit recursion is critical for use of this capability and this is an endeavor that is currently underway. A preliminary analysis indicates that these factors would at least include the following:

The user's level of expertise: A user comes to a system with a priori knowledge on the subject in question. The system's knowledge of that level (whether deduced from interaction or explicitly stated) will influence how much it should say. Note that this is not a simple influence. An expert may in certain situations be able to handle more detail than a novice.

The past discourse: What the user has learned through the past discourse influences level of detail since previous discussion of a subject may mean that less can be said about it in a current response. What the system has learned through the past discourse affects level of detail as well: the user's acceptance of detail or request for detail may indicate to a system that it can provide a particular type of detail without being asked.

The user's overall goal in interacting with the system: Whether the user is using the system, for instance, to quickly retrieve a specific fact or to learn about or from the system will require different levels of detail.

The user's specific goal in asking a particular question: If the user's question is only one step towards acquiring the information necessary for a higher level goal, that goal may dictate how much information is required.

Feedback from the user: While the goal of this research is to anticipate the user's needs for detail before s/he states them explicitly, in actual conversation people often do explicitly state that they have absorbed information and are ready for more (e.g., backchannel noises such as "um-hum") or that they have not understood. Such feedback can also be used in a system.

While some of these factors are very difficult to implement (e.g., determining the user's goal), others are, in fact, tractable. Tracking of past discourse, for example, has been used previously to avoid repetition [MCDONALD 80; DAVEY 79]. The recursive mechanism is also viewed as an important element in providing re-explanations. That is, a user's dissatisfaction with a given response may provide the trigger to recurse on a predicate that was previously unexpanded. This effort is being conducted with the goal of implementing an information/expert system that can provide explanations in the domain of advising students about course schedules. This domain requires the capacity for communicating at different levels of detail and for providing re-explanations since students as users may frequently be dissatisfied with an explanation (for example, why they cannot take a course), may simply want to talk at length about a course of action, and may want to explore alternate solutions to a problem.

6 Conclusions

In this paper, a formalism which represents the hierarchical nature of texts in terms of recursive textual structure has been presented. This augments previous work on the structure of sub-dialogues by capturing another dimension along which sub-sequences of text are related to the text as a whole. Furthermore, this formulation of text structure allows a generation system to use the same schema to generate both short and more detailed descriptions. While this has already been used in a limited way in the TEXT generation system, the eventual goal is to develop a full analysis of decision mechanisms for recursion and embody this in a generation system which can provide explanations at varying levels of detail as well as re-explanations in response to a user's dissatisfaction.

ACKNOWLEDGEMENTS

I would like to thank Michael Lebowitz, Bonnie Webber, and Kathy McCoy for their comments on drafts of this paper.

7 References

[CONKLIN 83] Conklin, J., forthcoming dissertation, University of Massachusetts, Amherst, Mass., 1983.
[DAVEY 79] Davey, A., Discourse Production, Edinburgh University Press, Edinburgh, 1979.
[GRIMES 75] Grimes, J. E., The Thread of Discourse, Mouton, The Hague, Paris, 1975.
[GROSZ 77] Grosz, B. J., The representation and use of focus in dialogue understanding. Technical Note 151, SRI International, Menlo Park, Ca., 1977.
[HIRST 81] Hirst, G., Discourse-oriented anaphora resolution: a review. American Journal of Computational Linguistics, Vol. 7, No. 2 (1981), pp. 85-98.
[HOBBS 79] Hobbs, J. R., Coherence and coreference. Cognitive Science, 3(1), January-March (1979), pp. 67-90.
[MCDONALD 80] McDonald, D.D., Natural language production as a process of decision making under constraint. Ph.D. Dissertation, MIT, Cambridge, Mass., 1980.
[MCKEOWN 80] McKeown, K.R., Generating relevant explanations: natural language responses to questions about database structure, Proceedings of the First Annual National Conference on Artificial Intelligence, Stanford, Ca., 1980, pp. 306-309.
[MCKEOWN 82A] McKeown, K.R., Generating natural language text in response to questions about database structure.
Ph.D. Thesis, MS-CIS-82-05, University of Pennsylvania, Philadelphia, 1982.
[MCKEOWN 82B] McKeown, K.R., The TEXT system for natural language generation: an overview, Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, Toronto, Ontario, Canada, 1982, pp. 113-120.
[REICHMAN 81] Reichman, R., Plain speaking: a theory and grammar of spontaneous discourse. Ph.D. Thesis, Harvard University, Cambridge, Ma., 1981.
1983
21
213
REASONS FOR BELIEFS IN UNDERSTANDING: APPLICATIONS OF NON-MONOTONIC DEPENDENCIES TO STORY PROCESSING

Paul O'Rorke
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
Urbana, IL 61801

ABSTRACT

Many of the inferences and decisions which contribute to understanding involve fallible assumptions. When these assumptions are undermined, computational models of comprehension should respond rationally. This paper crossbreeds AI research on problem solving and understanding to produce a hybrid model ("reasoned understanding"). In particular, the paper shows how non-monotonic dependencies [Doyle 79] enable a schema-based story processor to adjust to new information requiring the retraction of assumptions.

I INTRODUCTION

Many of the inferences and decisions involved in understanding "jump to conclusions" which might later turn out to be erroneous. For example, upon reading "John put a quarter in the slot", a video game addict may jump to the conclusion that John is playing a video game. If the addict is then told "John put another quarter in and pushed a button for a cola", he should revise his beliefs.

How can a computational model of understanding adjust efficiently to new information which invalidates previous assumptions? The solution proposed in this paper applies a view of rational thought developed in AI research on problem solving [Doyle 79] [de Kleer 79] [Charniak 80] to current schema-based models of understanding. The resulting hybrid view of comprehension will be referred to as "reasoned understanding". In this view, comprehension is the work of rules which infer and justify new beliefs on a basis of old beliefs (many of which are compactly specified with the aid of schemata). Justifications of beliefs record their dependence on inference rules and other beliefs. When fallible assumptions lead to conflicting beliefs, these dependencies may be used to determine and revise the incompatible assumptions underlying the conflict.

[This report describes work done in the AI group of the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. It was supported in part by National Science Foundation Grant IST 81-20254 and by the Air Force under grant F49620-82-K-0009.]

This paper describes the design and operation of a schema-based story processor called RESUND. The design applies non-monotonic dependencies and associated processes (like dependency-directed backtracking [Doyle 79]) to solve some basic belief revision problems which arise in natural language processing. Consider the following example from [Collins 80].

1. "He plunked down $5 at the window."
2. "She tried to give him $2.50, but he refused to take it."
3. "So when they got inside, she bought him a large bag of popcorn."

People commonly make two interesting mistakes on reading this window text. The first mistake is made by people who assume after the first sentence that "he" is making a $5 bet at a horse race. In spite of this, one usually concludes with the assumption that "they" are going to see a movie. How might a story processor retract its presumption of one scenario (e.g., "going to a horse race") in favor of another ("going to the movies")?

The second common mistake is the initial identification of the second sentence as an attempt to return change, in spite of "his" disturbing refusal to accept it. Later, we see that "she" was not the attendant at the window, but that "she" is accompanying "him".
Her attempt to give him $2.50 is re-explained as an attempt to pay her own way. This new interpretation of her action makes his refusal comprehensible. How might a story processor recover from mis-identification of events and objects?

Section three shows how a schema-based story processor can recover from these mistakes using the techniques described in section two.

II DEPENDENCIES IN UNDERSTANDING

Reasoned understanding is a view of comprehension inspired by AI research on problem solving and decision making [Doyle 80]. It inherits the notion that attitudes (e.g., beliefs) are the important indicators of the state of comprehension. In fact, understanding is viewed as a process of transition from one set of "current" attitudes to another. Furthermore, justifications for attitudes play a key role in determining which ones are currently held. For example, reasons supporting possible beliefs not only determine which ones are currently believed, but also provide the basis for inferring new beliefs and retracting old ones. This section sketches the design and operation of a schema-based story processor (called RESUND) compatible with reasoned understanding. The story processor is still under construction at the time of this writing.

The "knowledge base" of the story processor is organized into bundles of assertions called schemata. A schema intended to capture knowledge about an event includes variables for objects which play roles in the event and a list of associated schemata. Relationships between the associated schemata include primitive temporal and causal links. Top down script elaboration inferences (a la SAM) [Cullingford 81] are supported by a special elaboration relation which specifies the consequences of belief in an assertion that an event has occurred. Construction of intentional explanations (a la PAM) [Wilensky 81] is supported by associations such as X is an aim (goal) of Y and X is a method for achieving Y. In addition, schemata specify constraints and default values for their role variables.

A crucial point about inferences in understanding is that they are often presumptive. They generate beliefs not just by logical deduction, but also by making assumptions of various kinds. The main difference between RESUND and previous story processors is that RESUND uses non-monotonic dependencies to compensate for the fact that its inference processes are fallible. A collection of inference rules (a la AMORD) [de Kleer 79] generate and justify new assertions representing beliefs. Some inference mechanisms which contribute to the construction of explanations of sequences of input events are elaboration, intentionality, identification and criteriality.

Elaboration rules expand the definitions of complex concepts in schemata to capture inferences similar to SAM's [Cullingford 81] top down script applications. For example, when RESUND "comes to believe" that a complex event has taken place (e.g., John purchased some tickets from someone), the definitional consequences (John paid someone, John received some tickets from someone) are inferred by elaboration. Intentionality rules generate intentional explanations similar to those constructed by PAM [Wilensky 81] using the information about goals and methods supplied by schemata. When two descriptions appear to refer to the same event or object, RESUND's identification inference rules make the assumption that the descriptions are co-referential. Bottom up schema invocation is also treated as a presumptive inference in RESUND (called criteriality). Event induced schema activation is the simplest kind of criteriality assumption. This happens when a sub-event of a complex event causes RESUND to assume the complex event occurred (as when "She bought popcorn" suggests "she went to the movies") [Schank 77] [DeJong 79].
AI natural language processing systems are bound to make mistakes like those made by people reading the window text. RESUND recognizes mistakes which reveal themselves in the confusion of conflicting beliefs. The most common conflicts in the examples studied to date have been identity conflicts and other violations of schematic constraints or defaults. Identity conflicts arise when two objects or events are assumed to be identical and not identical simultaneously. Schematic constraint violations occur when a restriction on variables or other schemata associated with a given schema is broken. When such conflicts arise, a process like dependency-directed backtracking determines the underlying incompatible assumptions by looking back along dependencies. Unfortunately, in natural language processing, it wouldn't do to just arbitrarily rule out one of these assumptions in order to resolve the conflict. Thus, RESUND requires a method to decide which assumption should be revoked (which assumption is weakest).

The current design calls for a collection of preference policies which represent different criteria for gauging the relative strength of incompatible assumptions. Some of these policies implement text comprehension principles and problem solving strategies reported in [Collins 80] and [Wilensky 83]. Another class of policies is based on RESUND's taxonomy of inferences and the notion that some assumptions made in natural language processing are inherently weaker (more fallible) than others. These policies prefer constraints and elaboration inferences over defaults, and see identification, intentionality, and criteriality assumptions as the most likely losers in a conflict. Unfortunately, several preference policies may be applicable to a given conflict. The current design calls for the simplest possible conflict resolution: a total order on the preference policies. When several are applicable, the strongest one is allowed to choose the weakest assumption.
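The following Python sketch illustrates the flavor of this machinery: beliefs carry justifications, a conflict triggers a dependency-directed search for the assumptions underlying it, and a total order over assumption kinds selects the weakest one to retract. It is a loose reconstruction for illustration only, not the RESUND implementation; the Belief class, the particular preference order, and the use of an alternatives count to break ties are assumptions introduced here.

# Illustrative sketch of dependency-directed retraction (not RESUND itself).
# Assumption kinds ordered weakest first; the weakest loses a conflict.
PREFERENCE = ["criteriality", "intentionality", "identification",
              "default", "elaboration", "constraint"]

class Belief:
    def __init__(self, name, kind, supports=(), alternatives=0):
        self.name, self.kind = name, kind
        self.supports = list(supports)      # beliefs this belief depends on
        self.alternatives = alternatives    # known alternative interpretations

    def assumptions(self):
        """Dependency-directed search for the fallible leaves under a belief."""
        if not self.supports:
            return {self}
        found = set()
        for s in self.supports:
            found |= s.assumptions()
        return found

def retract_weakest(a, b):
    """Resolve a conflict between beliefs a and b: pick the weakest underlying
    assumption; ties go to the assumption with more alternative readings."""
    candidates = a.assumptions() | b.assumptions()
    return min(candidates,
               key=lambda bel: (PREFERENCE.index(bel.kind), -bel.alternatives))

# The window-text conflict traced in section III: assumption A2 has an
# alternative reading, so it loses to A1.
a2 = Belief("she = TICKET-BOOTH-ATTENDANT", "identification", alternatives=1)
a1 = Belief("she in PARTY", "identification")
rule = Belief("ATTENDANT not in PARTY", "constraint")
pro = Belief("ATTENDANT in PARTY", "identification", supports=[a2, a1])
con = Belief("ATTENDANT not in PARTY (derived)", "constraint", supports=[rule])
print(retract_weakest(pro, con).name)   # -> she = TICKET-BOOTH-ATTENDANT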
III EXAMPLES

To see how RESUND will handle mis-identification, consider the following simplified variant of the window text.

Mis-identification Example.
1. He put down $5 at the Thunderbird theatre ticket window.
2. She gave him $2.50.
3. When they got inside, she bought herself a bag of popcorn.

The first sentence invokes a schema about "going to the movies." Elaboration of this action includes "purchasing tickets," which includes "paying for the tickets," and the "return of change" (if any). The placement of $5 at the ticket window is identified as part of "paying" for some tickets. This means "he" is the BUYER of tickets, a member of the PARTY "going to the movies". By convention, the roles in this and other schemata will be in upper case.

The action in the second sentence is identified as the "return of change". This isn't the only possible identification of the action, because there are several other transfers of money associated with "going to the movies". None of the other identifications is compatible with this one, and at this point "return of change" is preferred because it is contained in the schema describing "buying tickets", as was the initial transfer of payment. This action identification implies two new role identifications: she is the TICKET-BOOTH-ATTENDANT and the $2.50 is his CHANGE.

Both actions in the third sentence are identified as actions of members of the PARTY going to the movie (namely "entering the theater" and "purchasing refreshments"). Thus, "she" is seen to be a moviegoer. This violates a schematic constraint on TICKET-BOOTH-ATTENDANT which reflects the fact that normally, the attendant is not a member of one's party when one goes to the movies.

Whether this constraint is learned by experience or derived from more basic constraints, recognition of the constraint violation triggers dependency-directed backtracking. Data-dependencies associated with the inconsistent beliefs (TICKET-BOOTH-ATTENDANT in PARTY, not TICKET-BOOTH-ATTENDANT in PARTY) and their ancestors are examined to determine the assumptions underlying the conflict. The belief that the ATTENDANT should not be in PARTY is strong because it is based on constraints in the "going to the movies" schema. The belief that ATTENDANT is in PARTY depends on she = ATTENDANT and she in PARTY. Ultimately, the conflict depends on the following identification assumptions.

A1. The purchase of refreshments by a member of the PARTY attending the movie = "she bought popcorn."
A2. The return of change by the TICKET-BOOTH-ATTENDANT = "she gave $2.50."

There is no alternative to identification A1 in the "going to the movies" schema, but A2 does have an alternative: members of PARTY sometimes repay (or prepay) the BUYER who purchases the group's tickets. For this reason, A2 is considered weaker than A1, and the constraint violation conflict is resolved by ruling A2 out.

Next, consider an example of the problem of mis-activation of schemata.

Mis-activation Example.
1. John put two quarters in the slot.
2. Then he started his first game.

If you originally thought John was going to get a cola from a vending machine, as he did earlier in the paper, then you had to retract this assumption with any conclusions founded on it. This example, like the horse-races to movies switch, is just about the simplest kind of mis-activation. We have worked out dependencies for event induced schema activation which enable RESUND to recover from this sort of mistake.

The insertion of quarters in a slot is an action which invokes the "coke machine" and "video game" schemata by event induced activation [DeJong 79]. Elaborations of these schemata include corresponding insertions of change, as well as inferences about what will happen next, etc. An identity conflict arises because the insertion of change in the coke machine schema is incompatible with the insertion in the video game schema (if they are not compatible, they cannot both be identical to the input event). Dependency-directed backtracking determines that one of the schema invocations must be retracted. A preference policy decides to retract video-game on very weak grounds, perhaps because the last time the system saw this sentence, "coke machine" turned out to be the right schema.

The second sentence contains an event which is identified as part of the dead "video-game" schema. The fact that there is no alternative identification is seen as an argument against the weak decision to rule out the video game scenario.
The original contradiction comes back in, but now there is a strong reason to prefer "video-game" over "coke-machine": namely that it explains more input events.

IV CONCLUSION

This paper argues that better models of understanding can be constructed by applying views and techniques originally developed in AI research on problem solving. In particular, nearly all current story processors have no reasons for their beliefs; so when they make inferences and decisions which jump to false conclusions, they have no recourse to reasoned retraction. ARTHUR, MCARTHUR, and JUDGE are exceptional, in that they attempt to recover from inference errors [Granger 80] [Granger 82]. However, they appear to concentrate exclusively on revision of intentional explanations, and do not appear to use dependency-network-maintenance techniques to supplant incorrect explanations.

The usefulness of reasoned understanding as part of a model of human comprehension is limited by the fact that it ignores affect and emotion. In addition, we have only described a handful of belief revision methods and they are so simple that people usually do them unconsciously. Nevertheless, we expect non-monotonic dependency networks and associated processes (like dependency-directed backtracking) to become integral parts of future applied AI natural language processing systems. In an effort to help make this happen, we have begun implementing RESUND. The design incorporates schemata and an inference taxonomy derived from a schema-based story processor constructed in the summer of 1982. We have worked out detailed dependencies and preference policies which seem necessary for several examples like those presented in section III. When the implementation is complete we will run experiments to verify that the design works as planned. We expect future experiments to lead to the discovery of new types of inference and new preference policies, if not to radical changes in the design of the story processor.

ACKNOWLEDGEMENTS

My sincere thanks to G. DeJong, M. Dorfman, C. Debrunner and the other reviewers of this paper.

REFERENCES

[Charniak 80] E. Charniak, C. Riesbeck and D. McDermott, "Data Dependencies," in Artificial Intelligence Programming, Lawrence Erlbaum Associates, Hillsdale, N.J., 1980, 193-226.
[Collins 80] A. Collins, J. S. Brown and K. M. Larkin, "Inference in Text Understanding," in Theoretical Issues in Reading Comprehension, R.J. Spiro, B.C. Bruce, and W.F. Brewer (eds.), Lawrence Erlbaum Associates, Hillsdale, N.J., 1980, 385-407.
[Cullingford 81] R. Cullingford, "SAM," in Inside Computer Understanding: Five Programs Plus Miniatures, R.C. Schank and C.K. Riesbeck (eds.), Lawrence Erlbaum Associates, Hillsdale, N.J., 1981, 75-119.
[de Kleer 79] J. de Kleer, J. Doyle, G. L. Steele and G. J. Sussman, "Explicit Control of Reasoning," in Artificial Intelligence: An MIT Perspective, vol. 1, P.H. Winston and R.H. Brown (eds.), MIT Press, Cambridge, Massachusetts, 1979, 33-92.
[DeJong 79] G. F. DeJong, "Skimming Stories in Real Time: An Experiment in Integrated Understanding," Research Report 158, Yale University Dept. of Comp. Sci., New Haven, Conn., May 1979.
[Doyle 79] J. Doyle, "A Truth Maintenance System," Artificial Intelligence 12 (1979), 231-272.
[Doyle 80] J. Doyle, "A Model for Deliberation, Action, and Introspection," Artificial Intelligence Technical Report 581, MIT Artificial Intelligence Lab., Cambridge, MA, May 1980.
[Granger 80] R. H. Granger, "Adaptive Understanding: Correcting Erroneous Inferences," Research Report 171, Yale Univ. Dept. of Comp. Sci., New Haven, Conn., January 1980.
Granger, "Judgemental Inference: A Theory of Inferential Decision-Making During Understanding, II Proc. ti -Fourth Annual Conf. af && Cognitive ,Science a, Ann Arbor, MI, 1982, 177-180. R. Schank and R. Abelson, Scrints, PlZ, Goals and Understanding: & Inauirv &&Q Human gn0wJedge Structures, Lawrence Erlbaum, Hillside, NJ, 1977. R. Wilensky, "PAM," in Inside Comnuter Understanding: Jlzias Progr ms Plus Miniature Sohanka and C.K. Riesbec:' (e,":y: Lawrence Erlbaum Associates, Hillsdale, N.J., 1981, 136-179. R. Wilensky, Eknnim Understanding: Lbmxcm& LQ mk?iik mutational Reasoning Addison-Wesley, Reading, Mass.: 1983. 309
1983
22
214
INFERENCE-DRIVEN SEMANTIC ANALYSIS

Martha Stone Palmer
SDC - A Burroughs Company and University of Pennsylvania

ABSTRACT

A primary problem in the area of natural language processing is the problem of semantic analysis. This involves both formalizing the general and domain-dependent semantic information relevant to the task involved, and developing a uniform method for access to that information. Natural language interfaces are generally also required to have access to the syntactic analysis of a sentence as well as knowledge of the prior discourse to produce a semantic representation adequate for the task. This paper briefly describes previous approaches to semantic analysis, specifically those approaches which can be described as using templates, and corresponding multiple levels of representation. It then presents an alternative to the template approach, inference-driven semantic analysis, which can perform the same tasks but without needing as many levels of representation.

1. Introduction

Inference-driven semantic analysis is specifically designed for finite, well-defined, i.e., limited, domains. The domain on which a Prolog implementation of this method was tested consists of physics word problems for college students involving pulley systems. Each problem is stated in English sentences that completely describe a miniature world of physical objects and relationships between those objects. The goal of the natural language processor is to produce a semantic representation of each problem that is detailed enough to enable a computer program to produce the correct solution of the problem. This semantic representation consists of a set of partially instantiated logical terms known as semantic predicates.

The formalization of the domain is essential for solving the following basic problems which are associated with the semantic processing of text:

(1) establishing referents for the noun phrases;
(2) finding appropriate mappings from the syntactic constituents of the parse into the underlying semantic representation of the verb (for inference-driven semantic analysis this representation is defined primarily by a semantic predicate associated with the verb and semantic roles acting as arguments to the predicate; the syntactic constituents are essentially mapped directly onto these semantic roles);
(3) using pragmatic information to assign fillers to semantic roles which do not have an explicit syntactic realization (the term "pragmatic" is used to refer to both discourse knowledge and general and domain-dependent information);
(4) applying inference rules to expand the representation of the verb into a more detailed representation that fulfills the requirements of the processing task;
(5) constraining allowable inferences so that this semantic representation does not become explosive;
(6) appropriately integrating the final representation of the clause with the representations of prior clauses.

Previous approaches to semantic analysis suffer from one of two drawbacks. If an attempt is being made to capture linguistic generalizations such as "case," the processing methods degenerate into verb specific procedures that are completely domain dependent, making the implementations difficult to transport to other domains [Schank], [Simmons]. Placing the emphasis on more transparent processing, i.e., separating the relevant linguistic information from the computational methods used to process that information, results in processing techniques that use several levels of representation.
While computationally more modular, this approach does not adequately capture linguistic generalizations [Woods], [Pereira and Warren]. Not being able to use these generalizations efficiently leads to unnecessary redundancies which make these systems cumbersome for large domains [Palmer, 81].

In general, these more modular semantic processors can be described as using basically three levels of semantic representation, which are illustrated below. The first level, referred to here as the template level, corresponds to a set of patterns that represent the possible syntactic realizations of sentential units for an individual verb. Each <slot> in the template represents the position of a syntactic constituent in a particular realization. The slots usually have semantic markers associated with them, referring to the semantic role the syntactic constituent is expected to play with respect to the verb. (The semantic markers in the example refer to physical objects, PHYS-OBJ, and location points on those objects, LOC-PT.) Specific syntactic parses can be matched directly with these templates. Matching parses onto templates achieves the mapping of the syntactic constituents onto the underlying semantic representation, task 2 from above.

The second step is to match the templates with an intermediate level, the canonical level, which is sometimes termed the "case-frame level." The canonical level consists basically of the verb or the predicate chosen to represent the verb, and a union of all of the semantic roles the syntactic constituents can be associated with. Inference rules can then be applied to this intermediate level to first fill unfilled semantic roles, task 3, and then expand the representation of the verb, task 4, to produce the final and third level, the predicate level. Care must be taken to constrain the application of these inference rules, task 5, and then task 6, the integration of the representation with the prior discourse, must be achieved.
Noun phrases are assumed to be fully deter- mined alo 7 the lines suggested by Mellish’s Incremental Evaluation Mellish]. 2. Domahqecik inference rules The lexical entries of the verbs take the form of inference rules. Producing a semantic representation of a particular sentential unit can be seen as “proving” that the verb involved has been used appropriately for the domain. Prolog is an obvious language for such an approach and the rules are expressed as Prolog clauses. (In conventional Prolog notation Q <- P corresponds to P -> Q; strings starting with upper-case letters correspond to variables; and predicates, function symbols, and con- stants are all lower-case.) For example, rule Rl, the lexi- cal entry for “attach,” can be read as, “A contact between an object, (Ol), and another object, (02), can be expressed using the verb attach.” Rl: attach <- contact(objectl(Ol),object2(02)). The goal is to use Rl in producing an adequate semantic representation for sentence like, Sl: “A particle is attached to a string at its right end.” Sl contains two major syntactic constituents, the “parti- cle” which is the, SUBJECT and the “string,” which is the object of “TO.” (Prepositional phrases are indicated by “PP” preceded by the preposition involved, as in “TO- PP.“) The next section explains how the following seman- tic representation is produced, contact(objectl(particle),object2(string)b given a predi- cate representation of the syntactic information: SUBJECT(particle), TO-PP( s tring) 3. Performing Mappings Semantic representations generally associate a dis- tinct set of semantic roles with each individual verb. One of the main goals of any semantic processor is to provide an appropriate mapping between the syntactic constituents of a parsed clause and the semantic roles associated with the verb. Three factors complicate the mapping: to (2) (3) the large number of choices available for syntactic realization of any particular semantic role, the ability of syntactic constituents to indicate several different types of roles given appropriate contexts, and semantic role interdependencies, i.e., the appropri- ateness of a mapping for a particular semantic role is often dependent on the mappings given to the other semantic roles. These complications have previously been coped with by creating sets of templates for a verb, one for each syntactic realization. Each set of templates must have associated with it an individual set of domain- specific inference rules so that deeper semantic representations can then be derived. Domains often involve semantically or syntactically similar verbs that still have to be dealt with on an individual basis using this approach, resulting in unecessary redundancies. Inference-driven semantic analysis uses a set of “maooh rules” to guide the instantiation of predicate arguments with the referents of surface syntactic consti- tuents. The mapping rules make use of intuitions about syntactic cues for indicating semantic roles first embo- died in the notion of case [Fillmore, 681. For the applica- tion of these rules to be useful, it is essential they preserve the same semantic role interdependencies han- dled by templates. This is accomplished by making the application of the mapping rules “situation-specific,” Some mapping rules, such as “SUBJECT to AGENT,” are quite general and can apply in many situations. Other rules, such as “WITH-PP to INSTRUMENT,” are much less general, and can only apply under a set of specific cir- cumstances. 
For some verbs, in order for the WITH-PP rule to apply, the AGENT must be mentioned explicitly in the syntactic realization, as in "John broke the vase with a hammer." Checking for the mention of the AGENT disallows inappropriate applications of this rule, such as "*The vase broke with a hammer." As described below, the application of mapping rules can be constrained by the use of a predicate environment.

An example of an unconstrained mapping rule involves the object1 from R1. Object1's are similar to PATIENTS, and like PATIENTS can usually be indicated by the SUBJECT. For an unconstrained rule, the predicate environment is simply a variable, Y, as in M1.

M1: object1(X) <- SUBJECT(X) / Y

Given S1, application of this rule would result in object1(O1) being instantiated with "particle."

The mapping rule for the object2 from the "attach" example is not as general, and gives an example of how rules can be constrained. An object2 of a contact relationship can be indicated by a TO-PP, but an object2 of a support relationship cannot. In order to make the application of the rules situation-specific, the predicate environment can be partially instantiated. It contains information about the "context" of the semantic role, exemplified here by the relation name and the other arguments. The predicate environment for object2 on the right hand side of R1 is "contact(object1(O1),object2(O2))." By associating a contact predicate environment with the TO-PP mapping rule, as in the following example, the rule can be restricted to object2's which are arguments to contact predicates. This allows the term object2(O2) to be instantiated with the "string."

M2: object2(X) <- TO-PP(X) / contact(Y,object2(X))

The associated predicate environments are equivalent to Joshi-Levy Local Constraints, and as such act as filters on the possible mappings for the arguments, so that the final set of mappings arrived at is within the context-sensitive limitations of the domain [Joshi and Levy], [Palmer, 83]. Section 8 describes the implementation of the semantic processor that applies Rule M2 to the "attach" example, instantiating object2(O2) with "string." This achieves the preliminary representation "contact(object1(particle),object2(string))" mentioned in the previous section.

4. Filling gaps in semantic roles

It is generally accepted that many semantic roles, such as AGENTS and INSTRUMENTS, are syntactically optional, and do not always appear in the surface structure of a sentential unit. Just because these roles are not mentioned does not guarantee that they do not need to be filled. In "The door was opened with a key," the passivization and the presence of the INSTRUMENT "key" indicate clearly that an AGENT exists although s/he is not referred to. It is sometimes possible to deduce the referent of the AGENT from pragmatic information about the local context, as in:

How did the burglar get inside? The door was opened with a key.

For the semantic processor to perform this type of deduction it must have access to domain-dependent information in the form of inference rules, i.e., pragmatic information. Semantic role fillers that are not made explicit in the syntactic realization of a verb can sometimes be retrieved from the local context or hypothesized from general knowledge about the domain.
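The combined effect of sections 3 and 4 can be pictured with the following Python sketch: mapping rules constrained by predicate environments fill roles from syntactic constituents, and any role left unfilled is handled according to its classification. This is an illustrative reconstruction, not Palmer's Prolog implementation; the rule format, the role classifications, and the default table are assumptions made for the example.

# Illustrative sketch of constrained mapping plus gap filling.
MAPPING_RULES = [
    # (role, syntactic cue, required predicate environment; None = unconstrained)
    ("object1", "SUBJECT", None),         # cf. rule M1
    ("object2", "TO-PP", "contact"),      # cf. rule M2: contact contexts only
]
ROLE_CLASS = {"object1": "obligatory", "object2": "obligatory",
              "intermediary": "essential"}
DEFAULTS = {"intermediary": "string"}     # e.g., suspension implies a line segment

def fill_roles(predicate, roles, syntax, context):
    bindings = {}
    for role in roles:
        for r, cue, env in MAPPING_RULES:
            if r == role and cue in syntax and env in (None, predicate):
                bindings[role] = syntax[cue]   # filled by a syntactic constituent
                break
        else:
            kind = ROLE_CLASS.get(role, "optional")
            if kind == "obligatory":           # failure: derive a new set of mappings
                raise ValueError("no mapping for obligatory role " + role)
            if kind == "essential":            # deduce: context first, then default
                bindings[role] = context.get(role, DEFAULTS.get(role))
            else:
                bindings[role] = "ABSENT"      # semantically optional
    return bindings

# "A particle is attached to a string": both roles filled syntactically.
print(fill_roles("contact", ["object1", "object2"],
                 {"SUBJECT": "particle", "TO-PP": "string"}, {}))
# "The pulley is suspended from a pulley": the intermediary is defaulted.
print(fill_roles("support", ["intermediary"], {}, {}))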
There are examples of these implicit semantic role fillers in the mechanics domain. In "the pulley is suspended from a pulley," it is clear from pragmatic information about suspension that a STRING, or some type of flexible line segment, is doing the "suspending," but it is never mentioned explicitly. This sentential unit is followed by "and offset by a particle," meaning that the pulley is being counter-balanced by a particle, as in the following figure. The appropriate representation for "offset" can only be achieved if pragmatics can supply "string" as a default value in the preceding representation of "suspend." The way this is accomplished is explained in more detail below.

[Figure: a pulley suspended from another pulley and offset by a particle; captions "pulley is suspended from a pulley" and "and offset by a particle"]

Inference-driven semantic analysis makes a distinction between semantic roles that are syntactically obligatory or optional, and semantic roles that are semantically obligatory or optional. Traditionally, syntactically obligatory semantic roles have to occur in a syntactic realization of the sentential unit and syntactically optional roles do not. Roles that are considered to be syntactically optional but semantically obligatory are termed essential roles, and the classification of semantic roles as semantically optional, essential or obligatory is used to constrain the application of pragmatic inference rules as follows: semantically optional roles are simply marked as "absent," essential roles are filled by deduction, and unfilled obligatory roles cause failure, resulting in the derivation of a new set of mappings.

Fillers for essential roles can be deduced in any of three ways. (1) There can be known default values associated with the role. (2) A possible filler can be hypothesized from general world knowledge. (Default values are really just short cuts to hypothesizing fillers.) (3) The filler can be supplied by context as in the burglar example.

In this domain, intermediaries (similar to INSTRUMENTS) are considered to be essential roles, so when an intermediary is not mentioned explicitly, as in "a pulley is suspended from another pulley," the processor allows pragmatics to supply a "string" as a default value. The inference rules associated with "offset," in trying to represent a "counter-balancing" event, need to know what the "pulley" has been supported by in order to copy the support relationship. Since the "supporter" of the pulley, the "string," was filled in during the analysis of "a pulley is suspended from another pulley," local context can now supply that "string" as the supporter of the "particle."

This section and the preceding section have explained how semantic roles can be filled by syntactic constituents or by pragmatic deduction, tasks 2 and 3. These two tasks are simply two different methods of finding instantiations for the predicate arguments, and can be performed as part of the application of rule R1. The ability to instantiate arguments by syntactic constituents or by pragmatic deduction is essential for the correct integration of the sentence representation within the current model of the scene being described, as explained in the following sections.

5. Inferring relationships among semantic roles

In the template approach, the canonical level lists the semantic roles associated with the verb, but does not make the relationships between these semantic roles explicit. What does it mean for John to be the AGENT of "break" and for the vase to be the PATIENT?
In going from the canonical level to the predicate level, inference rules must be applied to spell out these relationships. This is equivalent to task 4, expanding the verb representation. For inference-driven semantic analysis, the application of further inference rules such as R2 makes the relationships between the semantic roles explicit. The initial inference rule, R1, differed from the canonical level by not including all of the semantic roles that could be included in the canonical level. Now the arguments of R2 provide the semantic roles that were not included in R1. An example sentence containing several of the optional semantic roles found in this rule is, "The string has a weight attached at its left end." The following rule, R2, can be read as "If a location point on an object, locpt(L1), and a location point on another object, locpt(L2), are at the same place, then the objects are in contact with each other." Location points are classified as essential roles, so that if they are not filled by syntactic constituents they have to be deduced.

R2: contact(object1(O1),object2(O2)) <-
        locpt(locpt(L1),object1(O1)),
        locpt(locpt(L2),object2(O2)),
        sameplace(locpt(L1),locpt(L2))
In our example, mapping rules Ml and M2 achieved the following instantiations for the arguments of Rl, where the sentence being analyzed was, “A particle is attached to a string at its right end.” contact(objectl(particle),object2(string)) The R2 is applied to further expand the representation, and the following predicates are produced: locpt(locpt(particle),objectl(particle)) locpt(locpt(rtend),object2(string)) sameplace(locpt(particle),locpt(rtend)) Another mapping rule gets applied to fill in the location point of the string with “r-tend,” and pragmatics decides that a particle, being the shape of a point, can be its own location point. These inferences correspond to an appropriate semantic representation of the sentence, and are produced directly from the original set of syn- tactic constituents by applying the aforementioned rules. This process simultaneously performs the six tasks outlined in the introduction. In performing these tasks, it is not necessary to go through levels of representation corresponding to templates and case- frames, since alI of the information normally contained at these levels is now contained in the mapping rules and the inference rules themselves. 8. Implementation The semantic processor draws inferences and instantiates arguments by imposing a procedural interpretation on the inference rules very similarly to the way that Prolog imposes a procedural interpretation on Horn clauses. The verb inference rules are in fact Horn clauses, and the arguments to the predicates are terms that consist of function symbols with one argu- ment, The procedural interpretation drives the applica- tion of the inference rules, and allows the function sym- bols to be “evaluated” as a means of instantiating the arguments. The predicate environments associated with the constraints on instantiation correspond to possible snapshots of the procedural interpetation of the rules. These allow the same argument to be constrained differently depending on the instantations of the other arguments or on the particular predicate. The inferences that are drawn in this way correspond to the set of predi- cates that make up the semantic representation of the clause, ACKNOWLEDGEMENTS I would especially like to thank Jim Weiner, Bonnie Webber and Barbara Grosz for their many useful insights into problems in semantic analysis, their comments on this paper, and their encouragement and support. References Bundy, et-al, “Solving Mechanics Problems Using Meta-Level Inference, Expert Systems in the Micro- Electronic Age,” Michie, D.(ed), Edinburgh University Press, Edinburgh, UK, 1979 Fillmore, C., The case for case, Universals in Linguistic Theory, Bach and Harms (eds.) New York; Holt, Rinehart and Winston, pp. l-88. 1980. Joshi, Aravind K., and Levy, Leon S., an expanded version of “Phrase Structure Trees Bear More Fruit Than You Would Have Thought,” 18th Annual Meeting of the Associ- ation for Computational Linguistics,” University of Pennsylvania, Philadelphia, PA, June, 1980. Kowalski, Robert, Logic for Problem-Solving, North- Holland Pub. Co., 1979. Levin, B. “Predicate-Argument Structures in English,” Master’s Thesis Proposal, MIT, 1977. Levin, B. “Instrumental With and the Control Relation in English,” MIT Master’s Thesis, MIT AI Memo 552, 1979. 1976. Palmer, M., “A case for Rule-Friven Semantic Process- ing,” ACL Conference Proceedings, Stanford University, 1981 Palmer,M., “Driving Semantics for a Limited Domain,” PhD Dissertation, Universtiy of Edinburgh, pending, 1983 Pereira, Fernando C.N. 
Pereira, Fernando C.N. and David H.D. Warren, Definite clause grammars for language analysis - a survey of the formalism and a comparison with augmented transition networks, Artificial Intelligence 13:3 (1980), 231-278.

Schank, Roger C. (ed.), Conceptual Information Processing, Amsterdam: North Holland, 1975.

Simmons, R.F., Semantic Networks: Their Computation and Use for Understanding English Sentences, in Computer Models of Thought and Language, Schank and Colby (eds.), San Francisco: W.H. Freeman and Co., 1973.

Woods, William A., Progress in natural language understanding: An application to lunar geology, AFIPS Conference Proceedings 42 (1973), 441-450.

Woods, William A., Procedural Semantics as a Theory of Meaning, in Elements of Discourse Understanding, Joshi, Sag, and Webber (eds.), Cambridge U. Press, 1981, pp. 300-334.
1983
23
215
INTERACTIVE SCRIPT INSTANTIATION

Michael J. Pazzani
The MITRE Corporation
Bedford, MA 01730

ABSTRACT

The KNOBS [ENGELMAN 80] planning system is an experimental expert system which assists a user by instantiating a stereotypical solution to his problem. SNUKA, the natural language understanding component of KNOBS, can engage in a dialog with the user to allow him to enter components of a plan or to ask questions about the contents of a database which describes the planning world. User input is processed with respect to several knowledge sources including word definitions, scripts which describe the relationships among the scenes of the problem solution, and four production system rule bases which determine the proper database access for answering questions, infer missing meaning elements, describe how to conduct a conversation, and monitor the topic of the conversation. SNUKA differs from GUS [BOBROW 77], a dialog system similar to SNUKA in its goals, in its use of a script to guide the conversation, interpret indirect answers to questions, determine the referents of nominals, perform inferences to answer the user's questions, and decide upon the order of asking questions of the user to maintain a coherent conversation. SNUKA differs from other script-based language understanders such as SAM [CULLINGFORD 78] and FRUMP [DEJONG 79] in its role as a conversational participant instead of a story understander.

I INTRODUCTION

In this paper, we wish to illustrate the knowledge sources necessary to participate in a type of natural language dialog. The type of dialog we wish to consider occurs when a person whom we will call "the planner" asks a person (or computer) whom we will call "the servant" to perform or arrange for the performance of a commonly occurring activity. Conversations of this sort are often held with travel agents, secretaries, military aides, building contractors, stockbrokers, etc. Most importantly, as the fields of knowledge based expert systems and natural language understanding emerge, these conversations will occur with a computer playing the part of the servant.

The objective of this type of conversation is for the planner to describe to the servant the type of activity to be carried out, and to name the particular objects and persons participating in this activity. The servant is often capable of criticizing the planner's plan, or answering questions about objects and the roles they can play in the plan. We will call the processing the servant does in this type of conversation interactive script instantiation.

We utilize scripts as the knowledge source for participating in these stereotypical conversations to contain and control the inference processes needed during understanding. Three situations which require inference during dialog understanding have been identified: the understanding of indirect answers to questions, the memory search required to answer questions, and the recognition of intentions from questions or requests. In addition, the script can serve as the model of a plan which is being generated. The model is used to help conduct a coherent conversation. We have also introduced the notion of conversation rules to help the servant decide how to generate responses and take initiative in the conversation. We have implemented our ideas in a computer program called SNUKA, the natural language dialog component of the KNOBS system.

II APPLICATION

The KNOBS system supports a planner by checking the consistency of plan components, enumerating or ranking possible choices for plan components, or automatically generating a complete plan. KNOBS embodies a model of stereotypical planning, where a generic solution to the plan is known and the planner's task is to instantiate an individual plan solution with a consistent set of plan variable bindings. (The plan variables are represented as slots of FRL frames [ROBERTS 77].) The current application domains of KNOBS are the planning of an Air Force tactical air mission, the planning of a Naval "show the flag" mission and the scheduling of crew activities of the NASA space shuttle. A sample conversation with KNOBS demonstrates both its planning and its conversational capabilities:
II APPLICATION The KNOBS system supports a planner by checking the consistency of plan components, enumerating or ranking possible choices for plan components, or automatically generating a complete plan. KNOBS embodies a model of stereotypical planning, where a generic solution to the plan is known and the planner's task is to instantiate an individual plan solution with a consistent set of plan variable bindings. (The plan variables are represented as slots of FRL frames [ROBERTS 771.) The current application domains of KNOBS are the planning of an Air Force tactical air mission, the planning of a Naval "show the flag" mission and the scheduling of crew activities of the NASA space shuttle. A sample conversation with KNOBS demonstrates both its planning and its conversational capabilities: 320 From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. 1. USER: 2. KNOBS: 3. USER: 4. KNOBS: 5. KNOBS: 6. USER: 7. KNOBS: 8. USER: 9. KNOBS: 10. USER: 11. KNOBS: 12. USER: 13. KNOBS: 14. USER: 15. KNOBS: 16. USER: 17. KNOBS: I want to send 4 aircraft from the 707th Fighter Wing to strike Boston between 1140 and noon. Which aircraft do you want to send? What aircraft can the wing supply? The 707th Fighter Wing has F-4s. Would you like to use F-4s for the aircraft? No, F-16s from the 606th Fighter Wing. The 606th Fighter Wing does not contain F-16s. Where are there F-16s? There are F-16s at Kennedy and LaGuardia. What wings at LaGuardia have them? The 808th Fighter Wing and the 909th Fighter Wing have F-16s. Fly them out of the 808th Fighter Wing at LaGuardia. What weapon system should the F-16s carry? Which are acceptable? The acceptable weapon systems are Dl, D2, and D4. Hit the target with D4 and plan the rest. The time over target is 1150. The time of departure is 1120. The call sign is PEACE 7. The transponder code is 5351. Sentences (7), (15) and (17) respectively illustrate the constraint checking, enumeration, and automatic planning aspects of the KNOBS system. (1) is an example of entering a range as a restriction for a plan variable value which is later refined by KNOBS. (In addition to dialog-driven planning, the KNOBS system provides a menu-based interface with these same capabilities.) SNUKA uses APE-II [PAZZANI 831, a conceptual dependency (CD) [SCHANK 721 parser, to produce a meaning representation of user input by consulting expectations from word definitions, expectations from scripts, expectations activated by system generated questions, and a database of object primitives [LEHNERT 78b]. Once the meaning of the user's utterance has been represented, an appropriate response is produced by means of a set of conversational rules. Simple rules state such things as that questions should be answered; more complex rules monitor the topics of the conversation and activate or deactivate expectations when the topic changes. A question is answered by invoking productions which are composed of three facets: a test, which is a CD pattern which typically binds variables to the major nominals in a question conceptualization; an action, which utilizes the referents of these nominals to produce a database query; and a response which is a template English sentence in which the answer and referents are inserted. (SNUKA currently does not include a natural language generation component.) This approach to question answering is discussed in further detail in [PAZZANI 831. 
The remainder of this paper will first describe some prior related work, and then discuss the method of understanding the user's commands and replies as well as the method of making inferences when required to interpret a user's query.

III RELATED WORK

GUS is a frame-driven dialog system which performs the role of a travel agent. It operates by instantiating a frame and sequentially finding fillers for the slots of this frame, directed by a specification found in its prototype. When this specification instructs GUS to ask the client a question, GUS activates an expectation which is a skeletal sentence in which the client's response can be inserted to handle ellipsis by forming a syntactically complete sentence. The client's response is expected to specify a value for this slot, although the client can gain some initiative by asking a question or supplying values for other (or additional) slots.

IV SCRIPTS

A script describes a stereotypical sequence of actions which occur in a context. This knowledge can be used to help answer the user's questions, to extract the bindings of plan variables (script roles) from the user's utterances, to determine the questions which must be asked of the user and the order in which they are asked, and to assist the parsing process in selecting word senses and pronominal referents when analyzing the user's input.

A. Inference

The question answering component of SNUKA can use the inference patterns of the script to respond to an extended range of questions. Because the KNOBS databases are static descriptions of the attributes of objects, question answering rules typically refer to states. Many questions, however, refer to actions which are enabled by states described in the database. If a question answering rule cannot be found to answer a question, and the question refers to a scene in an active script, causal inferences are used to find an answerable question which can be constructed as a state or action implied by the original question.

For example, in (3), the user asks a question for which there is no question answering rule.

3. USER: What aircraft can the wing supply?

Sentence (3) is then identified as a scene in $OCA, the Air Force mission script: the transferring of control of aircraft from its wing. The action referred to by this question is enabled by a state, that the wing contains the aircraft. A new "question" is constructed by substituting the script role bindings into this state: "What aircraft does the wing contain?". The script patterns required to make this inference are illustrated in Figure 1.

(DEF-SCRIPT-PATTERN
  NAME WING-SUPPLY-AIRCRAFT
  SCRIPT $OCA
  PATTERN (*ATRANS* OBJECT &OCA:AIRCRAFT FROM &OCA:WING)
  ENABLED-BY WING-HAVE-AIRCRAFT
  BEFORE AIRCRAFT-FLY-TO-TARGET)

(DEF-SCRIPT-PATTERN
  NAME WING-HAVE-AIRCRAFT
  SCRIPT $OCA
  PATTERN (*EQUIV* CONA &OCA:AIRCRAFT CONB (PART OF &OCA:WING))
  ENABLES WING-SUPPLY-AIRCRAFT)

Figure 1. Definitions of Script Patterns

These script patterns are used in responding to question (3). The first step is to recognize that (3) refers to the script scene WING-SUPPLY-AIRCRAFT of the $OCA script. This is accomplished by matching script patterns from the currently active script against the question. (The $OCA script was activated by (1).) Once the scene referred to by the question is identified, the causal links of the script are traversed to find a scene which enables the scene referred to by the question. In this case, the scene WING-HAVE-AIRCRAFT is the enabling scene. This scene is instantiated with the bindings of script variables found in the question.
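In outline, the traversal is simple enough to sketch in Prolog (a hypothetical re-rendering of Figure 1; SNUKA itself is Lisp). Because the scene patterns share their role variables, following the ENABLED-BY link carries the question's bindings into the enabling state:

    enabled_by(wing_supply_aircraft, wing_have_aircraft).

    % scene_pattern(Scene, AircraftRole, WingRole, Conceptualization)
    scene_pattern(wing_supply_aircraft, A, W, atrans(A, from(W))).
    scene_pattern(wing_have_aircraft,   A, W, part_of(A, W)).

    % If no QA rule answers Q directly, restate Q as the state that
    % enables the scene Q refers to, preserving the role bindings.
    reformulate(Q, NewQ) :-
        scene_pattern(Scene, A, W, Q),
        enabled_by(Scene, Enabler),
        scene_pattern(Enabler, A, W, NewQ).

    % ?- reformulate(atrans(X, from(wing707)), Q2).
    % Q2 = part_of(X, wing707).   i.e. "What aircraft does the wing contain?"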
The result of this instantiation is a question conceptualization which can be answered by a question-answering production.

In some instances, more than one scene in the script enables a scene referred to by the user's question. In this case, the answer to the question can be found by intersecting the answers to questions constructed from each of the enabling scenes. In the $OCA script, this occurs in answering a question such as "What aircraft at Logan can strike the target?". The states which enable this are that the aircraft be able to reach the target and that the aircraft be suitable for the type of target. In effect, the question that is answered is "What aircraft at Logan which can reach the target from Logan are suitable for the target?". Because scripts contain the necessary inferences, the search for all enabling conditions is very efficient. In contrast, answering this question by using a deductive database retrieval, such as that used by ACE [CULLINGFORD 82], may require finding a large number of enabling conditions, many of which are not germane to this problem (e.g., that the aircraft have landing gear).

Another type of inference must be made to fill in missing meaning elements when the meaning of an utterance is not complete. These "conceptual completion" inferences are expressed as rules organized by the role of the meaning element which they are intended to explicate. For example, when processing the question "What aircraft at Kennedy can reach the target?", the source of the physical transfer is not explicitly specified, but must be inferred from the initial location of the object. The initial meaning representation for this question is displayed in Figure 2a. Note that the FROM role is not filled. Conceptual-completion inferences are run only when required, i.e., when needed by the conceptual pattern matcher to enable a question-answering pattern, a script pattern, or even another conceptual completion pattern to match successfully. Figure 2b and Figure 2c illustrate the question production and the conceptual completion inference pattern needed to answer the above question. The question answering production in Figure 2b would be sufficient to answer the question "What aircraft at Kennedy can reach the target from Kennedy?". If there is no filler of the FROM role of a question, it can be inferred by the conceptual completion inference in Figure 2c. This inference binds the pattern variable &OBJECT to the filler of the OBJECT role and executes the function FIND-LOCATION, which can check the database for the known location of the referent of &OBJECT or, as in this example, examine the LOCATION role of &OBJECT.

(*PTRANS* OBJECT (AIRCRAFT LOCATION (KENNEDY) IS-A (*?*))
          TO (TARGET REF (*DEF*))
          FROM (NIL)
          MODE (*POTENTIAL*))

Figure 2a. The meaning representation of "What aircraft at Kennedy can reach the target".

(DEF-QUESTION-PRODUCTION
  SCRIPT $OCA
  PATTERN (*PTRANS* OBJECT &AIRCRAFT FROM &SOURCE TO &DESTINATION)
  Q-FOCUS (OBJECT IS-A)
  ACTION <lisp code to compute answer>
  RESPONSE <lisp code to print answer>)

Figure 2b. A Question Answering Production.

(DEF-COMPLETION-INFERENCE
  SCRIPT DEFAULT
  PATTERN (*PTRANS* OBJECT &OBJECT)
  INFERENCE (FROM)
  ACTION (FIND-LOCATION &OBJECT))

Figure 2c. Conceptual Completion Pattern.
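A conceptual-completion inference like Figure 2c can likewise be sketched in Prolog. Again this is only a hypothetical illustration with invented names; FIND-LOCATION is modelled by a simple lookup:

    location(aircraft_f16, kennedy).      % toy database fact

    % Fill the FROM role of a *PTRANS* only when it is missing.
    complete(ptrans(Obj, to(T), from(F)), ptrans(Obj, to(T), from(F))) :-
        nonvar(F), !.                     % already specified: no inference
    complete(ptrans(Obj, to(T), from(F)), ptrans(Obj, to(T), from(F))) :-
        location(Obj, F).                 % FIND-LOCATION analogue

    % ?- complete(ptrans(aircraft_f16, to(target), from(_)), C).
    % C = ptrans(aircraft_f16, to(target), from(kennedy)).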
B. Understanding Requests

It is also necessary to reference the domain knowledge represented in scripts while interpreting the user's commands. For example, (1) refers to two scenes of the $OCA script, and yields bindings for several script roles:

1. USER: I want to send 4 aircraft from the 707th Fighter Wing to strike Boston between 1140 and noon.

TARGET is bound to Boston, AIRCRAFT-NUMBER to 4, WING to the 707th Fighter Wing, and TIME-OVER-TARGET to the time range from 1140 to 1200. This sentence creates an instance of the $OCA script. In SNUKA, when this occurs the script role bindings are passed to KNOBS to be checked for consistency.

The process of identifying the script roles mentioned in an utterance is a method of understanding the user's indirect answers to questions. In (13), KNOBS asks the user a question which he answers in the first part of (16).

13. KNOBS: What weapon system should the F-16s carry?
16. USER: Hit the target with D4 ...

This question was generated to find a value for the WEAPON-SYSTEM role of the script. When this question is asked, an expectation is activated to assist in the understanding of an elided response as well as in the selection of intended word senses. (This type of expectation is discussed in further detail in section IV D.) This expectation consists of a template meaning structure derived from the script scene referred to by the question. The indirect answer is understood by noticing that the user has accomplished the binding of the script role by means other than those expected.

C. Question Ordering

An important part of participating in a coherent conversation is the selection of a reasonable order for questions to be asked. The question ordering approach used by SNUKA is similar to that of DM in that there are usually several potential questions which can be asked. SNUKA differs from DM, however, in that its rules, rather than being domain specific, are generic rules operating on a knowledge structure which represents the domain. The rules used by SNUKA encode the observations of some research [GROSZ 77] [HIRST 81] that the structure of a conversation is reflected by the structure of the plans and goals of the participants. Those questions which clarify a previous utterance or establish the goals of the user are considered most important, followed by those which continue a topic brought up by the user, followed by those questions which refer to the scene which temporally follows the scene referenced by the last utterance.

A question which SNUKA asks is considered to continue a topic when the user's last utterance refers to a scene of a script, some script roles are not specified, and the question asks for a value for one of these roles. For example, if initially given a request such as "Send 4 F-4C's to the target", a question which is considered to continue the topic is "From which airbase should the F-4C's fly?".

The above discussion of question ordering focused on the topic of the question to ask. In addition, the question ordering rules of SNUKA can help to choose the form of the question. For example, (12) is a request which refers to the script scene of the departing of the aircraft for the target. The topic of weapon systems for the next question (13) is decided upon because the next script scene is the carrying of a weapon system by the aircraft. This decision also yields the meaning representation of the question to be asked.

D. Ellipsis

To interpret ellipsis, SNUKA sets up expectations when it produces a question. These differ from the expectations of GUS in that they encode the meaning of the anticipated reply, instead of its lexical expression. This enables SNUKA to accept a wider class of ellipses. For example, consider (5) and (6):

5. KNOBS: Would you like to use F-4s for the aircraft?
6. USER: No, F-16s from the 606th Fighter Wing.

The ellipsis in (6) cannot be inserted into a template English sentence, because additional information must replace part of the expected response. This is an example of both replacement and expansion ellipsis [WEISCHEDEL 82]. The expectation activated when question (5) is asked is shown in Figure 3. This expectation sets up a context for understanding several types of ellipsis. The simplest response anticipated is "Yes." or "No.", and SNUKA will expand these into complete conceptualizations which could be expressed as "I want F-4 to be used for the aircraft of mission OCA1001." or "I do not want F-4 to be used for the aircraft of mission OCA1001.". This expansion fills a role of the concept found in the CONTEXT facet of the expectation with the concept produced by the user's actual reply. The role to be filled (in this example, it is the MODE role) is stored in the RESPONSE-FOCUS facet of the expectation. In addition, this expectation helps in forming a complete conceptualization in the case that the user replies "Use F-4s.", "I want to use F-4s.", "I want F-4s to be used." or "I'd like to employ F-4s.". A wide variety of replies can be understood by the same expectation because SNUKA's expectations are based on the anticipated meaning of the reply, as opposed to a syntactic or lexical representation of this meaning. The processing of (6) requires the same type of expansion of an elided response to a full conceptualization, with the added complexity of the substitution of a subconcept of the expected response with a concept from the actual response.

(EXPECTATION
  CONVERSATION-FOCUS (AIRCRAFT)
  CONTEXT (*GOAL* ACTOR (*USER*)
                  GOAL (*USE* OBJECT (F-4)
                              FOR (&OCA:AIRCRAFT)
                              OF (OCA1001)))
  RESPONSE-FOCUS (MODE))

Figure 3. A Response Expectation.

SNUKA must monitor the expectations activated when it asks a question in order to deactivate those that are no longer appropriate. In the simplest case, an expectation is deactivated when it has been satisfied. The expectation can be satisfied when it is used in the expansion of an ellipsis or when the user replies with a full conceptualization which is equivalent to that which would be produced by the expansion of an ellipsis. In the case that the CONTEXT of the expectation is a GOAL, the expectation can be deactivated when the goal is fulfilled by any means. The expectation in Figure 3 is satisfied when an aircraft is selected for the mission. This could be accomplished by a command posed in English, or by a value chosen from a menu or inserted into a form.

Expectations must also be deactivated when the topic of conversation changes, i.e., the user ignores SNUKA's question. The determining of the topic of an utterance has not been earnestly approached in this work. Instead, SNUKA uses a set of simple heuristics to determine if the user has changed the topic after it asks a question. One such rule states that the topic has not changed if a concept in the user's utterance is a member of the same class as the concept expected to be the answer to a question. This rule uses the CONVERSATION-FOCUS of the expectation.

SNUKA utilizes two kinds of expectations to interpret ellipses. The expectation in Figure 3 is called a "meaning template" expectation because it can be used as a template in which the meaning of the user's reply can be inserted, as well as a source for fillers to complete the user's reply.
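The expansion step can be made concrete with a small Prolog analogue of Figure 3 (hypothetical names; the unfilled RESPONSE-FOCUS role is simply an unbound variable inside the CONTEXT term):

    % expectation(ContextWithHole, Hole): Hole marks the MODE role.
    expectation(goal(user, use(f4, for(aircraft, oca1001)), Mode), Mode).

    % An elided reply is expanded by unifying its concept with the hole.
    expand(ReplyConcept, Full) :-
        expectation(Full, Hole),
        Hole = ReplyConcept.

    % ?- expand(negative, C).
    % C = goal(user, use(f4, for(aircraft, oca1001)), negative).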
A weaker form of expectation is a "meaning completion" expectation. This type of expectation can only be used to complete the meaning of a user's reply. During the course of a conversation, a meaning template expectation can become a meaning completion expectation. For example, when SNUKA asks question (13) it sets up a meaning template expectation to handle an elided reply such as "D4." or "The lightest acceptable weapon system." instead of (14).

13. KNOBS: What weapon system should the F-16s carry?
14. USER: Which are acceptable?
15. KNOBS: The acceptable weapon systems are D1, D2, and D4.
16. USER: Hit the target with D4 and plan the rest.

When the user replies with a question (14), this expectation is changed to a meaning completion expectation. (This is accomplished by removing the RESPONSE-FOCUS facet of the expectation.) This is done because it is not reasonable, for example, for the user to enter "D4." instead of (16). However, if the user enters "Carry D4." instead of (16), it is necessary to have the CONTEXT of the expectation available for completing the meaning of this ellipsis.

E. Conversational Rules

The top level control structure of the response module of SNUKA is implemented as a set of demons which monitor the assignment of values to state variables. Rieger calls this type of control structure spontaneous computation [RIEGER 78]. These demons are rules which describe how to conduct a conversation. The process of responding to the user's utterance is initialized by setting the variable *CONCEPT* to the conceptualization produced by the parser. This can trigger any of a number of permanent or temporary demons. In the case that the *CONCEPT* is a question with a complete conceptualization, a demon is triggered which invokes the question answering program and sets the state variable *ANSWER* to the answer conceptualization of the question, and the variable *UNDERSTOOD* to the question conceptualization.

The handling of ellipsis was added to SNUKA by activating temporary rules which monitor the setting of the variable *CONCEPT* and expand an incomplete concept (i.e., one produced when parsing ellipsis) to a complete concept before the *CONCEPT* is processed for question answering or responding to requests. These temporary rules are activated when SNUKA generates a question and deactivated by other temporary rules which monitor ellipsis processing or monitor *UNDERSTOOD* for changes in topic.

In addition to instructing SNUKA on how to respond to the user's input, there are rules which allow SNUKA to take some initiative to assist the user in accomplishing his goals. One such rule instructs SNUKA to ask the user if he would like to use a particular value for a script role, in the case that SNUKA asked the user to specify a value for this role and he replied with a question whose answer is a unique acceptable value for this role. Question (5) is generated by this rule (see Figure 4).

(DEF-CONVERSE-RULE
  NAME MONITOR-ANSWER
  VARIABLE *UNDERSTOOD*
  TEST (AND (IS-QUESTION? $*CONCEPT*)
            (UNIQUE? $*ANSWER*)
            (EXPECTED-QUESTION? $*CONCEPT* $*ANSWER* $*EXPECTED-ANSWER*)
            (ACCEPTABLE? $*ANSWER* $*SCRIPT-ROLE* $*SCRIPT*))
  ACTION (ASK-USE-Y/N $*ANSWER* $*SCRIPT-ROLE* $*SCRIPT*))

Figure 4. A Conversation Rule.
The rule in Figure 4 would be activated by SNUKA after producing question (2). In addition, SNUKA sets the state variable *SCRIPT* to the current script, *SCRIPT-ROLE* to the script role mentioned in question (2), and *EXPECTED-ANSWER* to a meaning template expectation similar to the one in Figure 3. (SNUKA also sets up a demon which uses *EXPECTED-ANSWER* to interpret a possible elided response.) The demon MONITOR-ANSWER is executed after answering question (3), when *UNDERSTOOD* is set to the question conceptualization. In this example, the test evaluates to TRUE, and the action procedure, ASK-USE-Y/N, produces (5) and sets up a context to understand the user's reply.

2. KNOBS: Which aircraft do you want to send?
3. USER: What aircraft can the wing supply?
4. KNOBS: The 707th Fighter Wing has F-4s.
5. KNOBS: Would you like to use F-4s for the aircraft?

The user's intentions must also be considered when generating questions to ask the user. The following hypothetical conversation illustrates responses which must make use of the user's intentions:

101. USER: I want to strike Boston at 10:30 with 4 aircraft carrying D2.
102. KNOBS: Which aircraft do you want to send?
103. USER: What aircraft does Kennedy have?
104. KNOBS: The aircraft at Kennedy which can carry D2 are F-16's.
105. KNOBS: Would you like to send 4 F-16's from Kennedy?

If question (103) were taken literally, an appropriate response would be a listing of all the aircraft at Kennedy, including those which are not appropriate for this type of mission, those which cannot carry the weapon system D2, and those which are assigned to other missions. By recognizing the user's plan, a more intelligent response such as (104) can be produced. In addition, question (105) can be generated to ask the user to confirm both an aircraft and an airbase to supply the aircraft. The conversation rule illustrated in Figure 4 would not be able to ask about the airbase because it does not utilize the user's intentions.

V FUTURE WORK

Future extensions include the incorporation of a natural language generation program and the incorporation of graphical input and output devices. Some questions might best be answered by a table or graph instead of an English reply. A generalization of the conversational rule in Figure 4 could display a menu for the user to select his choice in the case that he asks a question whose answer is a group of objects. This would be an appropriate response to (14).

Allen [ALLEN 82] has shown how possible plans can be inferred from an utterance. This inference process chains forward from the logical form of an utterance and backward from a set of possible goals. In conversation about stereotypical situations, scripts can provide an efficient mechanism to infer plans and goals from utterances. To be fully understood, an utterance must be related to the context in which it was produced. The identification of the script scene and the script roles referenced by an utterance can help to recognize the plans and goals it was intended to serve. The conditions which a helpful, intelligent response must meet can be discovered as part of the memory process necessary to understand an utterance. In this manner, understanding an utterance can be considered an opportunistic process [McGUIRE 81].

The most important aspect of intelligently participating in a conversation is the recognition of the goals and plans of the other participant. These goals and plans must be inferred from the participant's utterances. Once the goals and plans are known, they must be taken into consideration when generating responses. In SNUKA, we expect to make use of the user's intentions when answering his questions and when responding to requests. For example, question (103) refers to the script scene AIRCRAFT-AT-AIRBASE with the AIRBASE role identified as Kennedy.
By inferring that the user intends to use Kennedy for the AIRBASE, and an aircraft at Kennedy for the AIRCRAFT in his plan, a better answer can be produced for his question. The aircraft which can be used are those which are compatible with the proposed binding of the AIRBASE role in addition to the bindings of other script roles established by (101). The conditions which potential bindings for the aircraft must meet are represented by the enabling state of the script.

VI CONCLUSION

SNUKA integrates a number of knowledge sources to conduct a conversation in a stereotypical domain. The most important of these knowledge structures, the script, is also the one which most limits its applicability (to conversations about stereotypical situations). However, this script-based approach is most appropriate as an interface to the KNOBS system, which assists a planner by instantiating a stereotypical solution to his problem. SNUKA has demonstrated a method of conducting a conversation utilizing the domain knowledge represented in scripts, knowledge which had previously been applied to story understanding.

ACKNOWLEDGMENTS

This work was supported by USAF Electronics System Division under Air Force contract F19628-82-C-0001 and monitored by the Rome Air Development Center. Special thanks are due Sharon Walter and Dr. Northrup Fowler, III. I would also like to thank Carl Engelman, Bud Frawley, Richard Brown, Frank Jernigan, and Max Bacon for their comments on this work. Captain Louis Albino provided valuable assistance during the development of SNUKA.

REFERENCES

[ALLEN 82] Allen, J., Frisch, A., and Litman, D., "ARGOT: The Rochester Dialogue System", Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, 1982.

[BOBROW 77] Bobrow, D.G., Kaplan, R., Kay, M., Norman, D., Thompson, H.S., and Winograd, T., "GUS, a Frame-driven Dialog System", Artificial Intelligence, 8(2), April 1977.

[CHARNIAK 80] Charniak, E., Riesbeck, C., and McDermott, D., Artificial Intelligence Programming, Erlbaum Press, Hillsdale, NJ, 1980.

[CULLINGFORD 78] Cullingford, R., "Script Application: Computer Understanding of Newspaper Stories", Research Report 116, Department of Computer Science, Yale University, 1978.

[CULLINGFORD 82] Cullingford, R., "ACE: An Academic Counseling Experiment", EE&CS Department TR-82-12A, University of Connecticut, 1982.

[DEJONG 79] DeJong, G., "Skimming Stories in Real Time: An Experiment in Integrated Understanding", Research Report 158, Department of Computer Science, Yale University, 1979.

[ENGELMAN 80] Engelman, C., Scarl, E., and Berg, C., "Interactive Frame Instantiation", Proceedings of the First Annual Conference on Artificial Intelligence, Stanford, 1980.

[GROSZ 77] Grosz, B., "The Representation and Use of Focus in Dialog Understanding", PhD dissertation, University of California at Berkeley, 1977.

[HIRST 81] Hirst, G., "Discourse-oriented Anaphora Resolution in Natural Language Understanding: A Review", American Journal of Computational Linguistics, 7(2), June 1981.

[LEHNERT 78a] Lehnert, W., The Process of Question Answering, Lawrence Erlbaum Associates, Hillsdale, NJ, 1978.

[LEHNERT 78b] Lehnert, W., "Representing Physical Objects in Memory", Research Report, Department of Computer Science, Yale University, 1978.

[McGUIRE 81] McGuire, R., Birnbaum, L., and Flowers, M., "Opportunistic Processing in Arguments", Proceedings of the Seventh IJCAI, Vancouver, B.C., 1981.
[PAZZANI 83] Pazzani, M., and Engelman, C., "Knowledge Based Question Answering", Proceedings of the Conference on Applied Natural Language Processing, Santa Monica, 1983.

[RIEGER 78] Rieger, C., "Spontaneous Computation in Cognitive Models", Cognitive Science, 1(3), 1978.

[ROBERTS 77] Roberts, R., and Goldstein, I., "The FRL Manual", MIT AI Lab Memo 409, 1977.

[SCHANK 72] Schank, R., "Conceptual Dependency: A Theory of Natural Language Understanding", Cognitive Psychology, 3(4), 1972.

[SCHANK 77] Schank, R., and Abelson, R., Scripts, Plans, Goals, and Understanding, Lawrence Erlbaum Associates, Hillsdale, NJ, 1977.

[STEINBERG 80] Steinberg, L., "Question Ordering in Mixed Initiative Program Specification Dialog", Proceedings of the First Annual Conference on Artificial Intelligence, Stanford, 1980.

[WEISCHEDEL 82] Weischedel, R., and Sondheimer, N., "An Improved Heuristic for Ellipsis Processing", Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, Toronto, 1982.
DETERMINISTIC AND BOTTOM-UP PARSING IN PROLOG

Edward P. Stabler, Jr.
University of Western Ontario
London, Canada

ABSTRACT

It is well known that top-down backtracking context free parsers are easy to write in Prolog, and that these parsers can be extended to give them the power of ATN's. This report shows that a number of other familiar parser designs can be very naturally implemented in Prolog. The top-down parsers can easily be constrained to do deterministic parsing of LL(k) languages. Bottom-up backtrack parsers can also be elegantly implemented and similarly constrained to do deterministic LR(k) parsing. Very natural extensions of these LR(k) parser designs suffice for deterministic parsing of natural languages of the sort carried out by the Marcus (1980) parser.

I INTRODUCTION

Pereira and Warren (1980) have shown that Prolog provides facilities for writing top-down backtracking context free parsers.* For example, a simple context free grammar like (CFG1) is easily converted into the Prolog code for the corresponding parser (CFP1):

(CFG1)
s --> np vp
np --> det n
np --> det n rel
rel --> comp s
vp --> v adj
det --> your
n --> claim
comp --> that
v --> is
adj --> funny

(CFP1)
s([s,NP,VP],P0,P):-np(NP,P0,P1),vp(VP,P1,P).
np([np,Det,N],P0,P):-det(Det,P0,P1),n(N,P1,P).
np([np,Det,N,Rel],P0,P):-det(Det,P0,P1),n(N,P1,P2),rel(Rel,P2,P).
rel([rel,Comp,S],P0,P):-comp(Comp,P0,P1),s(S,P1,P).
vp([vp,V,Adj],P0,P):-v(V,P0,P1),adj(Adj,P1,P).
det([det,your],[your|T],T).
n([n,claim],[claim|T],T).
comp([comp,that],[that|T],T).
v([v,is],[is|T],T).
adj([adj,funny],[funny|T],T).

* The slight acquaintance with Prolog required for a complete understanding of this report will just be presumed. Prolog is coming to be fairly well known, and there are good introductions to the language (Clocksin and Mellish, 1981).

In the parser, the category symbols s, np, vp and so on are treated as three place predicates, where the first argument holds a labelled bracketting representing the derivation tree dominated by that category symbol, and where the string to be parsed is given in what remains of the list in the second argument when the list in the third argument place is taken off its tail. So given (CFP1), the Prolog query

s(P,[your,claim,is,funny],[]).

is a request to parse a string generated by the grammar. The query will succeed and return the parse tree

P = [s,[np,[det,your],[n,claim]],[vp,[v,is],[adj,funny]]]

Since Prolog tries the rules of the grammar in order, it will first try to get a successful parse with the first rule for np, and if this attempt fails (as it will for any np containing a rel), the parser will backtrack and the second rule will be tried.

This kind of parser has all of the problems of standard context free top-down backtrack parsers: left recursive rules can cause looping; the language generated by a context free grammar will not generally be the language that is parsable by the corresponding parser (even when the parser does not loop); and, like any backtracking parser, this kind of parser can be quite inefficient (Aho and Ullman, 1972). Pereira and Warren (1980) have pointed out that we can increase the power of such a Prolog parser as much as we like by elaborating the parser rules. It is not hard, for example, to elaborate the rules of such a parser to produce a parser that is equivalent to an ATN.
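For comparison, the same grammar can be stated in the definite clause grammar notation of Pereira and Warren (1980); most Prolog systems expand such rules into essentially the clauses of (CFP1), adding the two list arguments automatically. This restatement is an illustrative aside, not part of Stabler's presentation:

    s([s,NP,VP])       --> np(NP), vp(VP).
    np([np,Det,N])     --> det(Det), n(N).
    np([np,Det,N,Rel]) --> det(Det), n(N), rel(Rel).
    rel([rel,Comp,S])  --> comp(Comp), s(S).
    vp([vp,V,Adj])     --> v(V), adj(Adj).
    det([det,your])    --> [your].
    n([n,claim])       --> [claim].
    comp([comp,that])  --> [that].
    v([v,is])          --> [is].
    adj([adj,funny])   --> [funny].

    % ?- phrase(s(P), [your,claim,is,funny]).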
A number of projects have used elaborations of Prolog top-down context free parsing capabilities with some success (Pereira and Warren, 1980), but since other parser designs have particular advantages, it is of interest to consider whether Prolog is a good implementation language for them. Deterministic parsers can be substantially more efficient than backtrack parsers, and there is some reason to think that deterministic bottom-up parser designs are particularly well suited for natural languages (Marcus, 1980). This report will show how these parser designs can be implemented very naturally in Prolog.

II LL(k) PARSING

As was noted, the simple parser presented above will have to backtrack to parse any np containing a rel, but it is clear that the parser could always avoid this backtracking if it could look ahead 3 symbols: all np's will begin with a det and an n, and only np's containing a rel will have a comp after the n. It is easy to elaborate (CFP1) so that it performs this lookahead to deterministically parse the language generated by (CFG1) as an LL(3) language. We need only reverse the two rules for np so that the rule for the complements will be considered first, and change that rule so that it will apply only if the comp "that" is the third word waiting to be parsed:

(CFP2)
np([np,Det,N,Rel],[W1,W2,that|T],P):-
    !,det(Det,[W1,W2,that|T],P1),
    n(N,P1,P2),
    rel(Rel,P2,P).
np([np,Det,N],P0,P):-!,det(Det,P0,P1),n(N,P1,P).*

* The Prolog cut symbol, "!", blocks backtracking. Since this parser is deterministic, we want to begin every rule with a cut. This will have the desired effect of causing the parser to abandon any parse which tries to use a rule and fails.

Obviously, we could use this technique to perform any test we like on the first k symbols of the string remaining to be parsed, so this sort of implementation will always suffice for LL(k) parsing. Notice that the Prolog representation of the function from the first k symbols of the string to the parser rule to be used is quite perspicuous.

III BOTTOM-UP PARSING

Bottom-up parsers for context free languages are quite popular. The simplest and best-known are the "shift-reduce" parsers, which "shift" a symbol from the input stream into a stack and then try to "reduce" the structures at the top of the stack to higher level structures using the grammar rules; when no more reductions of the symbols at the top of the stack are possible, another symbol is shifted from the input stream, and so on until the input stream is empty (or until some final punctuation is reached) and the stack contains only a sentence structure. The following Prolog parser for (CFG1) neatly captures this design:

(CFP3)
start([I|X],P):-parse(X,[I],P).
parse([I|X],Stack,P):-reduce(Stack,Newstack),
    parse(X,[I|Newstack],P).
parse([],Stack,[s|S]):-reduce(Stack,[[s|S]]).
reduce([X|Y],[X2|Y2]):-
    reduce1([X|Y],[X1|Y1]),
    reduce([X1|Y1],[X2|Y2]).
reduce(X,X).

reduce1([your|X],[[det,your]|X]).
reduce1([claim|X],[[n,claim]|X]).
reduce1([[n,N],[det,Det]|X],[[np,[det,Det],[n,N]]|X]).
reduce1([that|X],[[comp,that]|X]).
reduce1([[s|S],[comp,Comp]|X],[[rel,[comp,Comp],[s|S]]|X]).
reduce1([[rel|Rel],[n,N],[det,Det]|X],[[np,[det,Det],[n,N],[rel|Rel]]|X]).
reduce1([[vp|VP],[np|NP]|X],[[s,[np|NP],[vp|VP]]|X]).
reduce1([[adj,Adj],[v,V]|X],[[vp,[v,V],[adj,Adj]]|X]).
reduce1([is|X],[[v,is]|X]).
reduce1([funny|X],[[adj,funny]|X]).
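Anticipating the walkthrough of the predicates below, a hand trace suggests how (CFP3) proceeds on the sample sentence (the stack is written with its top on the left; everything but the query itself is commentary):

    % ?- start([your,claim,is,funny],P).
    %   shift your    stack: your
    %   reduce        stack: [det,your]
    %   shift claim   stack: claim, [det,your]
    %   reduce        stack: [np,[det,your],[n,claim]]   (n, then np)
    %   shift is      stack: is, [np,...]
    %   reduce        stack: [v,is], [np,...]
    %   shift funny   ... and the final vp and s reductions then give
    % P = [s,[np,[det,your],[n,claim]],[vp,[v,is],[adj,funny]]]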
The predicate start takes a list containing the string to be parsed (the input buffer) as its first argument, and will return the labelled bracketting for that string as its second argument. The predicate parse keeps the remaining input as its first argument, the active node stack as its second argument, and will return the finished labelled bracketting as its third argument. The parsing begins when start puts the first symbol of the input string into the active node stack and calls the procedure parse. Parse reduces the stack and then shifts the next input symbol onto the stack, until the input buffer is empty and the active node stack contains just one structure dominated by the sentence category s. The predicate reduce simply calls the reduce1 rules until they no longer apply, and the reduce1 rules simply match their first arguments with structures in the stack corresponding to the right-hand sides of (CFG1) grammar rules and return as their second argument a structure dominated by the category on the left hand side of the grammar rule. Thus, given the query

start([your,claim,is,funny],P).

this parser will shift "your" onto the stack, and the first reduce1 rule will apply to reduce "your" to the structure "[det,your]". Another symbol is then shifted from the input onto the stack, and so on until the stack contains the same labelled bracketting that (CFP1) produces for the string.

Notice that (CFP3) is a backtracking shift-reduce parser; it will have to backtrack whenever it parses a string containing a complement. This is so because whenever it gets a noun structure and a determiner structure at the top of the stack, it will reduce them to a noun phrase structure regardless of whether a complement is waiting in the input buffer to be parsed next. This backtracking can be eliminated, though, with one symbol of lookahead. We simply need to constrain the reduce1 rule that performs the np reduction so that it will not apply if there is a comp at the head of the input buffer. This requires that the reduce1 rules apply not only according to what is at the top of the stack but also according to what is in the input buffer; the new reduce1 rules accordingly have the input buffer as their first argument. Since every det n sequence allowed by (CFG1) is followed either by the comp "that" or the v "is", the easiest way to avoid backtracking is simply to allow the simple noun phrase reduction (i.e., the application of the third reduce1 rule) only when the symbol "is" is at the head of the input buffer:

(CFP4)
start([I|X],P):-parse(X,[I],P).
parse([I|X],Stack,P):-reduce([I|X],Stack,Newstack),
    !,parse(X,[I|Newstack],P).
parse([],Stack,[s|S]):-reduce(Input,Stack,[[s|S]]).
reduce(Input,[X|Y],[X2|Y2]):-
    reduce1(Input,[X|Y],[X1|Y1]),!,
    reduce(Input,[X1|Y1],[X2|Y2]).
reduce(Input,X,X).

reduce1(Input,[your|X],[[det,your]|X]).
reduce1(Input,[claim|X],[[n,claim]|X]).
reduce1([is|I],[[n,N],[det,Det]|X],[[np,[det,Det],[n,N]]|X]).
reduce1(Input,[that|X],[[comp,that]|X]).
reduce1(Input,[[s|S],[comp,Comp]|X],[[rel,[comp,Comp],[s|S]]|X]).
reduce1(Input,[[rel|Rel],[n,N],[det,Det]|X],[[np,[det,Det],[n,N],[rel|Rel]]|X]).
reduce1(Input,[[vp|VP],[np|NP]|X],[[s,[np|NP],[vp|VP]]|X]).
reduce1(Input,[[adj,Adj],[v,V]|X],[[vp,[v,V],[adj,Adj]]|X]).
reduce1(Input,[is|X],[[v,is]|X]).
reduce1(Input,[funny|X],[[adj,funny]|X]).

This is a deterministic LR(1) parser for (CFG1).
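To see the lookahead at work, consider a string that made (CFP3) backtrack. This demonstration query is ours, run against the clauses of (CFP4) above:

    % ?- start([your,claim,that,your,claim,is,funny,is,funny],P).
    % The initial det-n pair is not reduced to an np, because the next
    % input symbol is "that" rather than "is"; the np reduction that
    % incorporates the rel fires only after the rel is complete, and no
    % backtracking occurs:
    % P = [s,[np,[det,your],[n,claim],
    %         [rel,[comp,that],[s,[np,[det,your],[n,claim]],
    %                             [vp,[v,is],[adj,funny]]]]],
    %      [vp,[v,is],[adj,funny]]]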
Obviously, we could allow our reduce1 rules to perform whatever computations we wanted on the first k symbols of the Input and on the Stack, so this style of implementation will suffice for any LR(k) parsing. Again, the representation of the function from the first k symbols of the input and the active node stack to the parser rule to be used is perspicuous; it is equivalent to the standard LR table representations.

IV THE MARCUS PARSER

Marcus (1980) showed how the basic design of an LR(k) parser can be extended to give it sufficient power to parse English in a very natural way. Most importantly: we allow arbitrary tests on the first k cells of the input buffer (in Marcus, 1980, k=5); we allow the parser to put parse structures into the cells of the input buffer as well as into the active node stack; we allow the parser to look arbitrarily deep into the active node stack; and we group the parser rules into packets in such a way that only rules from active packets can be executed, and these packets are activated and deactivated by the parser rules.

The rules of the Marcus parser are written in a language (Pidgin) which is compiled into Lisp. Each grammar rule has the form:

[test1][test2][test3][test4] -> action

where test1 is a test performed on the structure in the first buffer cell, similarly for test2 and test3, and test4 is a test on the active node stack. In our Prolog implementation, corresponding to each of Marcus's grammar rules there is a parser rule of the form:*

reduce([First,Second,Third|Restbuffer],
       [[Packet,Activenode]|Reststack],Counter,P):-
    member(name,Packet),
    test1(First),
    test2(Second),
    test3(Third),
    test4([[Packet,Activenode]|Reststack]),
    reduce(Newbuffer,Newstack,Newcounter,P).

This form is appropriate since the active packets are associated with each node in the active node stack, so the member clause checks to make sure that the rule name is in the active packet; the tests on the first three buffer cells and the stack are done next, and if they succeed, the action consisting of appropriately modifying the input buffer and the stack is carried out, and reduce is called again. The counter is used simply to number the nodes as they are created and pushed onto the active node stack. The last argument of reduce is the completed parse, which will be in the active node stack when final punctuation at the end of a grammatical string is picked up from the buffer.

The similarity between Marcus's grammar rules and our parser rules is easy to see. Consider, for example, the following rules from Marcus (1980) and our corresponding Prolog rules:

(rule MAJOR-DECL-S in ss-start
  [=np][=verb] ->
  Label c s,decl,major.
  Deactivate ss-start. Activate parse-subj.)

(rule YES-NO-Q in ss-start
  [=auxverb][=np] ->
  Label c s,quest,ynquest,major.
  Deactivate ss-start. Activate parse-subj.)

/* MAJOR-DECL IN SS-START */
reduce([[First|Tree1],[Second|Tree2]|Restbuffer],
       [[Packet,[Cat|Tree]]|Reststack],Counter,P):-
    remove_member(ss-start,Packet,Newpacket),
    member(np,First),
    member(verb,Second),!,
    append(Cat,[s,decl,maj],Newcat),
    reduce([[First|Tree1],[Second|Tree2]|Restbuffer],
           [[[parse-subj|Newpacket],[Newcat|Tree]]|Reststack],
           Counter,P).

* Actually, in our implementation we use additional argument places in the reduce predicate to handle Marcus's "attention shift" rules, which are used in noun phrase parsing, and also for holding case structures. These rules are beyond the scope of this discussion, but they do not affect the basic ideas presented here.
/* YES-NO-Q IN SS-START */
reduce([[First|Tree1],[Second|Tree2]|Restbuffer],
       [[Packet,[Cat|Tree]]|Reststack],Counter,P):-
    remove_member(ss-start,Packet,Newpacket),
    member(auxverb,First),
    member(np,Second),!,
    append(Cat,[s,quest,ynquest,maj],Newcat),
    reduce([[First|Tree1],[Second|Tree2]|Restbuffer],
           [[[parse-subj|Newpacket],[Newcat|Tree]]|Reststack],
           Counter,P).

Our parser, like Marcus's, associates a list of features with each node in the tree structures, so that the tests on the buffer cells are simple list membership tests. If they succeed, the rule is run and the new state of the parser is defined in the call of the reduce procedure. No rule that runs will fail unless the input string is unacceptable; backtracking over calls to reduce can be blocked. The parser operation is easy to follow, and the rules are easy to write.

Our current Prolog implementation of the Marcus parser is not particularly fast, but it should be extendable to get greater coverage of the language without the substantial increases in parsing time that would be expected with extensions of a strictly top-down parser. The following strings were parsed by our compiled DEC-10 Prolog implementation in the times indicated:

[john,has,scheduled,a,meeting,for,wednesday]
P = [s(1),[np(2),john],[aux(3),[perf(4),has]],
     [vp(5),[verb(6),scheduled],[np(7),a meeting],
      [pp(8),for wednesday]],[finalpunc,.]]
time = 70 ms

[the,meeting,has,been,scheduled,for,wednesday]
P = [s(1),[np(2),the meeting],[aux(3),[perf(4),has],[passive(5),been]],
     [vp(6),[verb(7),scheduled],[np(8),trace boundto([the,meeting])],
      [pp(9),for wednesday]],[finalpunc,.]]
time = 84 ms

[the,meeting,seems,to,have,been,scheduled,for,wednesday]
P = [s(1),[np(2),the meeting],[aux(3)],
     [vp(4),[verb(5),seems],
      [np(16),[s(6),[np(7),trace boundto([the,meeting])],
        [aux(8),[to(9),to],[perf(10),have],[passive(11),been]],
        [vp(12),[verb(13),scheduled],
         [np(14),trace boundto([boundto([the,meeting])])],
         [pp(15),for wednesday]]]]],[finalpunc,.]]
time = 145 ms

REFERENCES

[1] Aho, A.V. and J.D. Ullman (1972) The Theory of Parsing, Translation, and Compiling, Volume 1: Parsing. Englewood Cliffs, NJ: Prentice-Hall.

[2] Clocksin, W.F. and C.S. Mellish (1981) Programming in Prolog. New York: Springer-Verlag.

[3] Marcus, M. (1980) A Theory of Syntactic Recognition for Natural Language. Cambridge, MA: MIT Press.

[4] Pereira, F.C.N. and D.H.D. Warren (1980) Definite clause grammars for language analysis. Artificial Intelligence, 13, pp. 231-278.
MCHART: A FLEXIBLE, MODULAR CHART PARSING SYSTEM

HENRY THOMPSON
Department of Artificial Intelligence
University of Edinburgh
Hope Park Square, Meadow Lane
Edinburgh EH8 9NW

ABSTRACT

One of the most attractive properties of the active chart parsing methodology (Kay 1980; Thompson and Ritchie 1983) is the distinction it makes possible between essential bookkeeping mechanisms, scheduling issues, and details of grammatical formalisms. MCHART is a framework within which active chart parsing systems can be constructed. It provides the essential bookkeeping mechanisms, and carefully structured interfaces for the specification of scheduling and grammatical formalism. The resulting flexibility makes it useful both for pedagogical purposes and for quick prototyping. The system is available in UCILISP, FranzLisp, and Interlisp versions, together with a simple lexicon facility, example parsers and detailed documentation.

I. ACTIVE CHART PARSING: A BRIEF OVERVIEW

Constraints on space make it impossible to describe the active chart parsing methodology in any detail. This section merely presents a sketch, to outline the main points and establish salient terminology. For more detail see Kay or Thompson and Ritchie (op. cit.).

Active chart parsing takes the idea of a well-formed substring table, which is a passive lattice consisting of one edge per constituent (either initial or discovered), and makes it active by using edges to record partial or incomplete hypothesized constituents as well. Such active edges contain not only a description of the partial contents of the hypothesized constituents, but also some indication of what more is required to complete them. It follows that the central point in the active chart parsing process occurs when an active and inactive edge meet for the first time. In each such circumstance, one or more new edges may be added if the inactive edge (partially) satisfies the requirements for completion of the active edge. How this is determined will of course depend on the grammatical formalism being used. We call this whole operation the fundamental rule. Note that it speaks of adding new edges, not changing old ones - this is crucial, as will become clear later.

The fundamental rule alone is not of course sufficient to transform an initial chart of lexical edges into a fully analysed one. For this a source of empty active edges, that is, initial hypotheses about possible constituents, is also required. Such edges typically arise either in response to the addition of inactive edges to the chart (data-driven), or of active edges (hypothesis-driven), based on the content of the edges and the grammar. We call the choice a particular parser makes on this issue the rule invocation strategy, as it determines when and how rules in the grammar enter the chart as hypothesized constituents.

II. MCHART

In understanding what follows, the reader must keep in mind that MCHART is not itself a parser - it is a framework for implementing parsers. When one sets out to build a particular parser using MCHART as a base, decisions must be made on each of the dimensions set out in the preceding section. Each subsection below specifies the interface whereby one such decision is communicated to MCHART.

A. The Agenda

MCHART is based on an agenda mechanism for dealing with the non-determinism inherent in its structure. This mechanism maintains an ordered set of ordered queues of function applications. It proceeds by removing and evaluating the first entry in the highest priority queue.
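The discipline is language-neutral even though MCHART itself is Lisp; as a rough illustrative sketch in the Prolog that recurs elsewhere in this volume (all names here are hypothetical, not MCHART's own), an agenda is a list of queues, and one step runs the first task of the highest-priority non-empty queue:

    % next_task(+Agenda, -Task, -RestAgenda)
    next_task([[T|Q]|Qs], T, [Q|Qs]) :- !.
    next_task([[]|Qs], T, [[]|Qs2]) :- next_task(Qs, T, Qs2).

    % add_last(+N, +Task, +Agenda, -Agenda2): schedule at end of queue N
    add_last(1, T, [Q|Qs], [Q2|Qs]) :- append(Q, [T], Q2).
    add_last(N, T, [Q|Qs], [Q|Qs2]) :- N > 1, N1 is N-1, add_last(N1, T, Qs, Qs2).

An add_first for LIFO scheduling is analogous.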
Primitives exist for adding function applications to the beginning or end of particular queues, and for dynamically reordering a given queue. Modelled on the agenda mechanism of the GUS system (Bobrow et al. 1977a), this approach provides a simple yet powerful tool, whose use pervades MCHART and provides its overall architectural structure.

B. Data structures and the submission of edges

The chart is composed of vertices and edges. A vertex is a named collection of edges. The edges are held in four fields, distinguishing active from inactive and incoming from outgoing. An edge contains its left and right vertices (not by name, but directly, so the structures are circular), its name, and a label, which is not interpreted by MCHART, but is for use by each particular parser. Note that edges are not explicitly identified as active or inactive - only implicitly by where they are attached to their vertices.

A function named NewEdge exists for the construction and submission of new edges. The caller must specify left and right ends, label, and type (active or inactive). The edge is constructed immediately, but will not normally be added directly to the chart. Rather a function application to do this is scheduled on a queue determined by the type of edge. When this application is evaluated, the edge is added to the chart. If the original caller provided a redundancy checking predicate, for instance to avoid indefinite left recursion in a top-down system, this will be evaluated at this point as well, and may forestall the addition of the edge.

C. Signals, the Fundamental Rule, and Rule Invocation

In constructing a framework intended to support a number of different styles of use, it proved useful to provide an interface less rigid than that of function call and return. At salient points in the basic operation of MCHART, the system raises named signals. If the particular parser involved has not declared a response to such a signal, processing continues. But if a response has been declared, in a global signal table, then the function named therein is called with appropriate arguments. Not only does this allow the parser writer to easily choose what signals to ignore and what to respond to, but it also allows a simple mechanism for the expression of defaults. There are in fact two signal tables - the public one, which is checked first, and one private one, which is checked only in the absence of a relevant entry in the public one. This mechanism is used both for the fundamental rule and for rule invocation strategies.

Whenever an edge is actually added to the chart, a signal called Fundamental is raised for each active-inactive edge pair which results. The parser may provide its own response, but the system provides a default, which simply schedules the application of a parser-provided function to
In general, LIFO will produce a depth-first search, while FIFO will produce a breadth-first search, at least in conjunction with a top-down rule invocation strategy. E. Functional interfaces and grammatical formalisms At several points in the above description, reference has been made to the parser writer not only naming functions as response to signals, but also providing functions for specific tasks. At each point in the parsing process where some- thing must be done to an edge or edges, but the way it is done depends on the internal structure of the edge label, that is, on the particular grammatical formalism involved, then MCUART calls a function which the user must provide. Includ- ing the functions called by the default response to the Fundamental signal, there are five such functions, two for the fundamental rule, two involved in displaying the contents of edges, and one for constructing lexical edges. III. EXPERIENCE IN USING MCJJART also allows a simple mechanism for the expression of defaults. There are in fact two signal tables - the public one, which is checked first, and one private one, which is checked only in the absence of a relevant entry in the public one. This mechanism is used both for the funda- mental rule and for rule invocation strategies. Whenever an edge is actually added to the chart, a signal called Fundamental is raised for each active-inactive edge pair which results. The parser may provide its own response, but the system provides a default, which simply schedules the application of a parser-provided function to The modular nature of MCHART as set out above was initially determined by a desire to use it in teaching students about parsing. In parti- cular I was concerned to distinguish between the issues of search strategy, rule invocation stra- tw , and grammatical formalism. By using a distinct type of interface for the specification of each of these, their independence was demonstra- ted, and it became possible to vary them indepen- dently. Example parsers have been constructed within which top-down and bottom-up invocation strategies can be contrasted, and each considered with respect to depth-first or breadth-first 409 search. As for grammaticl formalism, students at a 1981 Open University one week residential course were able to successfully convert a parser for context-free phrase structure grammars to recursive transition network, despite having no prior experience with either MCHART or Interlisp. Implementations also exist for various varieties of context-free grammars, including ones with optional or iterated constituents and with various feature systems, for RTNs, and for ATNs, with a number of different styles of tests, actions, and structures. I have also developed a large parsing system for the GPSG grammatical formalism (Gazdar 1981c; Thompson 1981b- Thompson 1982b) using MCHART and have found it extremely useful to be able to focus on the issues of linguistic concern as the system has evolved, leaving MCHART to take care of the details. The flexibility it provides more than makes up for the price in efficiency which must be paid for it. I have found this particularly true in the area of prototyping, where a few days work has allowed a quick investigation of areas as diverse as lexical access using tree-structured lexicons and extended quasi-context-free forma- lisms for some crucial problems in linguistic theory (Bresnan Kaplan Peters and Zaenen 1982; Thompson 1983a). 
AI has often been criticsied for failing to produce work which builds on other work. If this paper were just about another parser, it would be liable to that criticism. But as it is about a basis for producing parsers, I hope it can go a small way towards helping others avoid re-inventing the wheel. Enquiries are invited from those who would like to make use of MCHART as described herein - try it, you might like it. REFERENCES Bobrow, D. G., R. M. Kaplan, M. Kay, D. A. Norman, H. S. Thompson and T. Winograd. 1977a. "GUS-l, A frame-based dialog system". Artificial Intelligence 8(l). Bresnan, J.W., R. Kaplan, S. Peters and A. Zaenen. 1982. "Cross-serial dependencies in Dutch". Linguistic Inquiry 13. Gazdar, G. 1981c. "Phrase structure grammar". In P. Jacobson and G. Pullum, editors, The nature of syntactic representation. D. Reidel, Dordrecht. Kay, M. 1980. "Algorithm Schemata and Data Structures in Syntactic Processing". In Proceedings of the Symposium on Text Processing. Nobel Academy. To appear. Also CSL-80-12, Xerox PARC, Palo Alto, CA. Thompson, H. S. 1981b. "Chart Parsing and Rule Schemata in GPSG". In Proceedings of the Nineteenth Annual Meeting of the Associa- tion for Computational Linguistics. ACL, Stanford, CA. Also DA1 Research Paper 165, Dept. of Artificial Ihtelligence, Univ. of Edinburgh. Thompson, H. S. 1982b. "Handling Metarules in a Parser for GPSG". In Barlow, M., D. Flickinger and I. Sag, editors, Develop- ments in Generalised Phrase Structure Grammar: Stanford Working Papers in Grammatical Theory, Volume 2, Indiana University Linguistics Club, Bloomington. Also DA1 Research Paper 175, Dept. of Artificial Intelligence, Univ. of Edinburgh. Thompson, H. S. 1983a. "Crossed Serial Dependencies: A low-power parseable extension to GPSG". -In Proceedings of the Twenty-first Annual Meeting of the Association for Computational Linguistics. ACL, MIT, Cambridge, MA. To appear. Thompson, H. S. and G. D. Ritchie. 1983. "Techniques for Parsing Natural Language: Two Examples". In Eisenstadt, M. and T. O'Shea, editors. Artificial Intelli- gence Skills. Harper and Row, London. Also DA1 Research Paper 183, Dept. of Artificial Intelligence, Univ. of Edinburgh. I 410
MAPPING BETWEEN SEMANTIC REPRESENTATIONS USING HORN CLAUSES¹

Ralph M. Weischedel
Computer & Information Sciences
University of Delaware
Newark, DE 19711

ABSTRACT

Even after an unambiguous semantic interpretation has been computed for a sentence in context, there are at least three reasons that a system may map the semantic representation R into another form S.

1. The terms of R, while reflecting the user view, may require deeper understanding, e.g. may require a version S where metaphors have been analyzed.

2. Transformations of R may be more appropriate for the underlying application system, e.g. S may be a more nearly optimal form. These transformations may not be linguistically motivated.

3. Some transformations depend on non-structural context.

Design considerations may favor factoring the process into two stages, for reasons of understandability or for easier transportability of the components. This paper describes the use of Horn clauses for the three classes of transformations listed above. The transformations are part of a system that converts the English description of a software module into a formal specification, i.e. an abstract data type.

¹ Research sponsored by the Air Force Office of Scientific Research, Air Force Systems Command, USAF, under Grant Number AFOSR-80-0190C. The United States Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation herein.

1. INTRODUCTION

Parsing, semantic interpretation, definite reference resolution, quantifier scope decisions, and determining the intent of a speaker/author are well-known problems of natural language understanding. Yet, even after a system has generated a semantic representation R where such decisions have been made, there may still be a need for further transformation and understanding of the input to generate a representation S for the underlying application system. There are at least three reasons for this.

First, consider spatial metaphor. Understanding spatial metaphor seems to require computing some concrete interpretation S for the metaphor; however, understanding the metaphor concretely may be attempted after computing a semantic representation R that represents the spatial metaphor formally but without full understanding. Explaining the system's interpretation of a user input (e.g. for clarification dialog, allowing the user to check the system's understanding, etc.) is likely to be more understandable if the terminology of the user is employed. By having an intermediate level of understanding such as R, and generating English output from it, one may not have to recreate the metaphor, for the terms in R use it as a primitive.

Second, the needs of the underlying application system may dictate transformations that are neither essential to understanding the English text nor linguistically motivated. In a data base environment, transformations of the semantic representation may yield a retrieval request that is computationally less demanding [11]. To promote portability, EUFID [13] and TQA [6] are interfaces that have a separate component for transformations specific to the data base. In software specification, mapping of the semantic representation R may yield a form S which is more amenable for proving theorems about the specification or for rewriting it into some standard form. The following example, derived from a definition of stacks on page 77 of [10], illustrates both of the reasons above.

A stack is an ordered list in which all insertions and deletions occur at one end called the top.
A stack is an ordered list in which all insertions and deletions occur at one end called the top.

A theorem prover for abstract data types would normally assume that the end of the stack in question is referred to by a notation such as A[1] if A is the name of the stack, rather than understanding the spatial metaphor "one end".

Third, it may be convenient to design the transformation process in two phases where the output of both phases is a semantic representation. In our system, we have chosen to map certain paraphrases into a common form via a two step process. The forms "ith element" and "element i" each generate the same term as a result of semantic interpretation. However, the semantic interpreter generates another term for "element at position i" due to the extra lexical items "at" and "position". Obviously, all three expressions correspond to one concept. The mapping component recognizes that the two terms generated by the semantic interpreter are paraphrases and maps them into one form.

Section 2 gives an overview of the system as a whole. Section 3 describes the use of Horn clauses for the mapping from R to S. Related research and our conclusions are presented in sections 4 and 5.

2. BRIEF SYSTEM OVERVIEW

The overall system contains several components beside the mapping component that is the focus of this paper. The system takes as input short English texts such as the data structure descriptions in [10]. The output is a formal specification, in Horn clauses 2, of the data structure defined.

First, the RUS parser [3], which includes a large general-purpose grammar of English, calls a semantic component to incrementally compute the semantic interpretation of the phrases being proposed. As soon as a phrase is proposed by the grammar, the semantic interpreter either generates a semantic interpretation for the phrase or vetoes the parse. The only modifications to adapt the parser to the application of abstract data types were to add mathematical notation, so that phrases such as "the list (A[1], A[2], ..., A[N])" could be understood. Thus, a text such as the following can be parsed by the modified grammar 3.

2 Horn clauses are a version of first order logic, where all well-formed formulas have the form C IF A1 & A2 & ... & An. Each of the Ai is an atomic formula; C is an atomic formula; and n >= 0. Therefore, all variables are free.

3 This is a modified version of a definition given on pages 41-42 of [10].

1. We say that an ordered list is empty or it can be written as (A[1], A[2], ..., A[N]) where the A[i] are atoms from some set S.
2. There are a variety of operations that are performed on these lists.
3. These operations include the following.
4. Find the length N of the list.
5. Retrieve the ith element, 1<=I<=N.
6. Store a new value at the ith position, 1<=I<=N.
7. Insert a new element at position I, 1<=I<=N+1, causing elements numbered I, I+1, ..., N to become numbered I+1, I+2, ..., N+1.
8. Delete the element at position I, 1<=I<=N, causing elements numbered I+1, ..., N to become numbered I, I+1, ..., N-1.

The semantic component we developed employs case frames for disambiguation and generation of the semantic interpretation of a phrase. However, the semantic component does not make quantifier scope decisions.
Quantifier scope decisions, reference resolution, and conversion from first-order logic to Horn clauses are performed after the semantic interpreter has completed its processing. The knowledge governing these three tasks is itself encoded in Horn clauses and was developed by Daniel Chester. The output from this component is the input to the mapping component, which is the focus of this paper. In the appendix, examples of the Horn clause input to the mapping component are given for some of the sentences of the text above.

The semantic representation R of a single sentence is therefore a set of Horn clauses. In addition, the model of context built in understanding the text up to the current sentence is a set of Horn clauses and a list of entities which could be referenced in succeeding sentences. The mapping component performs the three tasks described in the previous section to generate a set S of Horn clauses. S is added to the model of context prior to processing the next input sentence.

The choice of Horn clauses as the formal representation of the abstract data type is based on the following motivations:

1. Once a text has been understood, the set of Horn clauses can be added to the knowledge base (which is also encoded as Horn clauses). This offers the potential of a system that grows in power.

2. The semantics of Horn clauses, their use in theorem proving, and their executability make them an appropriate formalism for defining abstract data types.

3. A Horn clause theorem prover [5] allowing free intermixing of lisp and theorem proving is readily available.

3. MAPPING IN THE SYSTEM

The rules of the mapping component are all encoded as Horn clauses. The antecedent atomic formulas of our rules specify either

1. the structural change to be made in the collection of formulas or

2. conditions which are not structural in nature but which must be true if the mapping is to apply.

An underscore preceding an identifier means that the identifier is a variable. We will use the notation (MAPPING-RULE (a1 ... am) _X (c1 ... ck) _Y) to mean that the atomic formulas a1, ..., am must be present in the list _X of atomic formulas; the list _X of formulas is assumed to be implicitly conjoined. The variable _Y will be bound to the result of replacing the formulas a1, ..., am in _X with the formulas c1, ..., ck. There is a map between two lists, _X and _Y, of atomic formulas if (MAP _X _Y) is true. Three examples are detailed next. For expository purposes the rules given in this section are simplified.

Consider the following example:

A stack is an ordered list in which all insertions and deletions occur at one end called the top. ADD(I,S) adds item I to stack S.

In this environment spatial metaphors tend to be more frozen than creative. To understand "one end", we assume the following rules:

1. For a sequence _D, we may map "_E is an end of _D" to "_E is the first sequence element of _D".

2. An ordered list is a sequence.

Facts (1) and (2) are encoded as Horn clauses below.

1. (MAP _X _Y) IF (SEQUENCE _D) & (MAPPING-RULE ((END _E _D)) _X ((SEQUENCE-ELEMENT _E 1 _D)) _Y)

2. (SEQUENCE _D) IF (ORDERED-LIST _D)

The system knows how to map the notion of "end of a sequence", and it knows that ordered lists are sequences. Since the first sentence is discussing the end of an ordered list, the two rules above are sufficient to map "end" into the appropriate concrete semantic representation.
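To make the mechanism concrete, the following is a minimal sketch, in Python rather than Horn clauses, of how one MAPPING-RULE of this kind can be applied to a list of atomic formulas. This is our illustrative reconstruction, not the system's code: formulas are nested tuples, variables are strings beginning with an underscore, and the toy knowledge base and the rule for "end" mirror facts (1) and (2) above.

def is_var(t):
    return isinstance(t, str) and t.startswith('_')

def unify(pat, fact, env):
    """Match a pattern against a fact, extending the binding environment;
    returns the new environment or None on failure."""
    if is_var(pat):
        if pat in env:
            return unify(env[pat], fact, env)
        new_env = dict(env)
        new_env[pat] = fact
        return new_env
    if isinstance(pat, tuple) and isinstance(fact, tuple) and len(pat) == len(fact):
        for p, f in zip(pat, fact):
            env = unify(p, f, env)
            if env is None:
                return None
        return env
    return env if pat == fact else None

def subst(term, env):
    """Replace bound variables in a term by their values."""
    if is_var(term):
        return subst(env[term], env) if term in env else term
    if isinstance(term, tuple):
        return tuple(subst(t, env) for t in term)
    return term

def apply_rule(old_pats, new_pats, conditions, formulas, kb):
    """One MAPPING-RULE step: the formulas matching old_pats are replaced
    by new_pats, provided the non-structural conditions hold in kb.
    For brevity there is no backtracking over alternative matches."""
    env = {}
    for pat in old_pats:
        for f in formulas:
            e = unify(pat, f, env)
            if e is not None:
                env = e
                break
        else:
            return None                 # some a_i is absent; the rule fails
    if not all(subst(c, env) in kb for c in conditions):
        return None
    matched = [subst(p, env) for p in old_pats]
    kept = [f for f in formulas if f not in matched]
    return kept + [subst(p, env) for p in new_pats]

# Rule (1): for a sequence _D, "_E is an end of _D" becomes
# "_E is the first sequence element of _D".
kb = {('SEQUENCE', 'A23')}              # derivable from (ORDERED-LIST A23) by rule (2)
print(apply_rule([('END', '_E', '_D')],
                 [('SEQUENCE-ELEMENT', '_E', 1, '_D')],
                 [('SEQUENCE', '_D')],
                 [('END', 'TOP', 'A23')], kb))
# -> [('SEQUENCE-ELEMENT', 'TOP', 1, 'A23')]

Note that the condition (SEQUENCE _D) plays exactly the role of the non-structural antecedents of the Horn clause rules: it is tested against the context rather than against the formula list being rewritten.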
The power and generality of this approach is that

- a chain of reasoning may show how to view some entity _D as a sequence (and therefore the rules show how to interpret "end of _D"), and

- other mapping rules may state how to interpret spatial metaphors unrelated to "end" or to sequences.

We propose that the same mechanism can deal with certain vague, extended uses of words, such as add in the previous example. In stating that ADD(I,S) adds item I to stack S, add cannot be predefined for stacks, since its meaning is being defined. Nevertheless, it is reasonable to assume that there is a general relation between add and related concepts such as uniting, including, or, in the data structure environments, inserting. Consequently, we propose the following fact in addition to the two above:

- For a sequence _S, we may map "add _I to _S" to "insert _I at some position _X of _S".

It may be stated formally as

(MAP _W _Z) IF (MAPPING-RULE ((ADD _I _S)) _W ((INSERT _I _S _X)) _Z) & (SEQUENCE _S)

Notice that _X will be unbound. However, the Horn clauses generated for the first sentence (A stack is an ordered list in which all insertions and deletions occur at one end called the top) will imply that _X is the position corresponding to the end called top. Therefore, the vague, extended use of "add" can be understood using the inference mechanism of the mapping component. Other rules may state how to interpret an extended use of add by relating it to views other than sequences.

Another example involves mapping the forms "ith element", "element i", and "element at position i" into the same representation. Assume that the semantic interpreter generates for each of the first two the list of formulas ((ELEMENT _X) (IDENTIFIED-BY _X _Y)). The Horn clause for that mapping is as follows:

(MAP _W _Z) IF (SEQUENCE _T) & (TOPIC _T) & (MAPPING-RULE ((ELEMENT _X) (IDENTIFIED-BY _X _Y)) _W ((SEQUENCE-ELEMENT _X _Y _T)) _Z)

Note that this rule assumes that in context some sequence _T has been identified as the topic; the rule identifies that the element _X is the _Yth member of the sequence _T.

For the phrase "element at position i", assume the semantic interpreter generates the list of formulas ((ELEMENT _X) (AT _X (POSITION _P)) (IDENTIFIED-BY _P _Y)). The mapping rule for it is similar to the one above.

(MAP _W _Z) IF (SEQUENCE _T) & (TOPIC _T) & (MAPPING-RULE ((ELEMENT _X) (AT _X (POSITION _P)) (IDENTIFIED-BY _P _Y)) _W ((SEQUENCE-ELEMENT _X _Y _T)) _Z)

This second rule must be tried before the prior one.

The mapping component maps from a representation R of a single sentence to another representation S, given the context of the sentence. For Horn clauses, mapping rules may apply to the antecedents or to the consequents. For each Horn clause, its antecedents are collected in a list and bound to _W in the relation (MAP _W _Z). The result of applying one rule is bound to _Z. Mapping is then tried on the result _Z, and so on until no more rules apply. In addition to applying to antecedents, mapping rules apply to the consequents C1, ..., Ck (k>0) of Horn clauses having the same list of antecedents. C1, ..., Ck are collected in a list and bound to _W in the relation (MAP _W _Z) as for the case of antecedents. The mapper halts when no more rules can be applied. The result of the mapping is a new set of Horn clauses which are the representation S.
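Given the apply_rule sketch above, the halting behavior just described - try rules in order, rewrite, and retry on the result until no rule applies - reduces to a small fixpoint loop. Again this is our sketch, under the assumption that the rule set is terminating, not the actual mapper:

def map_formulas(formulas, rules, kb):
    """Rewrite `formulas` with the first applicable rule, then retry on the
    result, halting when no more rules apply (assumes a terminating rule set).
    `rules` is an ordered list of (old_pats, new_pats, conditions) triples;
    earlier rules take precedence, as with the two "element" rules above."""
    progress = True
    while progress:
        progress = False
        for old, new, conds in rules:
            result = apply_rule(old, new, conds, formulas, kb)
            if result is not None:
                formulas, progress = result, True
                break                   # restart from the first rule
    return formulas

The same loop is run once over the collected antecedents of a clause and once over its collected consequents, matching the description above.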
4. RELATED WORK

A number of applied AI systems have been developed to support automating software construction [1, 8, 2, 7]. Of these, only our effort has focussed on the mapping problem.

Viewing spatial metaphors in terms of a scale was proposed in [9]. Our model is somewhat more general in that the inference process

- permits specific constraints for each metaphor, not just the one view of a scale, and

- accounts for other mapping problems in addition to spatial metaphor.

A very similar approach to mapping has been proposed in [12]. Instead of using Horn clauses as the formalism for mapping, they encode their rules in KL-ONE [4]. The concern in [12] is inferring the appropriate service to perform in response to a user request, rather than demonstrating means of interpreting spatial metaphors or of finding contextually dependent paraphrases.

5. CONCLUSIONS

There are several reasons why one may have a second transduction phase even after a semantic representation for an utterance has been computed. The advantage of using Horn clauses (or any other deduction mechanism) in this mapping phase is the ability to include nonstructural conditions. This means that the mapping rules may be based on reasoning about the context.

There are three areas for future work:

- mapping on additional texts,

- investigating use of the mapping component in generating texts and in reference resolution, and

- developing an indexing technique to run the mapper in a forward chaining mode.

APPENDIX

For sentences 3 through 7 of the example text in section 2, we include here the actual Horn clauses that serve as the output of the semantic component and as the input to the mapping component. The English that generated the Horn clauses is provided for reference in italics; it is not supplied as input to the mapping component. Ampersands have been inserted for expository purposes.

These operations include the following.

(((INCLUDE A16 A34) IF (FOLLOW A34) & (EQUIV A16 (SETOF A0034 (AND (OPERATION A0034) (PERFORM NIL A0034 A23)))) & (LIST A23) & (ORDER NIL A23)))

Find the length, N, of the list.

(((EQUIV (A0037 A23) N) IF (LIST A23) & (ORDER NIL A23))
((LENGTH (A0038 A23) A23) IF (LIST A23) & (ORDER NIL A23))
((EQUIV (A0038 A23) (A0037 A23)) IF (LIST A23) & (ORDER NIL A23))
((FOLLOW (FIND NIL (A0038 A23))) IF (LIST A23) & (ORDER NIL A23)))

Retrieve the ith element, 1<=I<=N.

(((LE 1 I) IF (ELEMENT A22) & (IDENTIFIED-BY NIL A22 I))
((LE I N) IF (ELEMENT A22) & (IDENTIFIED-BY NIL A22 I))
((FOLLOW (RETRIEVE-FROM NIL A22 NIL)) IF (ELEMENT A22) & (IDENTIFIED-BY NIL A22 I)))

Store a new value into the ith position, 1<=I<=N.

(((LE 1 I) IF (POSITION A33) & (IDENTIFIED-BY NIL A33 I) & (VALUE A15) & (NEW A15))
((LE I N) IF (POSITION A33) & (IDENTIFIED-BY NIL A33 I) & (VALUE A15) & (NEW A15))
((FOLLOW (STORE NIL A15 (INTO A33))) IF (POSITION A33) & (IDENTIFIED-BY NIL A33 I) & (VALUE A15) & (NEW A15)))

Insert a new element at position I, 1<=I<=N+1, causing elements numbered I, I+1, ..., N to become numbered I+1, I+2, ..., N+1.
(((POSITION (A0062 A18)) IF (ELEMENT A18) & (NEW A18))
((IDENTIFIED-BY NIL (A0062 A18) I) IF (ELEMENT A18) & (NEW A18))
((LE 1 I) IF (ELEMENT A18) & (NEW A18))
((LE I (PLUS N 1)) IF (ELEMENT A18) & (NEW A18))
((FOLLOW (INSERT NIL A18 NIL (AT (A0062 A18)))) IF (ELEMENT A18) & (NEW A18))
((ITEM-OF (A0063 A54 A18) NIL (SEQUENCE (PLUS I 1) (PLUS I 2) ELLIPSIS (PLUS N 1))) IF (ELEMENT A54) & (ITEM-OF A62 NIL (SEQUENCE I (PLUS I 1) ELLIPSIS N)) & (IDENTIFIED-BY NIL A54 A62) & (NUMBER A62) & (ELEMENT A18) & (NEW A18))
((CAUSE (INSERT NIL A18 NIL (AT (A0062 A18))) (COME-ABOUT (AND (IDENTIFIED-BY NIL A54 (A0063 A54 A18)) (NUMBER (A0063 A54 A18))))) IF (ELEMENT A54) & (ITEM-OF A62 NIL (SEQUENCE I (PLUS I 1) ELLIPSIS N)) & (IDENTIFIED-BY NIL A54 A62) & (NUMBER A62) & (ELEMENT A18) & (NEW A18)))

REFERENCES

[1] Robert Balzer, Neil Goldman, and David Wile. Informality in Program Specification. IEEE Transactions on Software Engineering SE-4(2), March, 1978.

[2] Alan W. Biermann and Bruce W. Ballard. Toward Natural Language Computation. American Journal of Computational Linguistics 6(2), 1980.

[3] R. J. Bobrow. The RUS System. In B. L. Webber and R. Bobrow (editors), Research in Natural Language Understanding, Bolt, Beranek and Newman, Inc., Cambridge, MA, 1978. BBN Technical Report 3878.

[4] Ronald Brachman. A Structural Paradigm for Representing Knowledge. Technical Report, Bolt, Beranek, and Newman, Inc., 1978.

[5] Daniel L. Chester. HCPRVR: An Interpreter for Logic Programs. In Proceedings of the National Conference on Artificial Intelligence, pages 93-95. American Association for Artificial Intelligence, Aug, 1980.

[6] Fred J. Damerau. Operating Statistics for the Transformational Question Answering System. American Journal of Computational Linguistics 7(1):30-42, 1981.

[7] Fernando Gomez. Towards a Theory of Comprehension of Declarative Contexts. In Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, pages 36-43. Association for Computational Linguistics, June, 1982.

[8] C. Green. The Design of the PSI Program Synthesis System. In Second International Conference on Software Engineering. IEEE Computer Society, Oct, 1976.

[9] Jerry R. Hobbs. What the Nature of Natural Language Tells Us about How to Make Natural-Language-Like Programming More Natural. In SIGPLAN Notices, pages 85-93. SIGPLAN, 1977.

[10] Ellis Horowitz and Sartaj Sahni. Fundamentals of Data Structures. Computer Science Press, Woodland Hills, CA, 1976.

[11] Jonathan J. King. Intelligent Retrieval Planning. In Proceedings of the National Conference on Artificial Intelligence, pages 243-245. American Association for Artificial Intelligence, Aug, 1980.

[12] William Mark. Rule-Based Inference in Large Knowledge Bases. In Proceedings of the National Conference on Artificial Intelligence. American Association for Artificial Intelligence, August, 1980.

[13] Marjorie Templeton and John Burger. Problems in Natural Language Interface to DBMS with Examples from EUFID. In Conference on Applied Natural Language Processing, pages 3-16. Association for Computational Linguistics, Feb, 1983.
J. Bachenko, D. Hindle, and E. Fitzpatrick
Computer Science and Systems Branch
Information Technology Division
Naval Research Laboratory
Washington, D.C. 20375

ABSTRACT

At the Naval Research Laboratory, we are building a deterministic parser, based on principles proposed by Marcus, that can be used in interpreting military message narrative. A central goal of our project is to make the parser useful for real-time applications by constraining the parser's actions and so enhancing its efficiency. In this paper, we propose that a parser can determine the correct structures for English without looking past the "left corner" of a constituent, i.e. the leftmost element of the constituent along with its lexical category (e.g. N, V, Adj). We show that this Left Corner Constraint, which has been built into our parser, leads quite naturally to a description of verb complements in English that is consistent with the findings of recent linguistic theory, in particular, Chomsky's government and binding (GB) framework.

I INTRODUCTION

The role of a parser in computer interpretation of English is to determine the syntactic structure of English phrases and clauses. At the Naval Research Laboratory, we are developing a deterministic parser, based on the work of Marcus (1980), that can be used in interpreting military message narrative. A major goal of this work is to restrict, in a systematic way, the range of actions a parser can take. Specifically, we wish to formulate constraints that will simultaneously enhance parsing efficiency and, following Petrick (1974), permit the "expression and explanation of linguistic generalizations".

In this paper, we propose that in most cases a parser can determine the correct structures for English without looking into subconstituents except at the left corner. By "left corner", we mean the leftmost element of the constituent along with its lexical category (N, V, Adj, etc.). For example, the left corner of They failed to inform us is [they, pronoun]. This Left Corner Constraint thus restricts the parser from examining any information about a constituent other than its syntactic category (e.g. S, NP) and its left corner.

We have built this constraint into a parser that is based on the model described in Marcus (1980). The parser has two data structures: a stack of incomplete nodes and a buffer containing complete nodes that may be terminal elements or completed phrases. The buffer can contain up to three constituents. Depending on what is in the buffer and what is on top of the incomplete node stack, the parser's pattern-action rules will start building a new constituent, declare a constituent complete, or attach a constituent to the current incomplete node. The parser is deterministic in that all structures it builds are indelible.

While it is restricted from looking at anything in a tree except the left corner, our parser still covers a wide range of English syntax. It happens that this restriction leads to a description of complementation in English that is consistent with the findings of recent linguistic theory (Fiengo 1974, Chomsky 1981). In what follows we concentrate on the issues raised by verb complementation, in particular, the problem of recognizing complement clauses. First we outline a general solution and then we describe our implementation of this solution as part of a deterministic parser of English. We include a brief discussion of adjective, noun, and preposition complementation and review three areas that seem to be exempt from the constraint. Details of the implementation are discussed in Fitzpatrick (1983) and Hindle (1983).
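To fix ideas before turning to complementation, the following is a schematic rendering of the parser architecture just described: a stack of incomplete nodes, a buffer of up to three completed constituents, and pattern-action rules whose actions create, complete, or attach nodes indelibly. The Python sketch is ours, not the NRL implementation, and the grammar rules themselves are omitted.

class Node:
    def __init__(self, category, children=None, word=None):
        self.category, self.word = category, word
        self.children = children or []

class Parser:
    BUFFER_SIZE = 3                 # buffer holds at most three constituents

    def __init__(self, rules, words):
        self.stack = []             # incomplete nodes
        self.buffer = []            # completed nodes awaiting attachment
        self.words = list(words)    # remaining input: (word, lexical category)
        self.rules = rules          # ordered (pattern, action) pairs

    def fill_buffer(self):
        while len(self.buffer) < self.BUFFER_SIZE and self.words:
            w, cat = self.words.pop(0)
            self.buffer.append(Node(cat, word=w))

    def step(self):
        """Run the first rule whose pattern matches the buffer and the top
        of the stack; structures built are indelible (no backtracking)."""
        self.fill_buffer()
        for pattern, action in self.rules:
            if pattern(self.stack, self.buffer):
                action(self)        # create, attach, or complete a node
                return True
        return False

# The three kinds of actions a grammar rule may invoke:
def create(parser, category):       # start building a new constituent
    parser.stack.append(Node(category))

def attach(parser):                  # attach first buffer cell to current node
    parser.stack[-1].children.append(parser.buffer.pop(0))

def complete(parser):                # a finished node drops back into the buffer
    parser.buffer.insert(0, parser.stack.pop())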
II VERB COMPLEMENTATION

The constituents that can occur as verb complements include the major phrase categories: noun phrases, prepositional phrases, and clauses. Because these constituents can serve in roles other than verb complement, the parser should build them in a general way without referring to verb complementation. But this means that once a phrase is built, the parser is faced with the problem of determining the phrase's syntactic relationship to other constituents, i.e. whether the phrase should be attached to the tree as a verb complement or as something else.

When the constituent is a noun phrase, determining that it is a verb complement is relatively easy since the only information that is needed about this constituent is its syntactic category, NP. Structural differences among NPs are not relevant to the syntactic restrictions on verb complementation. Thus if a verb is lexically marked to take a complement NP, then any unattached post-verbal NP can serve as its complement. 1

Prepositional phrases are more complicated since the status of PP as a complement often depends on the preposition. For example, the PP is a complement in agree on a plan and agree with everyone else, but it is an adjunct in agree after several hours of discussion because agree admits a PP complement only if the PP begins with a particular preposition, and after is not on its list. In such cases, determining complement status requires the parser to discriminate among particular types of a syntactic category. The parser does this by examining the left terminal of PP, namely, the particular preposition.

A similar solution applies to the identification of declaratives like the that clause in (1a-b). The clause in (1a) is a complement to think; that in (1b) is an adjunct.

(1) a. I thought that I might be free of it.
b. I left that I might be free of it.

These examples are straightforward because that uniquely identifies a clause as a declarative and verbs can be lexically marked if they take this complement type; thus think is marked for a that-clause, but leave is not. In such cases, the parser decides whether or not the clause is a complement by checking the syntactic category (Sentence), the lower left terminal of the sentence (that), and the lexical entry of the verb.

For infinitive phrases, however, determining complement status is more difficult. The infinitive phrase in (2a) is a complement to fail, that in (2b) is an adjunct. The one in (2c) is a complement to find.

(2) a. The drum ejector fails to function properly.
b. The drum ejector must cycle to function properly.
c. We found the drum ejector to be faulty.

Distinguishing between complements and adjuncts depends on first deciding what the internal structure of a string is, i.e. whether or not a string has the structure of a clause and, if so, then what type of clause. Infinitive phrases raise problems because they give few explicit clues to their internal structure.

1 In sentences where a NP is displaced, the post-verbal complement position is filled by a NP trace (Fiengo, 1974). Thus t (=trace) is a "place holder" for the interrogative NP which and the definite NP the circuits in (i) and (ii), respectively:
(i) Which circuits should we check t?
(ii) The circuits were checked t prior to installation.
Note that the parser must be prevented from attaching constituents as complements when a verb's complement positions are already filled. For example, repair takes a single NP as complement, as in repair forward kingpost and repair new circuits. In repair new ships forward kingpost, therefore, forward kingpost cannot be interpreted as a second NP object. The problem of complement numbering involves the argument structure of verbs. Although the parser currently uses syntactic rules to keep track of the number of complements attached, we expect that this should actually be handled by semantic interpretation rules that interact with the syntax to monitor argument structure and reject attachments to filled argument positions.
In (2a-b), for example, the infinitive has no overt subject and therefore bears little resemblance to clauses like that in (1a-b). Even if there is a noun that can be the subject, as in (2c), the only surface clue that identifies the infinitive is the embedded auxiliary to.

Recent work in transformational theory, specifically the government and binding (GB) framework of Chomsky (1981), suggests an analysis of clausal complements that is useful for making parsing decisions. In particular, we make use of the claim that infinitives and ing phrases have subjects that may be overt, as in (3a-b), or understood, as in (4a-b):

(3) a. They wanted [the contractor to make repairs]
b. This will facilitate [their making repairs]

(4) a. They attempted [to make repairs]
b. This will facilitate [making repairs]

A subject is overt or understood depending on whether it is a word listed in the lexicon or an abstract NP that is inserted into subject position by syntactic rules. In general, the abstract NP DELTA occurs as a non-lexical terminal in the context ___ to VP. The abstract possessive NP DELTA'S is a non-lexical terminal that is inserted in the context ___ ing VP (where previous syntactic rules have applied to verb+ing and moved the ing suffix to a position preceding the verb phrase). Abstract subjects help semantic interpretation determine the argument structure of propositions without adding new structure to the syntactic tree. For example, to identify the agent of fix in He promised them to fix it, semantic interpretation need only mark as coreferential the DELTA subject of fix and the matrix subject He, using structural constraints on the coindexing of NPs.

Because they have subjects, infinitive and ing phrases are assigned the structure of a clause, analogous to the embedded declarative of (1a-b). Consequently, we now have all the properties that are needed to identify a clause type by its syntactic category and leftmost terminal. The general rule is: If the clause begins with a complementizer (i.e. that, for, or a wh phrase such as who, what, which, how), then this complementizer is the left terminal that identifies the clause type; if the clause has no complementizer, then the subject NP (which is lexical or abstract) is the lower left terminal that identifies the clause type. We refer to clauses that contain a complementizer as S-bar nodes and assign them the structure in (5) (the COMP node contains the complementizer):

(5) S-bar -> COMP + NP + Aux + VP

Clauses without a complementizer are simply called S nodes and have the structure NP + Aux + VP.
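The general rule just stated can be read as a small decision procedure over the pair (syntactic category, left corner). The sketch below is our illustration only; the clause-type names and the sample lexical entries are invented, and a real grammar would of course carry much more detail.

WH_WORDS = {'who', 'what', 'which', 'whether', 'how'}

def clause_type(node_category, left_corner):
    """Classify a clause from its syntactic category and left corner alone.
    left_corner is a (word, lexical_category) pair; DELTA and DELTA'S are
    the abstract subjects inserted before 'to VP' and 'ing VP'."""
    word, lexcat = left_corner
    if node_category == 'S-bar':                # clause with a COMP node
        if word == 'that':
            return 'that-clause'
        if word == 'for' and lexcat == 'complementizer':
            return 'for-to infinitive'
        if word in WH_WORDS or lexcat == 'preposition':
            return 'wh-clause'                  # incl. PPs like "for which unit"
    elif node_category == 'S':                  # no complementizer
        if word == 'DELTA':
            return 'infinitive, understood subject'
        if word == "DELTA'S":
            return 'ing clause, understood subject'
        return 'lexical-subject clause'         # declarative or infinitive alike
    return 'unknown'

# A verb's lexical entry can then simply list the clause types it accepts:
COMPLEMENTS = {'think':   {'that-clause'},
               'arrange': {'for-to infinitive'},
               'assume':  {'lexical-subject clause'}}

Note that the last return value in the S branch deliberately does not distinguish declaratives from infinitives; as the next paragraphs show, the verbs that take lexical-subject clauses never need that distinction.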
As we observed earlier, identifying declaratives like that in (1a-b) is fairly easy because the leftmost terminal is the complementizer that, which uniquely specifies the clause, and because verbs can be lexically marked for taking that clauses as complements. For-to infinitives like the one in (6) are another clause type that can be identified fairly easily.

(6) We will arrange [for the shipyard to complete repairs]

In this case, the left corner is the complementizer for, which uniquely specifies the clause as an infinitive with a lexical subject. The verbs arrange, intend, prefer, and hate, among others, are lexically marked for taking a for-to complement.

Identifying wh clauses like those in (7a-b) is equally straightforward.

(7) a. Our investigation will indicate [whether we can repair the ring]
b. Our investigation will indicate [whether to repair the ring]

Since the wh words always mark the beginning of a wh clause and since the interior of the clause is irrelevant to its complement status -- wh clauses can be either declarative or infinitival -- the parser only needs to examine the left corner of the clause in order to identify it correctly. This is true even if the wh word is embedded in a prepositional phrase, as in

(8) The investigation will indicate [for which unit repairs should be implemented]

The only time prepositions occur in a COMP node is when they are part of a wh phrase like for which, with whom, by how many days, etc. Consequently, a clause whose dominating node is S-bar and whose leftmost terminal is lexically specified as [preposition] can always be identified correctly as a wh clause. The embedded clause in (8), where for is a preposition, is therefore distinguished from the embedded clause in (6), where for is lexically specified as a complementizer. It is also distinguished from the subordinate clause in We asked, for we wanted to be sure, where for is a preposition, as in (8), but the dominating node is a PP with the structure P + S.

If the embedded clause has no complementizer, it has no COMP node. Therefore, the left edge of the clause will be an abstract subject, as in (9a-c), or a lexical subject, as in (9d):

(9) a. Ship's force attempted [DELTA to make repairs]
b. He promised them [DELTA to fix it]
c. They tried [DELTA'S installing a new antenna]
d. We found [the transistors were bad]

Because DELTA is only inserted in the context ___ to VP, a clause with DELTA as its lower left terminal will always be identified correctly as an infinitive. Similarly, DELTA'S is only inserted in the context ___ ing VP, so that a clause with DELTA'S in the left corner can always be identified as an -ing clause.

When a clause begins with a lexical subject, however, it can be either an infinitive or a declarative; thus it is the subject of a declarative in (10a) but the subject of an infinitive in (10b).

(10) a. We assumed [it was inoperable].
b. We assumed [it to be inoperable].

This would be a serious problem if the distinction between declaratives and infinitives were relevant in determining complement status for verbs like assume. But as it happens, verbs like assume do not discriminate between infinitives and declaratives; the interior of the clause is irrelevant since only the subject type counts.
This generalization holds for each of the verbs in (11): 2

(11) assume, believe, claim, conclude, confirm, consider, demonstrate, discover, establish, feel, find, know, learn, observe, note, notice, report, say, show, suppose, think

The presence of a lexical subject in complements to the verbs in (11) thus parallels exactly the situation we described with the wh clauses: once the parser finds the left corner it needs no further information about the clause because the verbs that choose this complement type do not discriminate between declaratives and infinitives.

Data from other phrase types suggest that our account of complementation should not be limited to the verb system. Complements to adjectives, nouns, and prepositions follow patterns that parallel the ones we have just described. Each of these categories takes PP complements, e.g. sorry for them (Adj+PP), a promise to them (N+PP), and from behind some parked cars (P+PP). Each also takes clausal complements, although the range of clause types differs for each category. Infinitives with a lexical subject and no complementizer only occur with verbs. For example, the verb assume takes an infinitive complement in We assumed it to be inoperable but the noun assumption does not; our assumption it to be inoperable is not a possible N + S expression (although that clauses occur with both V and N, e.g. We assumed that it was inoperable, our assumption that it was inoperable). Nouns and adjectives do, however, take for-to and DELTA-subject complements; eager for them to leave and eager to leave are APs, the plans for them to meet and the plans to meet are NPs. Our investigations, though not yet complete, thus support a description of complementation in NPs, APs, and PPs that is consistent with the claims of the Left Corner Constraint.

2 Notice that the verbs conclude, learn, and say take an infinitive complement only in their passive form, e.g. This is said to be a fact, This was learned to be a fact, where the subject of the infinitive is a trace. Our parser assumes, following current transformational theory, that the syntax treats a trace in embedded subject position in the same way it treats a lexical subject (Chomsky 1981). The generalization includes verbs like seem and appear if, following Marcus (1980) and recent linguistic theory, infinitive complements to these verbs are analyzed with a trace subject, like the complements to learn and say. Thus (i) and (ii) are analogous to (10a) and (10b), respectively:
(i) It seems [the transistors are bad]
(ii) The transistors seem [t to be bad]

V IMPLICATIONS OF THE CONSTRAINT

It has often been assumed that, in order to make the correct decisions about a constituent, a parser must have access to certain information about the constituent's internal structure. Methods of providing this information explicitly include the "annotated surface structures" of Winograd (1972) and Marcus (1980), where nodes contain bundles of features that specify certain properties of internal structure, e.g. whether a clause is declarative or infinitive. The Left Corner Constraint introduces a new approach by claiming that all relevant information about internal structure can be inferred from the leftmost terminal of a constituent. The use of additional devices to record syntactic structure thus becomes unnecessary when the parser incorporates this constraint together with appropriate grammatical formalisms. In some cases, however, features and other explicit devices are needed.
We know of three: the attachment to COMP of a PP containing a wh phrase, the agreement between heads of a phrase, and the recognition of idioms. Specifically, when a PP contains a wh feature, a COMP node can be created only after the wh feature has been percolated up to the PP node from an internal position that can be deeply embedded, e.g. in how many days, from behind which cars. Agreement requires that features like "plural", "singular", and "human" be projected onto a phrase node (e.g. a subject NP) and matched against features on other phrase nodes. Idioms like make headway in we made substantial headway also require the parser to have access to more than the left corner.

Each of these cases involves special complications that may explain why they are exempt from the Left Corner Constraint. In a PP, these complications have to do with the depth of embedding of a wh word and with the optionality of preposition stranding (as in They need to know which units to look into). Complications from semantic interpretation arise with agreement patterns, which depend on selection as well as syntactic features, and with idioms, which reflect the interaction of syntactic structure and metaphor.

VI CONCLUSIONS

Our discussion of the Left Corner Constraint has focussed on results obtained in our studies of verb complementation, in particular, clausal complements. We have shown that, given certain concepts from the GB framework, the constraint enhances parsing efficiency because it allows the parser to infer properties of internal structure from the leftmost terminal; the parser can therefore avoid mentioning these properties explicitly. Specifically, we have shown that:

(1) Within a complement category (e.g. prepositional phrase, clause), complement types can be distinguished according to their leftmost terminal.

(2) The leftmost terminal of a clausal complement is always a complementizer or a subject NP, even if the complement is a "subjectless" clause. Clause types are therefore distinguished by their complementizer or by their subject NP, which is either a lexical item or one of the abstract NPs DELTA or DELTA'S.

(3) Verbs discriminate among clause types according to the leftmost terminal of a clause. Hence, the distinction between tensed and infinitival clauses is important in verb complementation only when it coincides with the distinctions among left corner elements.

(4) The Left Corner Constraint can lead to a more general description of complementation that includes the complement system of adjectives, nouns, and prepositions.

ACKNOWLEDGMENTS

We would like to thank Ralph Grishman, Constance Heitmeyer, and Stanley Wilson for many helpful comments on this paper.

REFERENCES

Chomsky, N. 1981. Lectures on Government and Binding. Dordrecht: Foris Publications.

Fiengo, R. 1974. On Trace Theory. Linguistic Inquiry, vol. 8, no. 1.

Fitzpatrick, E. 1983. Verb Complements in a Deterministic Parser. NRL Technical Memorandum #7590-077, March 1983.

Hindle, D. 1983. User Manual for Fidditch, A Deterministic Parser. NRL Technical Memorandum #7590-142, June 1983.

Marcus, M. 1980. A Theory of Syntactic Recognition for Natural Language. Cambridge, MA: MIT Press.

Petrick, S. R. 1974. Review of Winograd, "A Procedural Model of Natural Language Understanding." Computing Reviews, 15.

Winograd, T. 1972. Understanding Natural Language. New York: Academic Press.
TRACKING USER GOALS IN AN INFORMATION-SEEKING ENVIRONMENT

Sandra Carberry
Department of Computer Science
University of Delaware
Newark, Delaware 19711

ABSTRACT

This paper presents a model for hypothesizing and tracking the changing task-level goals of a speaker during the course of an information-seeking dialogue. It allows a complex set of domain-dependent plans, forming a hierarchical structure of component goals and actions. Our model builds the user's plan as the dialogue progresses, maintains both a local and a global plan context, and differentiates between past goals and goals currently pursued by the user. This research is part of a project to develop a robust natural language interface. If an utterance cannot be interpreted normally or a response cannot be generated due to pragmatic overshoot, the strong expectations about the utterance provided by our context model can be used as an aid in processing the input and producing useful responses.

I. INTRODUCTION

Determining the goals and plans of the speaker is essential in understanding natural language dialogue. A cooperative participant uses the information exchanged during the dialogue and his knowledge of the domain to hypothesize the speaker's goals and plans for achieving these goals. The speaker formulates his utterances under the assumption that they will be interpreted in this manner. This context of goals and plans provides clues for interpreting utterances and formulating cooperative responses. In the following, the second utterance can only be interpreted within the context of the speaker's goal, as communicated in the first utterance.

"I want to cash this check. Small bills only, please."

Similarly, a useful response to the query "Is Prof. Smith teaching Expert Systems next semester?" might be (1) if the speaker wants to take Expert Systems with Dr. Smith, (2) if the speaker's primary interest is the Expert Systems course, or (3) if the speaker's primary interest is Dr. Smith:

1. "No, but Prof. Smith is scheduled to teach it next year."
2. "No, Prof. Jones is teaching Expert Systems next semester."
3. "No, Prof. Smith is teaching Natural Language Processing next semester."

This paper presents a model for hypothesizing and tracking the changing goals of a speaker during the course of an information-seeking dialogue. Our research differs from previous work in an information-seeking environment in three ways:

[1] The knowledge base allows for a complex domain of goals and plans.

[2] The context mechanism builds the speaker's plan as the dialogue progresses and differentiates between local and global plan context.

[3] The history mechanism incorporates previous plans into the overall plan context.

II. OVERVIEW

A plan in our system (called TRACK) is a hierarchical structure of component goals and actions, each of which has an associated plan or is a primitive in the domain. Plans are represented using a STRIPS formalism [Fikes and Nilsson, 1971]; each plan contains preconditions, a set of partially ordered actions, and effects. Such a plan can be expanded to any level of detail; its full expansion will contain goals and actions that are also components of other fully expanded plans. The existence of a goal/action as an entity within the hierarchy indicates that it has a well-defined plan which an agent may follow and captures the generality of this entity within several higher level plans.

In most cases, a complete plan for the speaker cannot be built during the first part of a dialogue. Our approach is to infer a lower-level goal, relate it to potential higher-level plans, and build the complete plan context as the dialogue progresses. The local context is the goal and associated plan upon which the speaker is currently focused; the global context includes higher level goals and plans which led to the current local context. The context mechanism distinguishes local and global contexts and uses these to predict new speaker goals from the current utterance.
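As a concrete illustration of this plan representation, a STRIPS-style plan record might look as follows. This is our own sketch, not TRACK's actual encoding; the predicate and action names are adapted from the examples later in the paper.

from dataclasses import dataclass, field

@dataclass
class Plan:
    goal: str                                       # goal/action this plan achieves
    preconditions: list = field(default_factory=list)
    actions: list = field(default_factory=list)     # partially ordered subactions
    effects: list = field(default_factory=list)

PLANS = {
    'Earn-Credit(Agent,French112)': Plan(
        goal='Earn-Credit(Agent,French112)',
        preconditions=['Not-Earned-Credit(Agent,French112)',
                       'Earned-Credit(Agent,prerequisites-of(French112))'],
        actions=['Register(Agent,French112)', 'Pass(Agent,French112)'],
        effects=['Earned-Credit(Agent,French112)']),
    'Satisfy-Language(Agent)': Plan(
        goal='Satisfy-Language(Agent)',
        actions=['Earn-Credit(Agent, an intermediate language course)',
                 'Pass-Skills-Test(Agent, a foreign language)'],
        effects=['Satisfied-Language(Agent)']),
}

Because each subaction in `actions` may itself name a plan in the plan library, the full expansion of a plan is the hierarchy described above.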
TRACK was implemented for an information-seeking environment, as part of an ongoing project to develop a robust natural language interface. The domain is the courses, requirements, and policies for students at a university. It is assumed that the system and user share the belief that the user wants to obtain information relevant to a program of study and that the system is a capable and cooperative provider of such information. The context of speaker goals and plans constructed by TRACK can be used to interpret ill-formed input, handle pragmatic overshoot [Sondheimer & Weischedel, 1980], and produce helpful responses.

To transfer to another area, such as seeking information about real estate, only the corpus of domain-dependent plans and goals must be reconstructed; the decision-making heuristics need not be altered.

III. GOAL PROCESSING

At least three types of goal structures appear necessary: the immediate goal, derived goals/actions, and focused plans. The immediate goal is extracted directly from the semantic representation of the literal interpretation of the speaker's utterance. A derived goal or action is inferred from the immediate goal; it relates requests for information and indirect speech acts to task-dependent goals or actions. The focused goal is the goal that the speaker is currently pursuing; it has an associated focused plan. This focused plan produces the strongest expectations for understanding ellipsis and detecting unsignalled goal changes.

A. Derived Goals/Actions and Focused Plans

The inference rules to infer a derived goal or action from an immediate goal are based upon shared knowledge concerning the roles and capabilities of the speaker and the system. They represent compilations of some plan-recognition rules, including those responsible for indirect speech act interpretation [Allen, 1980; Sidner & Israel, 1981]. The following are a few of the inference rules for producing derived goals/actions:

[I1] If the speaker wants to know the x:P(x) that comprise the possible choices for the parameter in a subaction specified by the speaker, then the speaker may want to perform an action whose plan contains that subaction.
Example: "What Science course must I take?"

[I2] If the speaker wants to know the value of x, and x is a term in a precondition or subaction of a plan, then the speaker may want to perform the action represented by that plan.
Example: "What are the prerequisites of History 304?"

[I3] If the speaker wants to know how to achieve an effect, then the speaker's goal may be to achieve that effect.
Example: "How do I become a Computer Science major?"
Focused plans are constructed by relating the derived goals/actions to the domain-dependent set of plans. Candidate focused plans are produced by the following heuristics, in which DERIVED is a derived goal or derived action.

[F1] If DERIVED is an action and there is no plan for that action, then candidate focused plans are any plans which include DERIVED.

[F2] If DERIVED is an action and there is a plan for that action, then the candidate focused plan is that plan.

[F3] If DERIVED is a true predicate which is a precondition in a plan or the effect of an action in a plan, then that plan is a candidate focused plan.

[F4] If DERIVED is an unsatisfied predicate which is the effect of a plan, then that plan is a candidate focused plan.

B. Examples

Consider the query "Do I have credit for French 112?" The immediate goal is Knowif(Agent,Earned-Credit(Agent,French112)). One possible derived goal of the agent is that the agent have credit for French 112. However, as Allen points out [Allen, 1980], one may ask if x is true when one wants x to be false, as in the query "Am I on probation?". Thus from the immediate goal of knowing if x is true, the inference rules produce the two derived goals:

D1. Earned-Credit(Agent,French112)
D2. Not-Earned-Credit(Agent,French112)

If the agent has credit for French 112, then rule F3 applies to derived goal D1 and no rule applies to derived goal D2. Derived goal D1 is the effect of the action Earn-Credit(Agent,French112) in the plan for Satisfy-Language(Agent); therefore the plan for Satisfy-Language(Agent) becomes a candidate focused plan.

If the agent does not have credit for French 112, then rule F4 applies to derived goal D1 and rule F3 applies to derived goal D2. The plan for Earn-Credit(Agent,French112) has D1 as its effect; therefore rule F4 produces this plan as a candidate focused plan. The predicate Not-Earned-Credit(Agent,French112) is a precondition in the plan for Earn-Credit(Agent,French112). Therefore rule F3 applied to derived goal D2 again produces Earn-Credit(Agent,French112) as a candidate focused plan.

IV. CONTEXT MECHANISM

Two different forms of context processing are necessary. The first constructs a context model at the start of a dialogue or when the speaker terminates the current dialogue and pursues an entirely new task. The second type of context processing updates the context model as the dialogue continues.

A. Hypothesizing Initial Context

The immediate goal is extracted directly from the first utterance, the derived goals are obtained from the inference rules of the previous section, and the focused plan is computed from the heuristics of the previous section. The focused plan represents the local context. If only one focused plan exists and it is a plan for an action that appears in only one higher-level plan, then this higher-level plan forms part of a global context assumed obvious between speaker and hearer. This global context is built until a choice of higher-level plans must be made. The initial context tree contains this global context, if any, the focused plan, and the derived goal/action, all of which are marked as active constituents.

Otherwise the speaker must reintroduce this topic at a later time. In addition, the speaker will choose to continue with the current topic before switching back to a previous one. TRACK's heuristics for context processing are based on similar principles:

[1] a user will generally obtain all desired information about the currently focused task and the most recently considered subaction before considering other tasks;

[2] the path of currently active plans forms a stack of potential focused tasks to which the user may return.
TRACK uses the first applicable heuristic in the following set to select an appropriate focused plan and adjust the context tree.

[H1] If a breadth-first expansion of the most recently considered subaction within the current focused plan in a context tree includes a candidate focused plan, select the candidate focused plan that occurs earliest and expand the context tree to include it.

[H2] If a breadth-first expansion of the other actions in the current focused plan in a context tree includes a candidate focused plan, select the candidate focused plan that occurs earliest and expand the context tree to include it.

[H3] If an expansion of a plan in the active portion of a context tree includes a candidate focused plan, select that candidate focused plan. If more than one candidate focused plan meets this criteria, select the candidate focused plan that is a descendant of the plan deepest in the active portion of the context tree. Expand the active portion of the context tree to include the selected focused plan.

[H4] If an expansion of a candidate focused plan includes the plan for the root of a context tree, select that candidate focused plan, form a new context tree for it, and expand that tree to include the old context tree.

[H5] If an expansion of another plan includes both a current context tree and a candidate focused plan, select that candidate focused plan, form a context tree for it, and expand these two context trees upward until they meet as subtrees of the same higher-level plan.

[H6] If none of the above apply, the speaker either has incorrect beliefs regarding how to achieve his goals or has begun planning for an entirely new and unrelated goal.
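The control structure here is simply "first applicable heuristic wins". The sketch below is our illustration of that selection loop, with one simplified heuristic in the spirit of H3; the toy plan library is invented, and a faithful version would implement all six heuristics over full context trees rather than a flat path of active plans.

from collections import deque

# Toy plan library: plan name -> plans for its subactions (invented data).
SUBPLANS = {
    'Satisfy-Dept(BA)': ['Satisfy-Language', 'Satisfy-Major'],
    'Satisfy-Language': ['Earn-Credit(French112)', 'Pass-Skills-Test'],
}

def breadth_first(plan):
    """All plans reachable from `plan` in a breadth-first expansion."""
    queue, seen = deque([plan]), []
    while queue:
        p = queue.popleft()
        seen.append(p)
        queue.extend(SUBPLANS.get(p, []))
    return seen

def h_expand_active(tree, candidates):
    """Simplified H3: pick a candidate found by expanding a plan on the
    active path, preferring the deepest such plan.  Intermediate plans on
    the extended path are omitted for brevity."""
    for plan in reversed(tree):                 # deepest active plan first
        for c in breadth_first(plan):
            if c in candidates:
                return c, tree + [c]            # new focused plan, extended path
    return None

def select_focused_plan(tree, candidates, heuristics):
    for h in heuristics:                        # first applicable heuristic wins
        result = h(tree, candidates)
        if result is not None:
            return result
    return None                                 # H6: unrelated goal or bad beliefs

print(select_focused_plan(['Satisfy-Dept(BA)'],
                          {'Earn-Credit(French112)'},
                          [h_expand_active]))
# -> ('Earn-Credit(French112)', ['Satisfy-Dept(BA)', 'Earn-Credit(French112)'])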
In more complex domains, the speaker's complete plan consists of a hierarchy of subplans and subgoals. Such a com- plete plan is not immediately evident; further- more, the speaker's current goal within such a plan changes during the course of a dialogue. The TDUS system acts as an expert guiding an apprentice in the assembly of an air compressor[Robinson et al.,lWO]. Grosz[1977] developed the concept of a focus space hierarchy to represent those objects upon which the atten- tion of the dialogue participants was centered. Her system tracked the shifting focus of the apprentice and expert and was used to determine the referents of definite noun phrases in a dislo- gue. Robinson[l981] constructed a model of the actions and goals of the apprentice as inferred from the dialogue. The model contained a goal- action tree which represented execution of the task and differentiated between background goals and the goal/action currently focused upon by the apprentice. In contrast with such task-execution domains, the user in our environment is seeking information in order to formulate a plan for subsequent execu- tion. The information domain is relevant to many diverse plans and the user's overall task-related goal is not obvious at the start of a dialogue. Since the plan is not actually executed during the dialogue with the system, the user's utterances are not as tightly constrained by the structure inherent in the plan as are the utterances in the apprentice-expert task dialogue. The user may investigate several low-level subgoals which could be part of many higher-level plans and only later relate them as components of a specific plan. Natural language understanding requires that enough of the plan structure be built to represent the speaker's communicated plans and goals and that the system track the speaker's focus of attention within this plan structure. Reichman[l981] investigated social eonversa- tions and represented a participant's model of the 1 A* Earn-Credit(Agent,Frenchll2)1 Figure 2. The context tree produced in Example 3 62 discourse as a hierarzhial structure of "context spaces" with associated focusing information. Mann, Moore, and Levin[1977] designed a model of human language interaction. Their system analyzed and structured dialogues according to linguistic goals, such as "Seek-Permission" or "Describe-Problem", not task goals. VII. LIMITATIONS AND FUTURE WORK I_-- The TRACK system has been implemented for a domain consisting of a subset of the courses, requirements, and policies for students at a University. The system is presented with a logi- cal representation of the literal interpretation of a user's query and returns an updated context model. There are five areas for future work: [II c21 [31 [41 [51 The system mus t be extended sively defined plans. to handle recur- The system currently presumes that the user will seek information in a relatively coherent, organized manner. This restriction must be removed and provision made for stack- ing and later connecting sequences of utter- ances that at first appear unrelated. Certain utterances, such as "Is CSlO5 offered at night?" express user preferences. The system should be extended to infer and represent such preferences in a user model. This model could then be used to produce helpful responses that address the particular user's desires. Since the speaker may reconsider or refer back to an old deactivated goal, there must be heuristics for detecting this and merging new and old plans. 
VIII. CONCLUSIONS

This paper has presented a model for hypothesizing and tracking the changing task-level goals of a speaker during the course of a dialogue. It allows a complex set of domain-dependent plans, forming a hierarchical structure of component goals and actions. The system captures the generality of an inferred lower-level goal as a distinct entity within higher-level plans and builds the user's plan as the dialogue progresses. This eliminates the need for working with many separate complete plans at once. TRACK maintains both a local and a global plan context and differentiates between past goals and goals currently pursued by the user. TRACK is part of a project to develop a robust natural language interface [Sondheimer & Weischedel, 1980].

ACKNOWLEDGMENTS

I would like to thank Ralph Weischedel for his encouragement and direction in this research and for his suggestions on the style and content of this paper.

REFERENCES

Allen, J.F. and C.R. Perrault, "Analyzing Intention in Utterances", Artificial Intelligence 15:3, 1980.

Birnbaum, L., "Argument Molecules: A Functional Representation of Argument Structure", Proc. AAAI, August 1982.

Cohen, R., "Investigation of Processing Strategies for the Structural Analysis of Arguments", Proc. 19th Annual Meeting of the ACL, June 1981.

Fikes, R.E. and N.J. Nilsson, "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving", Artificial Intelligence 2, 1971.

Grosz, B.J., "The Representation and Use of Focus in a System for Understanding Dialogs", Proc. of the IJCAI, Pittsburgh, Pennsylvania, 1977.

Mann, W., J. Moore, and J. Levin, "A Comprehension Model for Human Dialogue", Proc. of the IJCAI, Cambridge, Massachusetts, Aug. 1977.

McKeown, K.R., "The Text System for Natural Language Generation: An Overview", Proc. of the 20th Annual Meeting of the ACL, Toronto, Ontario, Canada, June 1982.

Perrault, C.R. and J.F. Allen, "A Plan-Based Analysis of Indirect Speech Acts", American Journal of Computational Linguistics, July 1980.

Reichman, R., "Conversational Coherency", Cognitive Science, vol. 2, 1978.

Robinson, A.E., "Determining Verb Phrase Referents in Dialogs", American Journal of Computational Linguistics, Jan. 1981.

Robinson, A.E., Appelt, D.E., Grosz, B.J., Hendrix, G.G., and Robinson, J.J., "Interpreting Natural-Language Utterances in Dialogs about Tasks", Technical Note No. 210, SRI International, Menlo Park, California.

Sidner, C.L., "Focussing for Interpretation of Pronouns", American Journal of Computational Linguistics, October 1981.

Sidner, C. and D. Israel, "Recognizing Intended Meaning and Speakers' Plans", Proc. IJCAI, Aug. 1981.

Sondheimer, N. and R.M. Weischedel, "A Rule-Based Approach to Ill-Formed Input", Proc. 8th International Conf. on Computational Linguistics, 1980.
DIAGNOSIS VIA CAUSAL REASONING: PATHS OF INTERACTION AND THE LOCALITY PRINCIPLE*

RANDALL DAVIS
The Artificial Intelligence Laboratory
Massachusetts Institute of Technology
545 Technology Square
Cambridge, MA 02139

Abstract

Interest has grown recently in developing expert systems that reason "from first principles", i.e., systems capable of the kind of problem solving exhibited by an engineer who can diagnose a malfunctioning device by reference to its schematics, even though he may never have seen that device before. In developing such a system for troubleshooting digital electronics, we have argued for the importance of pathways of causal interaction as a key concept. We have also suggested using a layered set of interaction paths as a way of constraining and guiding the diagnostic process. We report here on the implementation and use of these ideas. We show how they make it possible for our system to generate a few sharply constrained hypotheses in diagnosing a bridge fault. Abstracting from this example, we find a number of interesting general principles at work. We suggest that diagnosis can be viewed as the interaction of simulation and inference, and we find that the concept of locality proves to be extremely useful in understanding why bridge faults are difficult to diagnose and why multiple representations are useful.

1. INTRODUCTION

Interest has grown recently in the development of expert systems that reason "from first principles", i.e., from an understanding of the structure and function of the devices they are examining. This approach has been explored in a number of domains, with the "devices" ranging from the gastro-intestinal tract [6], to transistors [1] and digital logic components like adders or multiplexors [3,5]. Our work has focused on the last of these, attempting to build a troubleshooter for digital electronic hardware. By reasoning from first principles, we mean the kind of skill exhibited by an engineer who can troubleshoot a device by reference to its schematics, even though he may never have seen that particular device before. To do this we require something more than a collection of empirical associations specific to a given machine. We will see that the alternative mechanism has a degree of machine independence and is revealing for what it indicates about the nature of the diagnostic process.

We have previously proposed the use of a layered set of models as a mechanism for guiding diagnosis [2,3]. Here we describe the implementation of that idea and demonstrate its utility in diagnosing a bridge fault. We then abstract from this example to consider why bridge faults are difficult to diagnose and why multiple representations are useful. This results in a number of observations about the nature of diagnostic reasoning and the selection and design of representations.

* This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research on electronic troubleshooting is provided in part by a grant from the Digital Equipment Corporation.

2. CENTRAL CONCERNS

Four issues are of central concern in this paper. We describe them here briefly, enlarging on them in the remainder of the paper.

- Diagnosis can be accomplished via the interaction of simulation and inference. Given knowledge of the inputs to a device and an understanding of how it is supposed to work, we can generate expectations about its intended behavior.
Given observations about its outputs, we can generate conclusions about its actual behavior. Comparison of these two, in particular differences between them, provides the foundation for our troubleshooting.

- Paths of causal interaction play a central role in diagnosis. An important part of the knowledge about a domain is understanding the mechanisms and pathways by which one component can affect another. We argue that such models of interaction are more fundamental than traditional fault models.

- One technique for dealing with the complexity of diagnosis is layering the paths of interaction. To be good at hardware diagnosis, we need to handle many different kinds of paths of interaction. But this presents a problem: including all of them destroys our ability to discriminate among potential candidates, yet omitting any one of them makes it impossible to diagnose an entire class of faults. In response, we suggest the simple expedient of layering the models, using the most restrictive first and falling back on less restrictive models only in the face of contradictions.

- The concept of locality proves to be a useful principle in both diagnosis and the selection of representations. We find that the concept of locality, or adjacency, helps to explain why bridge faults are difficult to diagnose: changes small and local in one representation are not necessarily small and local in another. We discover that locality can be defined by reference to the paths of interaction, and find that the utility of multiple representations arises in part from the different definitions of locality they offer.

3. BACKGROUND

If we wish to reason from knowledge of structure and behavior, we need a way of describing both. We have developed representations for each of these, described in more detail elsewhere [3,4]. We limit our description here to reviewing only those characteristics of our representations important for understanding the example in Section 4. The basic unit of description is a module, similar in spirit to the notion of a black box. Modules have ports, the places through which information enters and leaves the module.

3.1 Functional Organization, Physical Organization

By structure we mean information about the interconnection of modules. Roughly speaking, it is the information that would remain after removing all the textual annotation from a schematic. Two different ways of organizing this information are particularly relevant to machine diagnosis: the functional view gives us the machine organized according to how the modules interact; the physical view tells us how it is packaged. We thus prefer to replace the somewhat vague term "structure" by the more precise terms functional organization and physical organization.

In our system every device is described from both perspectives, producing two distinct (but interconnected) descriptions. Both descriptions are hierarchical in the usual sense: modules at any level may have substructure. An adder, for example, can be described by a functional hierarchy (adder, individual bit slices, half-adders, primitive gates) and a physical hierarchy (cabinet, board, chip). The two hierarchies are interconnected, since every primitive module appears in both: a single xor-gate, for example, might be both functionally part of a half-adder, which is functionally part of a single bit-slice of an adder, etc., and physically part of chip E67, which is physically part of board 5, etc. Cross-link information for primitive modules is supplied by the schematic; additional cross-links can be inferred by intersection (e.g., the adder can be said to be on board 3 because all of its primitive components are in chips on board 3).
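To make the dual-organization idea concrete, here is a minimal sketch (ours, not the authors' implementation; all class and instance names are illustrative) of a primitive module registered in both hierarchies:

    # Sketch: one primitive module, two interconnected hierarchies.
    class Module:
        def __init__(self, name):
            self.name = name
            self.functional_parent = None  # half-adder, bit slice, ...
            self.physical_parent = None    # chip, board, cabinet, ...

    def link(child, parents, kind):
        # Chain child under successive ancestors in one hierarchy.
        node = child
        for p in parents:
            setattr(node, kind + "_parent", p)
            node = p

    xor1 = Module("xor-1")
    ha1, slice1 = Module("half-adder-1"), Module("bit-slice-1")
    e67, board5 = Module("chip-E67"), Module("board-5")
    link(xor1, [ha1, slice1], "functional")
    link(xor1, [e67, board5], "physical")

    def ancestry(m, kind):
        names = []
        while m is not None:
            names.append(m.name)
            m = getattr(m, kind + "_parent")
        return names

    print(ancestry(xor1, "functional"))  # ['xor-1', 'half-adder-1', 'bit-slice-1']
    print(ancestry(xor1, "physical"))    # ['xor-1', 'chip-E67', 'board-5']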
3.2 Describing Behavior

We define behavior in terms of the relationship between the information entering and leaving a module, and describe it by writing a set of rules. A complete specification of a module, then, includes its structural description as outlined above and a behavior description in the form of rules interrelating the information at its ports. As we have noted elsewhere [3,4], we use rules that capture two distinctly different forms of knowledge: simulation rules model the electrical behavior of a device, while inference rules capture the reasoning we can do about it. As a simple example, consider the behavior of an OR gate. The device simulation rule is (rendered in English to make it clear; for an example of the internal syntax see [3]):

If either input is a 1, then the output is 1, else the output is 0.

One of the device inference rules is:

If the output is 0, then both inputs must have been 0.

Since the device is electrically unidirectional, it is clear that only the first rule can be modeling physical causality. The second rule, and the inference rules in general, capture conclusions we can make about the inputs of the device given its output. This approach to describing behavior is very simple, but has nevertheless provided a good starting point for our work.
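The two rule types are easy to state as executable checks. The following pair of functions is our own rendering, not the system's actual rule syntax (for that, again see [3]):

    def or_simulate(in1, in2):
        # Simulation rule: models electrical behavior, inputs -> output.
        return 1 if (in1 == 1 or in2 == 1) else 0

    def or_infer(output):
        # Inference rule: reasons backward from an observed output to
        # what the inputs must have been, assuming the gate is working.
        if output == 0:
            return (0, 0)   # both inputs must have been 0
        return None         # an output of 1 does not pin down the inputs

    assert or_simulate(0, 1) == 1
    assert or_infer(0) == (0, 0)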
3.3 Troubleshooting

In previous papers [2,3] we outlined a progression of techniques that have been used in automated reasoning about circuits. We discussed test generation and argued that it handles only part of the problem, because it requires that we choose a part to test and specify how it might be failing. We then described discrepancy detection, showing how it offered important advantages. But in examining cases involving a bridge fault or power failure, we discovered that straightforward use of discrepancy detection seemed unable to generate the appropriate candidates. We argued that the problem lay in distinguishing carefully between the machinery we use for solving problems and the knowledge that we give that machinery to work with.

3.3.1 Discrepancy Detection and Candidate Generation

Since understanding both the strengths and limitations of discrepancy detection is important in the remainder of this paper, we review the technique briefly. Consider the simple example shown in Figure 1.

Figure 1 - Simple troubleshooting example

Assume that the actual device yields a 0, producing a discrepancy between what our simulation rules predicted and what the device produced. We begin the process of generating plausible candidates --- devices whose misbehavior can explain the symptoms --- by asking why we expected a 1 at the output. There are three reasons: we expected that the OR gate was working, we expected INPUT-A to be 0, and INPUT-B to be 1. Assuming that there is a single point of failure, one of these expectations must be incorrect. If the first expectation is incorrect, then the OR gate is failing, hence we can add that to our candidate list. If one of the other expectations is incorrect, the OR gate is working and the problem lies further back. But if the OR gate is working, the inference rules about it are valid. In this case the inference rule shown earlier would indicate that both inputs must have been 0. This matches our second expectation (INPUT-A = 0), so there is no discrepancy and thus no need to explore this expectation further. That is, the devices "upstream" of INPUT-A may or may not be completely free of faults, but under the current set of assumptions (made explicit below), none of them can be responsible for the observed misbehavior. There is a discrepancy between our inference and the third expectation, since we expected a 1 from the AND gate. We proceed now with the AND gate just as we did with the OR gate, asking why we expected a 1, adding the gate to our list of candidates and pushing the inferred values yet further back in the circuit.

We describe this style of diagnostic reasoning as the interaction of simulation and inference. Simulation generates expectations about correct behavior based on inputs and knowing how devices work (the device simulation rules). Inference generates conclusions about actual behavior based on observed outputs and device inference rules. The comparison of these two, in particular differences between them, provides the foundation for our troubleshooting and has produced a system with a number of advantages. It is, first of all, fundamentally a diagnostic technique, since it allows systematic isolation of possibly faulty devices. Second, since it defines failure functionally, i.e., as anything that doesn't match the expected behavior, it can deal with a wide range of faults, including any systematic misbehavior. Third, while we have illustrated it here at the gate level, the approach also allows natural use of hierarchical descriptions, a marked advantage for dealing with complex structures (see, e.g., [3]). Finally, the technique also yields symptom information about the malfunction. For example, if the OR gate is indeed the culprit, then we know a little about how it is misbehaving: it is receiving 0 and 1 and producing 0. The utility of this information is demonstrated below.
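The shape of this candidate generation can be sketched in a few lines. The netlist below reproduces the spirit of Figure 1 under simplifying assumptions of our own (single fault, two-input gates, no fan-out, and backward inference only where an observed output determines the inputs uniquely); names and values are illustrative:

    GATE_DEFS = {
        "or":  {"sim": lambda a, b: a | b,
                "infer": lambda out: (0, 0) if out == 0 else None},
        "and": {"sim": lambda a, b: a & b,
                "infer": lambda out: (1, 1) if out == 1 else None},
    }
    NET = {  # gate name -> (gate type, input sources)
        "AND1": ("and", ["IN-C", "IN-D"]),
        "OR1":  ("or",  ["IN-A", "AND1"]),
    }
    INPUTS = {"IN-A": 0, "IN-C": 1, "IN-D": 1}

    def simulate(node):
        if node in INPUTS:
            return INPUTS[node]
        kind, srcs = NET[node]
        return GATE_DEFS[kind]["sim"](*(simulate(s) for s in srcs))

    def candidates(node, observed):
        # No discrepancy here: nothing upstream need be blamed.
        if node in INPUTS or simulate(node) == observed:
            return []
        kind, srcs = NET[node]
        found = [node]  # the gate itself may be the broken one
        # If instead the gate is working, its inference rule tells us
        # what its inputs must have been; push discrepancies back.
        inferred = GATE_DEFS[kind]["infer"](observed)
        if inferred is not None:
            for src, must_be in zip(srcs, inferred):
                found += candidates(src, must_be)
        return found

    # The OR gate should output 1 but we observe 0:
    print(candidates("OR1", 0))  # ['OR1', 'AND1']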
3.3.2 Mechanism and Knowledge

While this mechanism --- the interaction of simulation and inference --- is very useful, it is only as powerful as the knowledge we supply. Recall that in the example above, when exploring the cause of the discrepancy on INPUT-B, we looked only at the AND gate. Why didn't we think that some other module, like the inverter, could have produced the problem there? The answer of course is that there is no apparent connection between them, hence no reason to believe one might affect the other. Note carefully the character of this assumption: it concerns the existence of causal pathways, the applicability of a particular model of interaction. We saw no way in which the inverter could affect INPUT-B, yet a pathway is clearly plausible --- via a bridge fault, for example. We were implicitly assuming that there was no such pathway.

We believe that the important focus in this work is understanding such assumptions and the nature and character of the pathways. This understanding is crucial to candidate generation: given a discrepancy noticed at some point in the device, candidate generation attempts to determine which modules could have caused the problem. To answer the question we must know by what mechanisms and pathways modules can interact. Without some notion of how modules can affect one another, we can make no choice; we have no basis for selecting any one module over another.

In this domain the obvious answer is "wires": modules interact because they're explicitly wired together. But that's not the only possibility. As we saw, bridges are one exception; they are "wires" that aren't supposed to be there. But we also might consider thermal interactions, capacitive coupling, transmission line effects, etc. Generating candidates, then, is not done by tracing wires, it is done by tracing paths of causality. Wires are only the most obvious pathway.

In fact, given the wide variety of faults we want to deal with, we need to consider many different pathways of interaction. And that leaves us on the horns of a classic dilemma. If we include every interaction path, candidate generation becomes indiscriminate --- there will be some (possibly convoluted) pathway by which every module could conceivably be to blame. Yet if we omit any pathway, there will be whole classes of faults we will never be able to diagnose.

The key appears to lie in the models of interaction: we suggest that the difficult and important work is their enumeration and careful organization. We get a hint about organization from what a good engineer might do when faced with the dilemma above: make a number of assumptions to simplify the problem, making it tractable, but be prepared to discover that some of those assumptions are incorrect. In that case, surrender them and solve the problem again with fewer simplifications.

This leads to the suggestion of layering the models. We start the diagnosis with the most restrictive model, the one that considers the fewest paths of interaction, and only use less restrictive models if this one fails. By "fail" we mean that we reach an intractable contradiction: given the current model and set of assumptions, there is no way to account for the observed behavior. This approach permits us to simplify the problem in order to get started, but does not prevent us from exploring more complex hypotheses. A plausible guess at an ordering for the models might be (for the rationale behind this ordering see [2]):

* localized failure of function (e.g., stuck-at on a wire, failure of a RAM cell)
* bridges
* unexpected direction (inputs acting as outputs and driving lines)
* multiple points of failure
* timing errors
* assembly error
* design error

In terms of the dilemma noted above, the models serve as a set of filters. They restrict the categories of paths of interaction we are willing to consider, thereby preventing the candidate generation from becoming indiscriminate. But they are filters that we have carefully ordered and consciously put in place. If we cannot account for the observed behavior with the current filter in place, we remove it and replace it with one that is less restrictive, allowing us to consider additional categories of interaction paths.
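The control structure of this layering is simple enough to state directly. In the sketch below (ours), diagnose_under is a stand-in for the candidate generator run under a single interaction model:

    MODELS = [
        "localized functional failure",
        "bridges",
        "unexpected direction",
        "multiple points of failure",
        "timing errors",
        "assembly error",
        "design error",
    ]

    def diagnose(symptoms, diagnose_under):
        # Try each model in turn, most restrictive filter first. An
        # empty candidate set means this model cannot explain the
        # symptoms, so surrender it and weaken the filter.
        for model in MODELS:
            cands = diagnose_under(model, symptoms)
            if cands:
                return model, cands
        return None, []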
4. LAYERS OF INTERACTION EXAMPLE: DIAGNOSING A BRIDGE FAULT

In this section we show how our system diagnoses a bridge fault, illustrating the utility of layering the interaction models. There is, alas, a large amount of detail involved in working through this example. Where possible we have abstracted out much of it, but patience and a willingness to read closely will still be useful.

A simple roadmap of the example will help make clear where we're going. The device is a 6-bit adder that displays an incorrect result in test T1. The candidate generation mechanism outlined earlier produces a set S1 of three sub-components of the adder that can account for the misbehavior. A second test T2 is run to distinguish among the three possibilities in S1. Candidate generation produces a set S2 of two candidates capable of explaining the results of T2. Surprisingly, the intersection of S1 and S2 is null. We have reached a contradiction: no single component is capable of explaining all the data.

Put slightly differently, we have a contradiction under the current set of assumptions and interaction models. We therefore have to surrender one of our assumptions and use a less restrictive model. The next model in the list --- bridge faults --- surrenders the assumption that the structure is as shown in the schematic and considers one additional interaction path: wires between adjacent pins. Surrendering the assumption that the schematic is correct only indicates that we know what the structure is not; the difficult problem is generating plausible hypotheses about what it is. Knowledge of electronics offers insight into how the physical modification --- adding a wire --- manifests itself functionally. This provides us with a behavior pattern characteristic of bridges that can be used to hypothesize their location. Physical adjacency then provides a strong additional constraint on the set of connections which might be plausible bridges. The combined requirement of functional and physical plausibility results in the generation of only a very few carefully chosen bridge hypotheses.

The first attempt to apply these ideas produces two hypotheses that are plausible functionally, but prove to be implausible physically. Dropping down a level of detail in our description reveals additional bridge candidates, two of which prove to be physically plausible as well. Further tests determine that one of them is in fact the error.

4.1 The Example

Consider the six-bit adder shown in Fig. 2. Assume that the attempt to add 21 and 19 produces 36 rather than the expected value of 40. Invoking the candidate generation process described above, we would find that there are three devices whose individual malfunction can explain the behavior: SLICE-1, A2 and SLICE-2. (The example has been simplified slightly for presentation.)

Figure 2 - Six bit adder constructed from single bit slices. Heavy lines indicate components implicated as possibly faulty.

A good strategy when faced with several candidates is to devise a test that can cut the space of possibilities in half. In this case changing the first input (21) to 1 will be informative: if the output of SLICE-2 does not change (to a 0) when we add 1 and 19, then the error must be in either A2 or SLICE-2. (The logic behind this test is as follows: if the malfunctioning component really were SLICE-1, then both A2 and SLICE-2 would be fault-free, by the single fault assumption. Hence the output of SLICE-2 would have to change when we changed one of its inputs. Notice, however, that if the output actually does change, we don't have any clear indication about the error location: SLICE-2, for example, might still be faulty. The generation of tests in this paper is currently done by hand; everything else is implemented. Work on automating test generation is in progress [7].)

As it turns out, the result of adding 1 and 19 is 4 rather than 20. Since the output of SLICE-2 has not changed, it appears that the error must be in either A2 or SLICE-2. But if we invoke the candidate generator, we discover an oddity: the only way to account for the behavior in which adding 1 and 19 produces a 4 is if one of the two candidates highlighted in Fig. 3 (B4 and SLICE-4) is at fault.

Figure 3 - Components indicated as possibly faulty by the second test.

Therein lies our contradiction. The only candidates that account for the behavior of the first test are those in Fig. 2; the only candidates that account for the second test are those in Fig. 3. There is no overlap, so there is no single candidate that accounts for all the observed behavior. Our current model --- the localized failure of function --- has thus led us to a contradiction. (Note that dropping down another level of detail in the functional description cannot help resolve the contradiction, because our functional description is a tree rather than a graph: in our work to date, at least, no component is used in more than one way.) We therefore surrender it and consider the next model, one that allows us to consider an additional kind of interaction path --- bridging faults.
The problem now is to see if there is some way to unify the test results, some way to generate a single bridge fault candidate that accounts for all the observations. Much of the difficulty in dealing with bridges arises because they violate the rather basic assumption that the structure of the device is in fact as shown in the schematic. But admitting that the structure may not be as pictured says only that we know what the structure isn't. Saying that we may have a bridge fault narrows it to a particular class of modifications to consider, but the real problem here remains one of making a few plausible conjectures about modifications to the structure. Between which two points can we insert a wire and produce the behavior observed?

To understand how we answer that question, consider what we have and what we need. We have test results, i.e., behavior, and we want conjectures about modifications to structure. The link from behavior to structure is provided by knowledge of electronics: in TTL, a bridge fault acts like an AND gate, with ground dominating. (This is in fact an oversimplification, but accurate enough to be useful. In any case, the point here is how the information is used; a more complex model could be substituted and carried through the rest of the problem.) From this fact we can derive a simple pattern of behavior indicative of bridges.

Consider the simple example of Fig. 4 and assume that we ran two tests. Test 1 produced one candidate, module A, which should have produced a 1 but yielded a 0 (the zero is underlined in the figure to show that it is an incorrect output). Module B was working correctly and produced a 0 as expected. In Test 2 this situation is exactly reversed: A was performing as expected and B failed. The pattern displayed in these two tests makes it plausible that there is a bridge linking the outputs of A and B: in the first test the output of A was dragged low by B, in the second test the output of B was dragged low by A.

Figure 4 - Pattern of values indicative of a bridge. Heavy lines indicate candidates.

We have thus turned the insight from electronics into a pattern of values on the candidates. It is plausible to hypothesize a bridge fault between two modules A and B from two different tests if: in test 1, A produced an erroneous 0 and B produced a valid 0, and in test 2, A produced a valid 0 while B produced an erroneous 0. Note that this can resolve the contradiction of non-overlapping candidate sets: it hypothesizes one fault that involves a member of each set and accounts for all the test data. Thus, if we want to account for all of the test data in the original problem with a single bridge fault, we need a bridge that links one of the candidates from the first test (SLICE-1, A2, SLICE-2) with one of the candidates from the second test (B4, SLICE-4) and that mimics the pattern shown in Fig. 4.
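The pattern just stated translates almost verbatim into a test over the two candidate sets. This sketch is ours; each test is assumed to be summarized as a map from module name to an (observed, expected) output pair:

    def bridge_hypotheses(test1, test2):
        # a and b qualify if each shows the erroneous-0 / valid-0
        # alternation across the two tests (ground-dominant bridge).
        def bad0(t, m):   # produced 0 where a 1 was expected
            return t.get(m) == (0, 1)
        def good0(t, m):  # produced 0, and 0 was expected
            return t.get(m) == (0, 0)
        return [(a, b) for a in test1 for b in test2
                if a != b
                and bad0(test1, a) and good0(test1, b)
                and good0(test2, a) and bad0(test2, b)]

    # The two tests of Fig. 4: A fails in test 1, B fails in test 2.
    t1 = {"A": (0, 1), "B": (0, 0)}
    t2 = {"A": (0, 0), "B": (0, 1)}
    print(bridge_hypotheses(t1, t2))  # [('A', 'B')]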
Fig. 5 shows the candidate generation results from both tests in somewhat more detail. (As indicated earlier, the candidate generation procedure can indicate for each candidate the values that would have to exist at its ports for that candidate to be the broken one. For example, for SLICE-1 to be at fault in test 1, it would have to have the three inputs shown, with its sum output a zero, as expected, and its carry output also a zero, the manifestation of the error.) In that data there are two pairs of devices that match the desired pattern, yielding two functionally plausible bridge hypotheses: dotted line X, bridging wire A2 to the sum output of SLICE-4; and dotted line Y, bridging the carry output of SLICE-2 to the sum output of SLICE-4.

Figure 5 - Candidates and values at their ports.

But the faults have to be physically plausible as well. For the sake of simplicity, we assume that bridge faults result only from solder splashes at the pins of chips. (Again this is correct but oversimplified; e.g., backplane pins can be bent or bridged. But as above we can introduce a more complex model if necessary.) To check physical plausibility, we switch to our physical representation, Fig. 6. Wire A2 is connected to chip E1 at pin 4 and chip E3 at pin 4; the sum output of SLICE-4 emerges at chip E2, pin 13. Since they are not adjacent, the first hypothesis is not physically reasonable. Similar reasoning rules out Y, the hypothesized bridge between the carry-out of SLICE-2 and the sum output of SLICE-4.

Figure 6 - Physical layout of the board with first bridge hypotheses indicated: I - end of A2; II - sum output of SLICE-4; III - carry out of SLICE-2. (Slices 0, 2, and 4 are in the upper 5 chips; slices 1, 3, and 5 are in the lower 5.)

So far we have considered only the top level of functional organization. We can run the candidate generator at the next lower level of detail in each of the non-primitive components in Fig. 5. (Dropping down a level of detail proves useful here because additional substructure becomes visible, effectively revealing new places that might be bridged.) We obtain the components and values shown in Fig. 7. Checking here for the desired pattern, we find that either of the two wires labeled A2 and S2 could be bridged to either of the two wires labeled S4 and C4, generating four functionally plausible bridge faults.

Figure 7 - Candidates at the next level of functional description. Each single-bit adder is built from two "half-adders" and an OR gate. (To simplify the figure, only the relevant values are shown.)
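The physical-plausibility filter is then a lookup in the physical representation. A minimal sketch, with invented pin coordinates standing in for the layout of Fig. 6:

    # wire -> list of (chip, pin) places where it surfaces (invented).
    PINS = {
        "A2": [("E1", 4), ("E3", 4)],
        "S4": [("E2", 13)],
    }

    def adjacent(p, q):
        # Same chip, neighboring pin numbers. (A fuller model would
        # also treat pins facing each other across the package.)
        return p[0] == q[0] and abs(p[1] - q[1]) == 1

    def physically_plausible(wire_a, wire_b):
        return any(adjacent(p, q)
                   for p in PINS[wire_a] for q in PINS[wire_b])

    # The top-level hypothesis X is ruled out: no adjacent pin pair.
    print(physically_plausible("A2", "S4"))  # False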
It is easy to find a test that distinguishes between these two possibilities” : adding 0 and 4 means that the inputs of SLICE-2 will be 1 and 0, with a carry-in of 0, while the inputs of SLICE-4 will both be 0, with a carry-in of 0. This set of values will show the effects of bridge Y, if it in fact exists: the sum output of SLICE-2 will be 0 if it does exist and a 1 otherwise. When we perform this test the result is 1, hence bridge Y is not in fact the problem. Bridge X becomes the likely answer, but we should still test for it directly. Adding 4 and 0 (i.e., just switching the order of the inputs), is informative: if bridge X exists the result will be 0 and 1 otherwise. In this case the result is 0, hence the bridge labeled X is in fact the problem.” 5. PATHS OF INTERACTION; THE LOCALITY PRINCIPLE Two interesting questions are raised by the problem solving used just above. Why are bridge faults difficult to diagnose? Why does the physical representation prove to be so useful? To see the answer, we start with the trivial observation that all faults are the result of some difference between the device as it is and as it should be. With bridge faults the difference is the addition of a wire between two physically adjacent points. Now recall the nature of our task: we are presented with a device that misbehaves, not one with obvious structural damage. Hence we reason from behavior, i.e., from the functional representation. And the important point is that for a bridge fault, the difference in question --- the addition of a single wire --- is not local in that representation. As the comparison of Figs. 8 and 9 makes clear, the new wire connects two points that are adjacent in the physical representation but widely separated in the functional representation. The difference is also not as simple in that representation: if we include in our functional diagram the AND gate implicitly produced by bridge X, we see that a single added wire in the physical representation maps into an AND gate and a fanout in the functional representation (Fig. 10). Figure 10 - Full functional representation of bridge fault X. 9. Note that the erroneous 0 on wire S2 can be in any Of three physical location% because S2 tans out (inside the module it enters on its right). 10. As above, tests are generated by hand. 11. Had both been ruled out by direct test, then we would once again have had a contradictron on our hands and would have had to drop back to consider yet a more elaborate model with additional paths of interaction. 93 This view helps to explain why bridge faults produce behavior that is difficult to envision and diagnose. Bridge faults are modifications that are simple and local in the physical description, but our diagnosis is done using the functional description. Hence the dilemma: The desire to reason from behavior requires us to use a representation that does not necessarily provide a compact description of the fault. This non-locality and complexity should not be surprising, since devices physically adjacent are not necessarily functionally related. Hence there is no guarantee that a change that is small and local in one will produce a change that is small and local in the other. More generally, changes local in one representation are not necessarily local in another. We can turn this around to put it to work for us: Part of fhe art of choosing the right representationfs) for diagnostic reasoning is finding one in which the change in question & local. 
This explains the utility of the physical representation: it’s the “right” one because it’s the one in which the change is local. But why is locality the relevant organizing principle? We believe the answer follows from two facts: (a) devices interact through physical processes (voltage on a wire, thermal radiation, etc.) and (b) physical processes occur locally, or more generally, causality proceeds locally: there is no action at a distance. To make this useful, we turn it around: The mechanisms (paths) of interaction define locality for us. That is, each kind of interaction path can define a representation. Bridge faults arise from physical adjacency and hence are local in the physical representation. The notion of fhermal adjacency would be useful in dealing with faults resulting from heat conduction or radiation, electromagnetic adjacency would help with faults dealing with transmission line effects, etc. Each of these produces a different representafion, different in its definition of locality. And each will be useful for understanding and diagnosing a category of fault. There is still substantial work to do in enumerating the pathways of interaction, but we seem at least to be asking the right question. It seems to make sense for a wide range of faults and appears to be applicable to other domains as well. When debugging software, for example, the pathways of interaction differ (e.g., procedure call, mutation of data structures), but the resulting perspectives appear to make sense and there are some interesting analogies (e.g., unintended side effects in software are in some ways like bridge faults; there are even faults where the notion of “physical adjacency” is useful in understanding the bug, as in out of bounds array addressing). 6. SUMMARY We seek to build a system. that reasons from first principles in diagnosing hardware failures. We view diagnosis as the interaction of simulation and inference, with discrepancies between them driving the generation of candidates. In exploring this interaction, we find that the concept of paths of causal interaction plays a key role, supplying the knowledge that makes the diagnostic machinery work. But the desire to deal with a wide range of faults seems to force us to choose between an inability to discriminate among candidates and the inability to deal with some classes of faults. In response, we suggest layering the interaction models, using the most restrictive first and hence considering the fewest paths of interaction initially. If this fails to generate a consistent hypothesis, we use the next model in the sequence, one which allows consideration of an additional pathway. We illustrated this approach by diagnosing a bridge fault, sharply constraining the generation of hypotheses by using the PhYSical representation as well as the functional. Finally, we found this to be one example of an important general principle .._ locality --- and discovered that one useful definition of locality is given by the pathways of interaction. Acknowledgments Contributions to this work we made by all of the members of the Hardware Troubleshooting project at MIT, including: Howie Shrobe, Walter Hamscher, Karen Wieckert, Mark Shirley, Harold Haig, Art Mellor, John Pitrelli, and Steve Polit. Bruce Buchanan and Patrick Winston offered a number of very useful comments on earlier drafts. REFERENCES [I] Brown J S, Burton R, deKleer J, Pedagogical and knowledge engineering techniques in the SOPHIE systems, Xerox Report CIS-14, 1981. 
[2] Davis R, Reasoning from first principles in hardware troubleshooting, Intl Journal of Man-Machine Studies, to appear, 1983.
[3] Davis R, et al., Diagnosis based on structure and function, Proc AAAI 1982, pp 137-142, August 1982.
[4] Davis R, Shrobe H E, Representing structure and function, IEEE Computer, to appear Sept 1983.
[5] Genesereth M, The use of hierarchical models in the automated diagnosis of computer systems, Stanford HPP memo 81-20, December 1981.
[6] Patil R, Szolovits P, Schwartz W, Causal understanding of patient illness in medical diagnosis, Proc IJCAI-81, August 1981, pp 893-899.
[7] Shirley M, Davis R, Digital test generation from symptom information, IEEE 1983 VLSI Workshop, to appear.
QE-III: A FORMAL APPROACH TO NATURAL LANGUAGE QUERYING

James Clifford
Graduate School of Business Administration
New York University

ABSTRACT

In this paper we present an overview of QE-III, a language designed for natural-language querying of historical databases. QE-III is defined formally with a Montague Grammar, extended to provide an interpretation for questions and temporal reference. Moreover, in addition to the traditional syntactic and semantic components, a formal pragmatic interpretation for the sentences of QE-III is also defined.

I. INTRODUCTION

Numerous systems for natural language database access have been described in the literature, including [Woods 1972], [Waltz 1976], [Harris 1978], and [Hendrix 1978]. While these systems are dissimilar in a number of different respects, they all share what to us is the same defect, namely the lack of any fundamental formal theory of the semantics of the database or of the English query language. We view the development of these and other such systems as belonging to the first phase in the development of a formal theory of database semantics and of database querying, much as the early years in the design of computer languages such as FORTRAN were the first phase in the development of a theory of programming language semantics. The birth of programming language theory awaited the impact of formal language theory and a theory of syntax-directed translation. An analogous development in the area of natural language querying would require the impact of formal language theory and a theory that coupled the syntax and the semantics of English.

Many linguists today believe that Montague's theory of universal grammar [Montague 1970b] is the first successful attempt at formalizing such a uniform syntactic and semantic theory of natural language. We believe that some such formal theory of a query language is an important first step towards the development of provably correct and reliable natural language processing systems. For inherent in the notion of program "correctness" is the concept of a standard against which a program is to be judged.

In [Clifford 1982] we provided a formal definition of the query fragment QE-III as a Montague Grammar. QE-III simplifies the semantic theory of the language presented in [Montague 1973] (known as PTQ), and offers a natural correspondence to the semantics of queries in a database context. The fragment is provided with a formal syntax, semantics, and pragmatics, each component designed with the database application in mind. Among the major extensions to the PTQ fragment embodied in QE-III are the inclusion of time-denoting expressions and temporal operators, an analysis of verb meanings into primitive meaning units derived from the database schema, the inclusion of certain forms of direct questions, and the inclusion of a formal pragmatic component. These extensions, and the interpretation with which they are provided, are motivated by the goal of database access, but they are equally interesting in their own right. The syntactic theory presented is in some cases admittedly naive, for we have been primarily interested in getting the interpretation right. Recent work (e.g. [Gazdar 1981]) indicates that broad syntactic coverage can be coupled with a formal semantics.

This paper provides a brief introduction to the work presented in [Clifford 1982], where a small query fragment (QE-III) is rigorously provided with a complete semiotic theory: syntax, semantics, and pragmatics.

This material is based on work supported by the National Science Foundation under grant IST-8010834.

II. HISTORICAL DATABASES

In [Clifford & Warren 1983] we showed that a formal semantics can be given to the concept of an historical relational database. The semantics
Many linguists today believe that Montague's theory of universal grammar [Montague 1970bl is the first successful attempt at formalizing such a uniform syntactic and semantic theory of natural language. We This material is based on work supported by the National Science Foundation under grant IST-8010834. 79 From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. given was analogous to the semantics of the relational database model viewed as an applied first order theory; extending the relational model to an historical database prompted a move to the higher-order language IL-s [Clifford 19821, (with its built-in concept of denotation with respect to an index) in order to provide a formal semantics for such databases in a natural way. Briefly, each "ordinary" relation was extended with a special attribute, "STATE," which served to index the facts recorded by tuples in the relations. The values for this attribute, drawn from a domain of times, effectively *time-stamped each tuple in a manner analogous to the notion of "denotation with respect to an index" which is the principle underlying the "possible world semantics" of formalized intensional logic (see Dowty [1981]). With the notion of a historical database comes the burden of providing an interpretation for queries and commands that make reference (explicit or implicit) to the notion of time. QE-III was designed to provide such an interpretation for such database queries. The interpretation of queries expressed in English is defined formally in terms of the formal semantics of the HDBM. The correlation between the HDBM semantics and this query language is made explicit by interpreting the query fragment via an indirect translation into the same intensional logic IL-s that was used to formalize the HDBM. Through these translations, the model for IL-s that "corresponds" to a particular HDB (in a sense formalized in [Clifford & Warren 19831 also serves as the model for a formal definition of the model-theoretic interpretation of the English queries. In addition to providing a semantic interpretation, which in model-theoretic terms is called its denotation, we also provide for each expression a pragmatic interpretation in a manner to be explained. III. CRITERIA FOR THE -- THEORY In developing the theory of QE-III we were guided by two basic principles. First was that the interpretation or "meaning" of a natural language database query be as close as possible to the interpretation of database queries in, say, the relational algebra or calculus. This meant that the interpretation of a we ry should somehow encompass its answer as represented in the underlying database. Second was the issue of computational tractabliity. This meant taking into account what was known about parsing strategies for Montague Grammars, as well as what database theory had to say about the semantics of the modelled enterprise. This led to the adoption of systematic simplifications to the PTQ translations from English to logic wherever these were suggested by the simplified view of the semantics of the enterprise provided by the database model. Moreover, since we were not attempting to develop a semantic theory of questions for English in general, these simplifications are introduced into the translation process as early as possible. 
This has the dual effect of making some of the PTQ theory a little more accessible, and eliminating the need to resort to the less computationally attractive technique of introducing a large number of Meaning Postulates and at a later stage using logical equivalences to perform reductions. (An extension of Warren's PTQ parser [Warren 1979] to the QE-III fragment has been implemented by Hasbrouck [1982].)

In addition, the following criteria have guided some of our decisions. (1) The theory should fall within the general confines of Montague's framework, i.e., syntax and semantics defined in parallel, with the semantics of a phrase defined compositionally in terms of the semantics of its components. (2) Proper treatment of the interaction of questions and quantifiers; as PTQ successfully accounts for multiple readings of sentences with interacting quantifiers ("A woman loves every man"), our solution allows for all of the readings of questions involving quantified terms ("Who manages every employee?"). (3) Provision for Y/N questions, WH-questions, temporal questions ("when"), and multiple WH-questions ("Who sells what to whom?").

We have made little attempt to develop a sophisticated syntax for our fragment. Since our primary concern has been "getting the meaning right," we felt that a too broad syntactic coverage might obscure our major points. We believe that the QE-III theory of questions, particularly our proposal to capture the answer in a pragmatic component, is an important contribution to the formalization of the interpretive component of natural language understanding systems.

IV. OVERVIEW OF THE LANGUAGE QE-III

A. Individual Concepts vs. Entities

Most recent research in the field of Montague Semantics has incorporated the suggestion, first made by Bennett [1974], that Montague's treatment of common nouns (CNs) and intransitive verbs (IVs) as denoting sets of individual concepts (ICs) is unduly complicated. Under Bennett's suggestion both CNs and IVs denote sets of simple individuals; this simplifies the typing scheme of English categories in these fragments. In [Clifford & Warren 1983] the database concepts of key attributes and role attributes are identified, respectively, with "ordinary" CNs (which reduce to sets of entities in PTQ by means of MP-1) and "extraordinary" CNs (which denote sets of ICs). Accordingly we have not adopted the Bennett type system, but have instead maintained the PTQ treatment.

B. Verbs

Montague's semantic treatment of verbs leaves them completely unanalyzed; thus, for example, the English verb "walk" translates into the constant walk' in IL, "love" into love', etc. Because we use a database as a representation of the logical model, we can provide an analysis of English verbs that takes into account the meaning of verbs as encoded in the database. As an example, the translation of "manage" in our fragment is given as:

λW λx W(i)(λy [ASSOC(y(i),x) & EMP(i)(y(i)) & MGR(i)(x)])

This expression is of the same logical type as manage' in a PTQ-like treatment, and combines with Terms in the same way, but it does not leave "managing" unanalyzed. Instead it specifies that its subject x must be an IC that is a MGR, and its object must be an entity that is an EMP, and these two must be ASSOCiated in the database schema. In general the translation of any verb in QE-III specifies the attribute of its subject (or the disjunction of alternatives, if any). The translation of a TV further specifies the attribute(s) of its direct object, and a DTV of its indirect object. Moreover any relationship(s) among these attributes are specified.
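The flavor of this analysis can be captured with a small table from verbs to schema attributes. The sketch below is ours, with an invented toy schema; it is not the QE-III translation machinery itself:

    # verb -> the attributes its subject and object must carry.
    VERBS = {
        "manage": {"subject": "MGR", "object": "EMP"},
        "sell":   {"subject": "EMP", "object": "ITEM"},
    }
    ASSOC = {("MGR", "EMP"), ("EMP", "ITEM")}  # toy schema associations

    def well_typed(verb, subj_attr, obj_attr):
        # A subject/object pair fits a verb if it matches the verb's
        # analysis and the attributes are associated in the schema.
        v = VERBS[verb]
        return (subj_attr == v["subject"] and obj_attr == v["object"]
                and (subj_attr, obj_attr) in ASSOC)

    print(well_typed("manage", "MGR", "EMP"))  # True
    print(well_typed("manage", "EMP", "MGR"))  # False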
C. Tenses

Extensions to PTQ have had to handle the issue of tense and its interaction with other components of a sentence. We agree with Dowty's [1979] premise that tense is a property of the clause as a whole, and not merely of the verb. This is particularly important when, as in QE-III, there are different kinds of sentences: declaratives, WH-questions, Y/N questions, and WHEN questions. For under a straightforward extension of the treatment of tense in PTQ, the number of rules would proliferate alarmingly, since separate rules would be needed for each kind and tense of sentence formed by conjoining a Term and a VP. For this reason we incorporated into QE-III the additional syntactic categories of tensed sentences of each variety, and modified the Subject + Predicate rule (S4 in PTQ) to create an untensed sentence. Additional rules for each tense create the final, tensed version of any sentence.

D. Database Questions

Numerous researchers have examined the question, "What is an appropriate formal treatment of the semantics of questions?" ([Hamblin 1973], [Karttunen 1977], [Bennett 1977 & 1979], [Belnap 1982], and [Hausser & Zaefferer 1979] are among the many who have tried to formulate an answer within a Montague Grammar framework.) We propose in our theory that the proper place for considering the answer(s) to a question is in a separate theory of pragmatics for the language. We have not yet proposed a completely general theory of pragmatics. But we believe that incorporating a formal pragmatic component to our fragment that treats the notion of a response to a question is defensible as at least one component of a theory of language use.

Our formalization of a pragmatic component to the theory of QE-III accords well with what Stalnaker [1972] sees as the goals of "a formal semiotics no less rigorous than present day logical syntax and semantics." Those goals, he goes on to say, include an analysis of such linguistic acts as "assertions, commands, ..., requests ... to find necessary and sufficient conditions for the successful (or perhaps in some cases normal) completion of the act." In its technical details our approach is both simple and elegant. It removes from the semantics the burden of providing an account of the response to a question, and allows it to do what semantics has always done best, account for reference. Then, just as the semantics of a language is based upon its syntax, the pragmatics is based upon both the syntactic and semantic analyses (in Hamblin's [1973] phrase, it "complements syntax and semantics"). The simplicity with which we can state the formal pragmatic rules for our fragment, to capture the notion of the answer to a question, is based upon this ability to use both the syntax and the semantics to build the pragmatic theory.

Two examples must suffice here to illustrate these ideas. A pragmatic interpretation of YNQs that meets the criteria set forth in Section III is not difficult to obtain. Since we want to interpret YNQs as either "Yes" or "No", they can be defined to denote objects in {0,1}. But this is just the denotation set of the corresponding declarative sentence expressing the proposition that the YNQ asks.
Thus we easily meet our criteria by providing that a YNQ denote the same proposition as that denoted by the declarative sentence from which it was derived. For example, "John manages the shoe department" would roughly be translated as:

manage'(i)(John, Shoe Dept.)

This formula is true with respect to a state i just in case John manages the shoe department in that state. Our analysis of the corresponding question "Does John manage the shoe department?" provides that it is derived syntactically from "John manages the shoe department" and that semantically and pragmatically it denotes the same object in the model. Under this view, then, a formula in the logic essentially "questions" the model as to its truth or falsity in the same way that a YNQ questions the database for the response "yes" or "no."

WH-questions in QE-III denote (a semantic concept) just as declarative sentences do. Thus the WH-question "Who manages whom?" and the declarative sentence "He manages him" both receive the same semantic analysis:

∃x [x(i)=u-2 & EMP(i)(u-1) & MGR(i)(x) & ASSOC(u-1,x)]

Both are treated as denoting the same object with respect to an index, a variable assignment, and a model. But they are interpreted differently in the pragmatics. The pragmatics is defined as a function that, given a derivation for an expression of QE-III together with its syntactic category and its denotation (semantics), returns a (possibly new) object in the same model. Thus, although we view pragmatics as a separate component of a language theory, it is closely allied to the semantics --- both provide interpretations of linguistic expressions within the context of the same logical model. The formal definition of the pragmatic component provides that these two sentences, interpreted pragmatically, denote what the following expressions of IL-s denote:

who manages whom? ----> λu-2 λu-1 ∃x [x(now)=u-2 & EMP(now)(u-1) & MGR(now)(x) & ASSOC(u-1,x)]

he manages him ----> ∃x [x(i)=u-2 & EMP(i)(u-1) & MGR(i)(x) & ASSOC(u-1,x)]

The pragmatic interpretation of the question is the set of n-tuples that answer it, while that of the declarative sentence is the same as its denotation. The pragmatics for QE-III is thus a simple theory of the effects of producing an expression in that language within the assumed context of a question-answering environment. That is, we assume that a user of QE-III is using the language to produce some effect within this context, and it is this effect (a representation of the answer to the question) which we formalize as the pragmatic component of the language definition.
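The correspondence between this pragmatic interpretation and ordinary retrieval from the historical database can be made concrete. In this sketch (ours; the relation, its states, and all names are illustrative stand-ins for the HDBM), the WH-question denotes the set of answering tuples at the current state, while the YNQ denotes a truth value:

    # Toy historical relation MANAGES(state, mgr, emp).
    MANAGES = [
        ("s1", "Smith", "Jones"),
        ("s1", "Smith", "Brown"),
        ("s2", "Adams", "Jones"),
    ]
    NOW = "s2"

    def who_manages_whom(state=NOW):
        # Pragmatic interpretation of "Who manages whom?": the set of
        # (mgr, emp) pairs satisfying the formula at the given state.
        return {(m, e) for (s, m, e) in MANAGES if s == state}

    def does_manage(mgr, emp, state=NOW):
        # A yes/no question denotes the truth value of the formula.
        return (state, mgr, emp) in MANAGES

    print(who_manages_whom())             # {('Adams', 'Jones')}
    print(does_manage("Smith", "Jones"))  # False at s2 (True at s1)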
V. CONCLUSIONS

QE-III is a formal English query language for historical databases whose definition is provided in three distinct parts. First we define the syntactic component: the categories of the language, the basic expressions of these categories, and the rules of formation. Together these constitute an inductive definition of the set of meaningful expressions of QE-III. The semantics of the language is presented next, following Montague's general procedure in PTQ. This consists of giving, for each syntactic rule, a corresponding rule of translation into the logic IL-s, for which a direct semantic interpretation has already been specified. Finally, we provide a pragmatics for the language when used in the assumed context of a question-answering system. The pragmatics consists of a set of rules that together define a function which, for any derivation tree of an expression in the language, provides what we call its pragmatic interpretation.

QE-III is an attempt to demonstrate that a successful formal treatment can be given to a natural language database querying facility, through the medium of a formal intensional logic. We view this work as important for two reasons. First, it represents the first attempt to adapt the ideas of Montague Grammar to a practical problem. Most research since the PTQ paper has either been in the form of extensions or modifications to its linguistic or logical theory, or of computer implementations of the theory. Our work tries to show that this theory of language can serve as the formal foundation of a useable computer system for querying an actual database. Second, it represents a change in emphasis in approaching the NLQ problem from the engineering approach --- get as much coverage as possible and get the system to work --- to a more formal approach --- proceed in small steps and develop a formal theory of what you do with each step that you take. This work represents only a first step in this direction within a Montague Semantics framework. The QE-III fragment is certainly not adequate to express all of the queries that one would want to present to an HDB. We hope, however, that it will lay the groundwork for a formal theory of database querying that is both extendible and implementable.

ACKNOWLEDGMENTS

This research is part of the author's Ph.D. thesis done at SUNY Stony Brook under the direction of David S. Warren; his encouragement and support are gratefully acknowledged.

REFERENCES

Belnap, Nuel D. Jr. (1982). "Questions and Answers in Montague Grammar," in Processes, Beliefs, and Questions, ed. S. Peters and E. Saarinen, Reidel Publ. Co., Dordrecht.
Bennett, Michael R. (1974). "Some Extensions of a Montague Fragment of English," UCLA Ph.D. dissertation; distributed by Indiana University Linguistics Club, Bloomington.
Bennett, Michael R. (1977). "A Response to Karttunen on Questions," Linguistics and Philosophy 1, 279-300.
Bennett, Michael R. (1979). "Questions in Montague Grammar," Indiana University Linguistics Club, Bloomington.
Clifford, James and D. S. Warren (1983). "Formal Semantics for Time in Databases," ACM Transactions on Database Systems, 6,2, June 1983.
Clifford, James (1982). "A Logical Framework for the Temporal Semantics and Natural-Language Querying of Historical Databases," Ph.D. dissertation, Dept. of Computer Science, SUNY at Stony Brook, Stony Brook.
Dowty, David R. (1979). Word Meaning and Montague Grammar, Reidel Publ. Co., Dordrecht.
Dowty, David R., R. E. Wall and S. Peters (1981). Introduction to Montague Semantics, Reidel Publ. Co., Dordrecht.
Gazdar, Gerald (1981). "Unbounded Dependencies and Coordinate Structure," Linguistic Inquiry 12.
Hamblin, C.L. (1973). "Questions in Montague English," Foundations of Language 10, 41-53.
Harris, Larry R. (1978). "The ROBOT System: Natural Language Processing Applied to Data Base Query," TR78-1, Dept. of Mathematics, Dartmouth.
Hasbrouck, Brian L. (1982). "Methods of Parsing English with Context-Free Grammar," Masters Thesis, SUNY at Stony Brook, Stony Brook.
Hausser, Roland and D. Zaefferer (1978). "Questions and Answers in a Context-Dependent Montague Grammar," in Formal Semantics and Pragmatics for Natural Languages, ed. F. Guenthner and S.J. Schmidt, Reidel Publ. Co., Dordrecht.
Hendrix, G.G., E.D. Sacerdoti, D. Sagalowicz, and J. Slocum (1978). "Developing a Natural Language Interface to Complex Data," ACM Trans. on Database Systems 3,2.
Karttunen, Lauri (1977).
"Syntax and Semantics of Questions," Linguistics and Philosophy 1, 3-44. Montague, Richard (1970). "Universal Grammar," Theoria 36, 373-398. -- Montague, Richard (1973). "The Proper Treatment of Quantification in Ordinary English," in Approaches to Natural -- Language, ed. K.J.J. Bordrecht, 221-242. Hintikka et al., Stalnaker, Robert C. (1972). "Pragmatics," in Semantics of Natural Languages, ed. DT David%- and G. -- - Harman, Reidel Publ. Co., Dordrecht. Waltz, David L. et al. (1976) e " An English Language Question Answering System for a Large Relational Database," Commun. ACM 21,7, 526-539. -- - Warren, David Scott (1979). "Syntax and Semantics in Parsing: An Application to Montague Grammar," Ph.D. dissertation, Univ. of Michigan, Ann Arbor. Woods, W.A., R.M. Kaplan, and B. Nash-Webber (1972). "The LUNAR Sciences Natural Language Information System: Final Report," BBN Report 2378, Bolt, Beranek and Newman, Inc., Cambridge MA. 83
REPAIRING MISCOMMUNICATION: RELAXATION IN REFERENCE*

Bradley A. Goodman
Bolt Beranek and Newman Inc.
10 Moulton Street
Cambridge, MA. 02238

ABSTRACT

In natural language interactions a speaker and listener cannot be assured to have the same beliefs, contexts, backgrounds or goals. This leads to difficulties and mistakes when a listener tries to interpret a speaker's utterance. One principal source of trouble is the description constructed by the speaker to refer to an actual object in the world. The description can be imprecise, confused, ambiguous or overly specific; it might be interpreted under the wrong context. This paper explores the problem of resolving such reference failures in the context of the task of assembling a toy water pump. We are using actual protocols to drive the design of a program that plays the part of an apprentice who must interpret the instructions of an expert and carry them out. Relaxing parts of a description is the primary means for the apprentice to repair such a description.

I INTRODUCTION

Consider the dialogue below, which exemplifies some problematic utterances. Here A is instructing B to assemble part of a toy water pump [9, 7]. Refer to Figure 1 for the parts of the pump. A and B are communicating verbally but neither can see the other. (The bracketed text in the excerpt shows what was actually occurring while each utterance was spoken.) Notice the complexity of the speaker's descriptions and the significant processing required of the listener. B interprets "the long blue tube" to refer to the STAND. When A adds the relative clause "that has two outlets on the side," B is forced to drop the STAND as the referent, to relax the color "blue" to "violet," and to select the MAINTUBE. In Line 6, A's description "nozzle-looking" is too specific and B selects the NOZZLE piece instead of the SPOUT. A's addition of "the clear plastic one" in Line 7 rules out the NOZZLE, which is blue, and leads B to the SPOUT. Lines 13-16 illustrate a case where A previously focused B's attention on one object and intends to switch that focus to another one. In this case, B doesn't shift focus. This lack of agreement on what is in focus leads to confusion later on in the dialogue.

A:  1. Take the long blue tube
    2. [B reaches toward STAND]
       that has two outlets on the side -
    3. [B takes MAINTUBE]
    4. that's the main tube.
    5. Place the small blue cap [B takes CAP] over the hole on the side of that tube. [B pushes CAP on OUTLET1]
    6. Take the nozzle-looking piece,
    7. the clear plastic one [B takes NOZZLE]
    8. [B takes SPOUT]
    9. and place it on the other hole [B identifies OUTLET2 of MAINTUBE]
   10. that's left, so that the nozzle points away. [B installs SPOUT on OUTLET2 of MAINTUBE]
   11. Okay?
B: 12. Okay.
A: 13. Now take the blue lid type thing [B takes TUBEBASE]
   14. and screw it onto the bottom [B screws TUBEBASE on MAINTUBE]
   15. [A realizes he has forgotten to have B put SLIDEVALVE into OUTLET2 of MAINTUBE]
   16. undo the plastic thing [B removes TUBEBASE, but A meant the SPOUT]

[Figure 1: The Toy Water Pump, a drawing with labeled parts including the Plunger, Nozzle, Slide Valve, Tube Base and Stand]

*This research was supported in part by the Defense Advanced Research Project Agency under contract N00014-77-C-0378.

... the needed information. In this paper I will describe the relaxation component of the reference identification module and illustrate some of the sources of knowledge that guide it in relaxing a description.
11 T&E KINDS OF PROBLEMS how Part of my research has been an examinatioon of a listener iil;;;ersan;ha\o; ;;pir description is discovers the source 'of the lis teK: problem in communication. o How the problems are discovered: 1. 2. 3. 4. o Where 1. 2. 3. 4. 5. 6. The li.stecEr finds m Real0 World object correspond the speaker's description; the listene;umyenrds o;theR',aihanWo;;; requested ObJects (i.e., too many or too few); the listener cannot the action s ecified P by iEform speaker because o some obstacle; or the listener perfora$ tr6?saction but zo,;dctnot arrive intended . the problems may reside: In the speaker's description of an object presented in the utterance; in the speaker's description of a physical action presented in the utterance; +& thEa;;t obLepal World objects brought into attention (the speaker's set may differ from the listener's set); with the set of Real World actions that have been brought into attention (the speaker's set may differ from the listener's set); in the interpretation of the yfd:rlying force of the utterance liAte;er doestothesigmPpelaykern~~.t ",;E information in the utterance or to use it to do something); or with the hearer's concentration These observations signal conditions in which a mistake might occur and where it might be found. We will now explore what a listener has available for resolving miscommunication. III KNOWLEDGE FOR REPAIRING DESCRIPTIONS When things go wrong during a conversation, bear to get around the problem (see [ 161E "M'~% :I! people have lots of knowledge that the the time the repairs are so natural that ~;~a;;;;; ;;-II~~OUS that they have taken place. we must make an effort to correct what we have heard, or determine that we need clarification from the speaker. This repair process involves the use of knowledge about conversation, its social conventions and the world around us. In this work, I chose to consider the repairT;E descriptions rather than complete utterances. most relevant knowledge for repair depends on the *I am including this kind of problem because I have been talking about human dialogues. I will not, however, pursue it any further. ;;c);z;rtion itself and the Real World described to l Therth;;e ;;T;;outhgources of knowledge consider reference repair process. We will look at two sources ;I$ percepyfal ;;eowledge. linguistic is use Ling;;;tic knowled:; structure description. Perceptual knowledge %an,l"",erson:z abilities to distinguish feature values preferences in features by considering which'z:ez more important (with respect to the person anttiE; domain), knowledge and one's perception of an object. discourse knowledge knowledge El sources, - such , 14, 18, 17, 15, 2, llf' 1, 13, 31, trial and e&or k!Et edatzc !T * hierarchical' knowledge, and domain knowledge 9 I? will not be covered here. A more detailed treatment can be found in C83. A. Linguistic Knowledge in Reference Different linguistic structures can be utilized to describe objects in the extensional world. This section outlines some of these structures and their meanings and shows how they can be used to guide repairs in the description. A description of an object in the extensional world usually includes enough information about physical features of the object so that listeners can use their perceptual abilities to identify the object. Those physical features are normally specified as modifiers of: nouns and P ronouns. The typical modifiers are aad,JdectiveS, re.ative c!',;?;~: t adjective clauses) pre ositional adjective phrases). 
These modifiers are often interchangeable; that is, one could specify a feature using any of them. One modifier, however, may be better suited for expressing a feature than another.

Relative clauses are well suited for expressing complicated information since they are separate from the main part of the noun phrase and can be arbitrarily complex themselves. They are used for:

o Assertions of "extra" information, possibly outside the domain of discourse and not useful for finding the referent at this time (e.g., "the L-shaped tube of clear plastic that is defined as a spout").

o Material useful for confirming that the proper referent was found (e.g., "the long blue tube that has two outlets on the side").

o Respecification of the initial description in more detail. For example, in the case of the descriptions "the thing that is flared at the top" and "the main tube which is the biggest tube," the relative clauses are needed because the initial descriptions are too vague.

Prepositional phrases are better fitted for simpler pieces of information. They are often used to express predicative relationships:

o superlative relations (e.g., "the smallest of the red pieces"),

o subpart specification, used to access a subpart of the object under consideration (e.g., "the little elbow joint," "the top end of the tube," "that water chamber with the blue bottom and the globe top"),

o most perceptual features (e.g., "with a clear tint," "with a red color").

Just like relative clauses, prepositional phrases can also provide confirmation information.

Adjectives are used to express almost any perceptual feature, though complex relations would be awkward. Usually they modify the noun phrase directly, but sometimes they are expressed as a predicate complement. In those situations, the complement describes the subject of the linking verb (e.g., "the tube is large"). As with some of the relative clauses above, predicate complements have an assertional nature to them.

B. Relaxing a Description Using Linguistic Knowledge

The relaxation component attempts to relax features in the description in the order: adjectives, then prepositional phrases, and finally relative clauses and predicate complements. This order was determined by examining the water pump protocols and noting where the linguistic forms come in during reference resolution. Adjectives and prepositional phrases play a more central role in referent identification while relative clauses usually play a secondary role. Relative clauses and predicate complements exhibit an assertional nature that reduces their usefulness for resolving the current reference (whereas the information they express can help with subsequent references). The head noun can also be relaxed. It normally is relaxed last but could be relaxed prior to a relative clause (especially in the instances where the relative clause expresses confirmational information).

For example, consider the description "the large violet cylinder that has two outlets." Here, the features size, color and shape are described in the adjectives and head noun of the description, and the two subparts and their function in the relative clause. Following the above rules, the relaxation of size, color and shape should be attempted before either the number of subparts or the subparts' functions. The relaxation order is influenced by the other knowledge sources, so the order proposed here is not hard and fast.
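This ordering can be made concrete with a minimal sketch (the representation, ranks and function names below are purely illustrative, not part of the implemented system), applied to the description just discussed:

```python
# Illustrative sketch of the linguistic relaxation ordering described
# above.  Lower rank = relax earlier.  The head noun is normally
# relaxed last; it is ranked before the relative clause here because
# the clause carries confirmational information, as in the example.
RELAXATION_RANK = {
    "adjective": 0,
    "prepositional-phrase": 1,
    "head-noun": 2,
    "relative-clause": 3,
    "predicate-complement": 3,
}

def relaxation_order(constituents):
    """Order (type, feature, value) triples for relaxation."""
    return sorted(constituents, key=lambda c: RELAXATION_RANK[c[0]])

# "the large violet cylinder that has two outlets"
description = [
    ("adjective", "size", "large"),
    ("adjective", "color", "violet"),
    ("head-noun", "shape", "cylinder"),
    ("relative-clause", "subparts", "two outlets"),
]
for typ, feature, value in relaxation_order(description):
    print(feature, value)
# size, color and shape are tried before the subpart information
# carried by the relative clause.
```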
C. Perceptual Knowledge in Reference

A major factor involved here is how people perceive objects in the world and how this can be simulated in my system. Each object is denoted by two forms: a spatial representation and a cognitive/linguistic form that shows how the system could actually talk about the object. The spatial description is a physical 3-D description of an object, consisting of its dimensions, the basic shapes composing it, and its physical features. The cognitive/linguistic form is a representation of the parts and features of the object in linguistic terms. It overlaps the visual form in many respects but it is more suggestive of the listener's perceptions. The cognitive/linguistic form describes aspects of an object such as its subparts by their position on the object ("top", "bottom") and their functionality ("outlets", "places for attachment"). More than one cognitive/linguistic form can refer to the same physical description.

Some properties of an object differ in how they are expressed in the two forms. In the 3-D form there are primarily metric dimensions (e.g., "3 feet") and prototypical shapes (e.g., generalized cylinders), while, in the cognitive/linguistic form, there are relative dimensions (e.g., "large") and analogical shapes (e.g., "the L-shaped tube"). Perception, hence, may involve interpretation. This can lead to discrepancies between individuals. People usually agree on the spatial representation but not necessarily on the cognitive/linguistic description, and this can lead to problems. For example, a misjudgement by the speaker in calling an object "large" can cause the hearer to fail to find an object in the visual world whose dimensions are perceptually "large" to the listener.

To prevent confusion of the listener, a speaker must distinguish the objects in the environment from each other. The perceptual features of an object provide people with a way to discriminate one object from another. A speaker must take care when selecting from these features since they can induce their own confusion. Perceptual features may be inherently confusing because a feature's values are difficult to judge (e.g., is the tube a cylinder or a slightly tapering cone?). They may also be confusing because the speaker and listener may have differing sets of values for a feature (e.g., what may be blue for one person may be turquoise for another*). These characteristics affect the salience of a feature (see [12]), which in turn determines the feature's usefulness in a description. A feature that is common in everyday life (e.g., color, shape and size) is salient, and a listener can readily distinguish the feature's possible values from each other. Of course, very unusual values of a feature can stand out, making it even easier to discriminate a unique object from all other objects [12].

The objects in the world may exhibit a feature whose possible values are difficult to distinguish. This occurs when a perceived feature does not have much variability in its range of values: all the values are clustered closely together, making it difficult to tell the difference between one value and the next. This increases the likelihood of confusion because the usefulness of specifying the feature to a non-expert is diminished if the speaker is more expert than the listener (especially in distinguishing feature values). Hence, if one of these difficult feature values appears in the speaker's description, the listener, if he isn't an expert, will often relax the feature value to any of the members of the set of feature values.

*For example, certain Eskimo languages have names for many different grades of snow that may be difficult for most non-Eskimos to distinguish [19].

D. Relaxing a Description Using Perceptual Knowledge

When examining the features presented in a speaker's description, one can consider perceptual aspects to determine which features are most likely in error.
Such an inspection can generate a partial ordering of features for use during the repair process to determine which feature in the description to relax. The relaxation ordering suggested by this inspection interacts with the ordering proposals from the other knowledge sources.

Active features are ones that require a listener to do more than simply recognize that a particular feature value belongs to a set of possible values; they require some kind of evaluation by the listener. When considering the water pump domain, it seems that one should first relax the features that require less active consideration, such as color (though it is easier to relax red to orange than red to blue), transparency, shape, composition, and function. Only after those should one relax the features that require active consideration of the object under discussion and its surroundings (such as comparatives, superlatives, and relative values of size, height, thickness, position, and distance). People tend to be casual with the less active features while the more active ones require their full attention. Hence, in a reference failure the source of the problem is likely to be the less active ones.

IV THE RELAXATION COMPONENT

I have discussed some of the numerous kinds of knowledge available to a listener to interpret a speaker's description. I pointed out places where that knowledge affects the listener's ability to interpret a description and ways in which it is helpful to the listener for properly overcoming a reference failure. When a description fails to denote a referent in the Real World, it is possible to repair it by a relaxation process that drops or modifies parts of the description. Since a description can specify many features of an object, the order in which parts of it are relaxed is crucial. There are several kinds of relaxation possible. One can ignore a constituent, replace it with something close, replace it with a related value, regroup, or change focus (i.e., consider a different set of objects). In this section, I describe the overall relaxation component that draws on the knowledge sources as it tries to relax the errorful description to one that suffices.

A. Find a Referent Using a Reference Mechanism

Identifying the referent of a description entails finding an element in the world that is described by the speaker's description (where "described by" means every feature specified in the description is present in the element in the world, but not necessarily vice versa). The initial task of our reference mechanism is to determine whether or not a search of the (taxonomic) knowledge base* is necessary. A number of aspects of discourse and pragmatics can be used in that determination but I will not examine them here. If a search of the knowledge base is considered necessary, then the reference search mechanism is invoked. The search mechanism uses the KL-One Classifier [10] to search** the knowledge base taxonomy. The Classifier uses the subsumption relationships inherent in the taxonomy to place the description in the correct spot [10]. What this means for reference is that the possible referents of the description will be found below the description after it has been classified into the knowledge base taxonomy. If more than one referent is below the classified description, then, unless a quantifier in the description specified more than one element, the speaker's description is ambiguous.

*The knowledge base contains linguistic descriptions and a description of the listener's visual scene itself. Here it is represented in KL-One [4], a system for describing inheritance taxonomies.

**This search mechanism is constrained by a focus mechanism [9, 14, 17].
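The role classification plays in referent search can be pictured with a small stand-in sketch. This is not the KL-One Classifier: subsumption is reduced here to feature containment over flat feature sets, and the scene and feature names are invented for illustration:

```python
# Illustrative stand-in for classification-based referent search.
def subsumes(description, candidate):
    """True if every feature the description specifies is present
    with the same value in the candidate (which may have extras)."""
    return all(candidate.get(f) == v for f, v in description.items())

def referents_below(description, scene):
    """Collect the scene elements that fall below the description
    once it has been 'classified' into the taxonomy."""
    return [name for name, feats in scene.items()
            if subsumes(description, feats)]

scene = {
    "MAINTUBE": {"shape": "tube", "color": "violet", "size": "large"},
    "STAND":    {"shape": "tube", "color": "blue",   "size": "large"},
    "CAP":      {"shape": "cap",  "color": "blue",   "size": "small"},
}

hits = referents_below({"shape": "tube", "color": "blue"}, scene)
print(hits)   # one hit: intended referent found; several: ambiguous;
              # none: invoke the relaxation component
```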
If exactly one description is below it, then the intended referent is assumed to have been found. If no referent is found below the classified description, the relaxation component is invoked.

B. Collect Votes For or Against Relaxing the Description

It is necessary to determine whether or not the lack of a referent for a description has to do with the description itself (i.e., reference failure) or with outside forces that are causing reference confusion.*** Pragmatic rules are invoked to decide whether or not the description should be relaxed. These rules will not be discussed here.

***For example, the problem may be with the focus of the conversation (the speaker's and listener's perspectives on it); it may be due to incorrect attachment of a modifier; it may be due to the action requested; and so on.

C. Perform the Relaxation of the Description

If relaxation is voted for, then the system must (1) find potential referent candidates, (2) determine which features to relax and in what order, and use that to order the potential candidates with respect to the preferred ordering of features, and (3) determine proper relaxation techniques to use and apply them to the description.

1. Find potential referent candidates

Before relaxation can take place, potential candidates for referents (which denote elements in the listener's visual scene) must first be found. These candidates are discovered by performing a general "walk" in the knowledge base taxonomy in the vicinity of the speaker's classified description. A scoring KL-One partial matcher is used to determine how close candidate descriptions found during the walk are to the speaker's description. The partial matcher generates a score to represent how well the descriptions match (it also generates scores at the feature level to help determine how the features are to be aligned and how well they match). The best of the descriptions returned by the matcher are selected as referent candidates.

2. Order the features and candidates for relaxation

At this point the reference system inspects the speaker's description and the candidates, and decides which features to relax and in what order.* Once the feature order is created, it determines the order in which to try relaxing the candidates. Various knowledge sources are consulted to determine the relaxation ordering. These include the perceptual and linguistic knowledge sources that were described above, as well as others not discussed here. The suggestions from the knowledge sources are then integrated. This integration requires evaluating the partial orderings imposed by each knowledge source. For example, perceptual knowledge says to relax color. However, if the color value was asserted in a relative clause, linguistic knowledge would rank color lower. This leads to a conflict, and so the relaxation of some other feature may win out over color should it cause less conflict. The feature ordering can thus be used to order candidates: choose first those candidates that best follow the feature order when determining the changes that must be made to the speaker's description. The control structure that enforces this rule examines each candidate and assigns a higher priority to those candidates whose differences involve features ranked higher in the order of features.

*Of course, once a particular candidate is selected, then deciding which features to relax is easy; one simply compares the features of the candidate description (the target) and the speaker's description (the pattern).
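One plausible rendering of this integration and of the resulting candidate queue is sketched below. The averaging scheme and all data are invented for exposition; the actual system's scoring is not specified here:

```python
# Illustrative integration of the feature orderings proposed by
# several knowledge sources (earliest = relax first).
def integrate_orderings(orderings):
    """Combine partial orderings by averaging each feature's rank."""
    ranks = {}
    for order in orderings:
        for i, feature in enumerate(order):
            ranks.setdefault(feature, []).append(i)
    return sorted(ranks, key=lambda f: sum(ranks[f]) / len(ranks[f]))

perceptual = ["color", "shape", "size"]   # less active features first
linguistic = ["shape", "size", "color"]   # color was asserted in a
                                          # relative clause, so this
                                          # source demotes it
feature_order = integrate_orderings([perceptual, linguistic])

def queue_candidates(candidates, feature_order):
    """Candidates whose mismatched features are cheaper to relax
    (earlier in the integrated order) go to the front."""
    pos = {f: i for i, f in enumerate(feature_order)}
    return sorted(candidates,
                  key=lambda c: min(pos[f] for f in c["mismatches"]))

candidates = [{"name": "NOZZLE", "mismatches": {"size"}},
              {"name": "SPOUT",  "mismatches": {"color"}}]
print([c["name"] for c in queue_candidates(candidates, feature_order)])
```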
The candidates whose differences involve only the least important features slip to the back of the queue. Once a potential candidate is selected by the controller, the relaxation mechanism begins step 3 of relaxation; it tries to find proper relaxation methods to relax the features that have just been found to differ (success in finding such methods "justifies" relaxing the description).

3. Determine which relaxation methods to apply

Relaxation can take place with many aspects of a speaker's description: with the focus of attention in the Real World where one attempts to find a match, with complex relations specified in the description, and with individual features of a referent specified by the description. Often the objects in focus in the Real World implicitly cause other objects to be in focus [9, 18]. The subparts of an object in focus, for example, are reasonable candidates for the referent of a bad description and should be checked. At other times, the speaker might attribute features of a subpart of an object to the whole object (e.g., describing a plunger that is composed of a handle, a metal rod, a blue cap, and a green cup as "the green plunger"). In these cases, the relaxation mechanism follows part-whole relations. Complex relations specified in a speaker's description can also be relaxed. These relations include spatial relations (e.g., "the outlet near the top of the tube") and superlative relations (e.g., "the longest tube"). Finally, specific features of an object (such as size or color) in a speaker's description are open to relaxation.

Relaxation of a description has a few global strategies that can be followed: (1) drop the errorful value from the description altogether but keep its feature, (2) weaken or tighten the feature value, keeping the new value close to the specified one, or (3) try some other feature value. The realization of these strategies is through a set of procedures (or relaxation methods) that are organized hierarchically. Each procedure is an expert at relaxing its particular type of feature. For example, the Generate-Similar-Feature-Values procedure is composed of procedures like Generate-Similar-Shape-Values and Generate-Similar-Size-Values. Each of those procedures is further divided into specialists that first attempt to relax the feature value to one "near" the current one (e.g., one would prefer to first relax the color "red" to "pink" before relaxing it to "blue") and then, if that fails, to try relaxing it to any of the other possible values. If those fail, the feature could be dropped out of consideration.

CONCLUSIONS

Natural language interactions in the Real World invite contextually poor descriptions. This paper sketches the ideas behind an on-going effort to develop a reference identification mechanism that can exhibit more tolerance of such descriptions. My goal is to build a more robust system that can handle errorful descriptions when looking for a referent, and that is adaptable to the person doing the describing. My work tackles the use of descriptions referring to the Real World and the repair of problems in those descriptions. The work attempts to provide a computational scheme for handling noun phrases (following the work on noun phrases by [9, 18, 14, 17]) that is robust enough to provide human-like performance.
When people are asked to identify objects, they go about it in a certain way: find candidates, adjust as necessary, re-try, ask for help and, if necessary, give up. I claim that relaxation is an integral part of that process, and that the particular parameters of relaxation differ from task to task and person to person. My work provides a forum for trying out the different parameters.

ACKNOWLEDGEMENTS

I want to especially thank Candy Sidner for her insightful comments and suggestions during the course of this work. I'd also like to acknowledge the helpful comments of Chip Bruce, Jeff Gibbons, Diane Litman, Jim Schmolze and Marc Vilain on portions of this paper. Many thanks also to Phil Cohen, Scott Fertig and Kathy Starr for providing me with their water pump dialogues and for their invaluable observations on them.

REFERENCES

[1] Allen, James F. A Plan-Based Approach to Speech Act Recognition. Ph.D. Th., University of Toronto, 1979.

[2] Allen, James F., Alan M. Frisch, and Diane J. Litman. ARGOT: the Rochester Dialogue System. Proceedings of AAAI-82, Pittsburgh, Pa., August, 1982, pp. 66-76.

[3] Appelt, Douglas E. Planning Natural Language Utterances to Satisfy Multiple Goals. Ph.D. Th., Stanford University, 1981.

[4] Brachman, Ronald J. A Structural Paradigm for Representing Knowledge. Ph.D. Th., Harvard University, 1977.

[5] Brown, John Seely and Kurt VanLehn. "Repair Theory: A Generative Theory of Bugs in Procedural Skills." Cognitive Science 4, 4 (1980), 379-426.

[6] Cohen, Philip R. On Knowing What to Say: Planning Speech Acts. Ph.D. Th., University of Toronto, 1978.

[7] Cohen, Philip R. The Need for Referent Identification as a Planned Action. Proceedings of IJCAI-81, Vancouver, B.C., Canada, August, 1981, pp. 31-35.

[8] Goodman, Bradley A. Miscommunication and Reference. KNRL Group Working Paper, Bolt Beranek and Newman Inc., January 1983.

[9] Grosz, Barbara J. The Representation and Use of Focus in Dialogue Understanding. Ph.D. Th., University of California, Berkeley, 1977.

[10] Lipkis, Thomas. A KL-ONE Classifier. Proceedings of the 1981 KL-One Workshop, June, 1982, pp. 128-145.

[11] Litman, Diane. Discourse and Problem Solving. KNRL Group Working Paper, Bolt Beranek and Newman Inc., August 1982.

[12] McDonald, David D. and E. Jeffery Conklin. Salience as a Simplifying Metaphor for Natural Language Generation. Proceedings of AAAI-82, Pittsburgh, Pa., August, 1982, pp. 75-78.

[13] Perrault, C. Raymond and Philip R. Cohen. It's for your own good: a note on inaccurate reference. In Elements of Discourse Understanding, Joshi, Webber and Sag, Eds., Cambridge University Press, 1981, pp. 217-230.

[14] Reichman, Rachel. "Conversational Coherency." Cognitive Science 2, 4 (1978), 283-327.

[15] Reichman, Rachel. Plain Speaking: A Theory and Grammar of Spontaneous Discourse. Ph.D. Th., Harvard University, 1981.

[16] Ringle, Martin and Bertram Bruce. Conversation Failure. In Knowledge Representation and Natural Language Processing, W. Lehnert and M. Ringle, Eds., Lawrence Erlbaum Associates, 1981.

[17] Sidner, Candace Lee. Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse. Ph.D. Th., Massachusetts Institute of Technology, 1979.

[18] Webber, Bonnie Lynn. A Formal Approach to Discourse Anaphora. Ph.D. Th., Harvard University, 1978.

[19] Whorf, Benjamin Lee. Language, Thought, and Reality. MIT Press, 1956.
PHONOTACTIC AND LEXICAL CONSTRAINTS IN SPEECH RECOGNITION*

Daniel P. Huttenlocher and Victor W. Zue
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139

ABSTRACT

We demonstrate a method for partitioning a large lexicon into small equivalence classes, based on sequential phonetic and prosodic constraints. The representation is attractive for speech recognition systems because it allows all but a small number of word candidates to be excluded, using only gross phonetic and prosodic information. The approach is a robust one in that the representation is relatively insensitive to phonetic variability and recognition error.

INTRODUCTION

Speech is the output of a highly constrained system. While it has long been recognized that there are multiple sources of constraint on speech production and recognition, natural language research has tended to focus on the syntactic, semantic, and discourse levels of processing. We believe that constraints at the phonological and lexical levels, although less well understood, are as important in recognition as higher level constraints. For a given language, the speech signal is produced with a limited inventory of possible sounds, and these sounds can only be combined in certain ways to form meaningful words. Knowledge about such constraints is implicitly possessed by native speakers of a given language. For example, an English speaker knows that "vnuk" is not an English word because it violates the phonotactic rules governing the allowable sound sequences of the language. He or she also knows that if an English word starts with three consonants, then the first consonant must be an /s/, and the second consonant must be either /p/, /t/, or /k/. On the other hand, "smeck" is a permissible sequence of sounds in English, but is not a word because it is not in the lexicon. Such phonotactic and lexical knowledge is presumably important in speech recognition, particularly when the acoustic cues to a speech sound are missing or distorted.

Perceptual data demonstrate the importance of these lower level phonological and lexical constraints. First, people are good at recognizing isolated words, where there are no higher-level syntactic or semantic constraints [3]. Second, trained phoneticians are rather poor at phonetically transcribing speech from an unknown language, for which they do not possess the phonotactic and lexical knowledge [11].

Given that perceptual data demonstrate that phonotactic and lexical knowledge is useful in speech recognition, we are concerned with how such knowledge can be used to constrain the recognition task. In this paper we investigate some phonotactic and lexical constraints by examining certain properties of large lexicons. First we consider the effects of representing words in terms of broad phonetic classes rather than specific phones. Then we discuss how this representation handles some common problems in speech recognition such as acoustic variability and segment deletion.

*Research supported by the Office of Naval Research under contract N00014-82-K-0727 and by the System Development Foundation.

PHONOTACTIC CONSTRAINTS CAN BE EXTREMELY USEFUL IN LEXICAL ACCESS

Most of the phonological rules informally gathered by linguists and speech researchers are specified in terms of broad phonetic classes rather than specific phones. For example, the homorganic rule of nasal-stop clusters specifies that nasals and stop consonants must be produced at the same place of articulation.
Thus we have words like "limp" or "can't", but not "limt" or "canp". In speech perception, there is also evidence that people use knowledge about the broad classifications of speech sounds. For example, the non-word "shpeech" is still recognizable as the word "speech", while "tpeech" is not. This is because "s" and "sh" both belong to the same class of sounds (the strong fricatives), while "t" belongs to a different class (the aspirated stops). The perceptual similarity of these broad phonetic classes has long been known [9].

These broad classes are based on so-called manner of articulation differences. For example, the stop consonants /p/, /t/, and /k/ are all produced in the same manner, with closure, release and aspiration. The stops differ from one another in their respective place of articulation, or the shape of the vocal tract and position of the articulators. Manner differences tend to have more robust and speaker-invariant acoustic cues than place differences [8]. This makes broad manner classes attractive for recognition systems. However, until quite recently little was known about the role these constraints play in recognition [1] [14]. Therefore, speech recognition and understanding systems have not made much use of this information [4] [5].

Although the importance of phonotactic constraints has long been known, the magnitude of their predictive power was not apparent until Shipman and Zue reported a set of studies recently [10]. These studies examined the phonotactic constraints of American English from the phonetic distributions in the 20,000-word Merriam Webster's Pocket Dictionary. In one study the phones of each word were mapped into one of six broad phonetic categories: vowels, stops, nasals, liquids and glides, strong fricatives, and weak fricatives. Thus, for example, the word "speak", with a phonetic string given by /spik/, is represented as the pattern:

[strong-fricative][stop][vowel][stop]

It was found that, even at this broad phonetic level, approximately 1/3 of the words in the 20,000-word lexicon can be uniquely specified. One can view the broad phonetic classifications as partitioning the lexicon into equivalence classes of words sharing the same phonetic class pattern. For example, the words "speak" and "steep" are in the same equivalence class. The average size of these equivalence classes for the 20,000-word lexicon was found to be approximately 2, and the maximum size was approximately 200. In other words, in the worst case, a broad phonetic representation of the words in a large lexicon reduces the number of possible word candidates to about 1% of the lexicon. Furthermore, over half of the lexical items belong to equivalence classes of size 5 or less. This distribution was found to be fairly stable for lexicons of about 2,000 or more words; for smaller lexicons the specific choice of words can make a large difference in the distribution.

HOW ROBUST IS A BROAD PHONETIC REPRESENTATION?

The above results demonstrate that broad phonetic classifications of words can, in principle, reduce the number of word candidates significantly. However, the acoustic realization of a phone can be highly variable, and this variability introduces a good deal of recognition ambiguity in the initial classification of the speech signal [6] [7] [12].
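Stated procedurally, the partitioning described above amounts to mapping each word's phone string into broad classes and grouping words by the resulting pattern. The following sketch is illustrative only; the phone-to-class table is abbreviated and the toy lexicon is invented:

```python
from collections import defaultdict

# Abbreviated, illustrative phone-to-class table; a real table covers
# the full phone inventory of the dictionary's transcriptions.
BROAD_CLASS = {
    "p": "stop", "t": "stop", "k": "stop", "b": "stop", "d": "stop",
    "m": "nasal", "n": "nasal",
    "s": "strong-fric", "sh": "strong-fric", "z": "strong-fric",
    "f": "weak-fric", "th": "weak-fric", "v": "weak-fric",
    "l": "liquid-glide", "r": "liquid-glide", "w": "liquid-glide",
    "i": "vowel", "a": "vowel", "u": "vowel", "e": "vowel",
}

def broad_pattern(phones):
    """Map a word's phone sequence into its broad-class pattern."""
    return tuple(BROAD_CLASS[p] for p in phones)

def partition(lexicon):
    """Group the words that share the same broad-class pattern."""
    classes = defaultdict(list)
    for word, phones in lexicon.items():
        classes[broad_pattern(phones)].append(word)
    return classes

lexicon = {"speak": ["s", "p", "i", "k"],
           "steep": ["s", "t", "i", "p"],
           "meet":  ["m", "i", "t"]}
for pattern, words in partition(lexicon).items():
    print(pattern, words)
# "speak" and "steep" share [strong-fric][stop][vowel][stop] and so
# fall in the same equivalence class.
```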
At one extreme, the acoustic characteristics of a phoneme can undergo simple modifications as a consequence of contextual and inter-speaker differences. Figure 1 illustrates the differences in the acoustic signal for the various allophones of /t/ in the words "tree", "tea", "city", and "beaten". At the other extreme, contextual effects can also produce severe modifications in which phonemes or syllables are deleted altogether. Thus, for example, the word "international" can have many different realizations, some of which are illustrated in Figure 2. Not only may phonemes be deleted, some pronunciations of a word may even have a different number of syllables than the clearly enunciated version.

[Figure 1: Spectrograms illustrating the acoustic realizations of the various allophones of /t/, in the words "tree", "tea", "city", "beaten" and "steep"]

[Figure 2: Spectrograms illustrating several possible pronunciations of the word "international": in-ter-na-tion-al, int-na-tion-al, in-ner-na-tion-al, in-ner-nash-nal]

In order to evaluate the viability of a broad phonetic class representation for speech recognition systems, two major problems must first be considered. The first problem is that of mislabeling a phonetic segment, and the second problem is the deletion of a segment altogether. It is important to note that these phenomena can occur as a consequence of the high level of variability in natural speech, as well as resulting from an error by the speech recognition system. That is, not only can the recognizer make a mistake; a given speaker can utter a word with changed or deleted segments. Therefore, even a perfect recognizer would still have "errors" in its input. We address segmental variation and segmental deletion errors in the next two sections.

The scheme proposed by Shipman and Zue can handle allophonic variations, such as the different realizations of /t/. This is because contextual variations tend to affect the detailed acoustic realizations of the phonetic segments, as opposed to the gross manner features used in the broad classes. When accessing the lexicon based on broad phonetic classification, detailed allophonic differences are completely disregarded.

On the other hand, some uncertainties due to inter-speaker differences and recognizer errors are bound to occur. Given such uncertainties, one may ask whether the original results of Shipman and Zue still hold for lexical access. There are a number of ways such a question can be answered. In one study we inferred the effect of these labeling ambiguities by allowing a fixed percentage of the phonetic segments in the lexicon to be unclassified while assuming that the remaining segments are classified correctly. Thus, for example, a 10% phonetic uncertainty is simulated by assuming that 90% of the phonetic segments in the lexicon are classified correctly. The remaining 10% of the segments can effectively be matched to any of the six phonetic categories. In order to accommodate such ambiguities, words in the lexicon must now contain not only the correct broad phonetic representation, but also those representations resulting from including unclassified segments. Admittedly our assumptions are not completely realistic, since labeling uncertainties do not occur for only a fixed percentage of the segments. Furthermore, labeling uncertainties usually arise among subsets of the broad categories. For example, it may be possible to confuse a strong fricative with a weak one, but not a strong fricative with a vowel. Nevertheless, we believe that such a simulation provides a glimpse of the effect of labeling uncertainties.

Table 1 compares the lexical distributions obtained from the original results of Shipman and Zue (in the first column) with those obtained by allowing 10% and 20% labeling uncertainty (in the second and third columns). The results indicate that, even allowing for a good deal of classification ambiguity, lexical constraints imposed by sequences of broad phonetic classes are still extremely powerful. In all cases, over 30% of the lexical items can be uniquely specified, and over 50% of the time the size of the equivalence class is 5 or less. On the other hand, the maximum sizes of the equivalence classes grow steadily as the amount of labeling uncertainty increases.

                                 Whole    10% Label.   20% Label.
                                 Word     Errors       Errors
  % Uniquely Specified           32%      32%          32%
  % In Classes of Size 5 or Less 56%      56%          55%
  Max Class Size                 210      278          346

Table 1: Comparison of Lexical Constraint with and without Labeling Uncertainty

PROSODIC INFORMATION CAN ALSO AID LEXICAL ACCESS

The broad phonetic class representation cannot handle segment or syllable deletions, since when a segment deletion occurs, the broad phonetic class sequence is affected. Traditionally, this problem is solved by expanding the lexicon via phonological rules, in order to include all possible pronunciations of each word [13]. We find this alternative unattractive for several reasons. For example, dictionary expansion does not capture the nature of phonetic variability. Once a given word is represented as a set of alternate pronunciations, the fact that certain segments of a word are highly variable while others are relatively invariant is completely lost. In fact, below we see that the less variable segments of a word provide more lexical constraint than those segments which are highly variable. Another problem with lexical expansion is that of assigning likelihood measures to each pronunciation. Finally, storing all alternate pronunciations is computationally expensive, since the size of the lexicon can increase substantially.

Some segments of a word are highly variable, while others are more or less invariant. Depending on the extent to which the variable segments constrain lexical access, it might be possible to represent words only in terms of their less variable parts. For instance, in American English most of the phonological rules apply to unstressed syllables. In other words, phonetic segments around unstressed syllables are more variable than those around stressed syllables. Perceptual results have also shown that the acoustic cues for phonetic segments around unstressed syllables are usually far less reliable than around stressed syllables [2]. Thus, one may ask to what extent phones in unstressed syllables are necessary for speech recognition.

In an attempt to answer this question, we compared the relative lexical constraint of phones in stressed versus unstressed syllables. In one experiment, we classified the words in the 20,000-word Webster's Pocket Dictionary either according to only the phones in stressed syllables, or according to only the phones in unstressed syllables. In the first condition, the phones in stressed syllables were mapped into their corresponding phonetic classes while the entire unstressed syllables were mapped into a "placeholder" symbol. In the second condition the opposite was done. For example, in the first condition the word "paper" is represented by the pattern:

[stop][vowel][*]

where * is the unstressed syllable marker. In the second condition the word is represented by the pattern:

[*][stop][vowel]

where * is the stressed syllable marker. Note that at first glance a significant amount of information is lost by mapping an entire syllable into a placeholder symbol. Closer examination reveals, however, that the placeholder symbol retains the prosodic structure of the word. A notation which makes this more explicit combines the partial phonetic classification with syllabic stress information. Thus, in the first condition, the word "paper" would be represented as:

[stop][vowel] + [S][U]

where [S] and [U] correspond to stressed and unstressed syllables, respectively.

The results of this experiment are given in the second two columns of Table 2. The results summarized in the table are obtained by explicitly representing the prosodic information as sequences of stressed and unstressed syllables. The results for "wildcarding" the deleted syllables are almost identical and hence are not presented here. The first column of the table gives the results for the whole word (as in [10]). The second and third columns show the cases where phonetic information is only preserved in the stressed or in the unstressed syllables. It should be noted that the results cannot be accounted for simply on the basis of the number of phones in stressed versus unstressed syllables. For the entire lexicon, there are only approximately 1.5 times as many phones in stressed than in unstressed syllables. In addition, if one considers only polysyllabic words, there are almost equal numbers of phones in stressed and unstressed syllables, yet the lexical distribution remains similar to that in Table 2.

These results demonstrate that the phonotactic information in stressed syllables provides much more lexical constraint than that in unstressed syllables. This is particularly interesting in light of the fact that the phones in stressed syllables are much less variable than those in unstressed syllables. Therefore, recognition systems should not be terribly concerned with correctly identifying the phones in unstressed syllables. Not only is the signal highly variable in these segments, making classification difficult; the segments do not constrain recognition as much as the less variable segments.

This representation is very robust with respect to segmental and syllabic deletions. Most segment deletions, as was pointed out above, occur in unstressed syllables. Since the phones in unstressed syllables are not included in the representation, their deletion or modification is ignored. Syllabic deletions occur exclusively in unstressed syllables, and usually in syllables containing just a single phone. Thus, words with a single-phone unstressed syllable can be stored according to two syllabic stress patterns. For example, the word "international" would be encoded by the phones in its stressed syllables:

[vowel][nasal][nasal][vowel][strong-fricative]

with the two stress patterns [S][U][S][U][U] and [S][U][S][U] for the 5 and 4-syllable versions. The common pronunciations of "international" (e.g., those in Figure 2) are all encoded by these two representations, while unreasonable pronunciations like "interashnel" are excluded.
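The double-entry encoding just described is easy to sketch. In the fragment below (an illustration only; the data structures and names are not from the paper), a word is stored under its stressed-syllable classes paired with each admissible stress pattern:

```python
# Illustrative encoding of a word by the broad classes of its
# stressed-syllable phones plus its syllable stress pattern, as in
# the "paper" and "international" examples above.
def encode(stressed_classes, stress_pattern):
    """A lexical key: stressed-syllable broad classes paired with a
    stress pattern such as 'SUSUU' (S = stressed, U = unstressed)."""
    return (tuple(stressed_classes), tuple(stress_pattern))

# "international": phones of stressed syllables only, stored under
# both the 5-syllable and the reduced 4-syllable stress pattern.
stressed = ["vowel", "nasal", "nasal", "vowel", "strong-fric"]
entries = {
    encode(stressed, "SUSUU"): "international",
    encode(stressed, "SUSU"):  "international",
}

def lookup(stressed_classes, stress_pattern):
    return entries.get(encode(stressed_classes, stress_pattern))

# Both "in-ter-na-tion-al" (SUSUU) and the reduced "in-ner-nash-nal"
# (SUSU) retrieve the same word, while an input whose stressed-
# syllable classes differ will not.
print(lookup(stressed, "SUSU"))   # -> international
```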
                                 Whole    Stressed   Unstressed
                                 Word     Syls.      Syls.
  % Uniquely Specified           32%      17%        8%
  % In Classes of Size 5 or Less 56%      38%        19%
  Average Class Size             2.3      3.8        7.7
  Max Class Size                 210      291        3717

Table 2: Comparison of Lexical Constraint in Stressed vs Unstressed Syllables

SUMMARY

We have demonstrated a method for encoding the words in a large lexicon according to broad phonetic characterizations. This scheme takes advantage of the fact that even at a broad level of description, the sequential constraints on allowable sound sequences are very strong. It also makes use of the fact that the phonetically variable parts of words provide much less lexical constraint than the phonetically invariant parts. The interesting properties of the representation are that it is based on relatively robust phonetic classes, it allows for phonetic variability, and it partitions the lexicon into very small equivalence classes. This makes the representation attractive for speech recognition systems [15]. Using a broad phonetic representation of the lexicon is a search avoidance technique, allowing a large lexicon to be pruned to a small set of potential word candidates. An essential property of such a technique is that it retains the correct answer in the small candidate set. We have demonstrated that, for a wide variety of speech phenomena, a broad phonetic representation has this property.

REFERENCES

[1] Broad, D.J. and Shoup, J.E. (1975) "Concepts for Acoustic Phonetic Recognition" in R.D. Reddy, Speech Recognition: Invited Papers Presented at the 1974 IEEE Symposium. Academic Press, New York.

[2] Cutler, A. and Foss, D.J. (1977) "On the Role of Sentence Stress in Sentence Processing", Language and Speech, vol. 20, 1-10.

[3] Dreher, J.J. and O'Neill, J.J. (1957) "Effects of Ambient Noise on Speaker Intelligibility for Words and Phrases", Journal of the Acoustical Society of America, vol. 29, no. 12.

[4] Erman, L.D., Hayes-Roth, F., Lesser, V.R., and Reddy, R.D. (1980) "The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty", Computing Surveys, vol. 12, no. 2, 213-253.

[5] Klatt, D.H. (1977) "Review of the ARPA Speech Understanding Project", Journal of the Acoustical Society of America, vol. 62, no. 6, 1345-1366.

[6] Klatt, D.H. (1980) "Speech Perception: A Model of Acoustic Phonetic Analysis and Lexical Access" in R. Cole, Perception and Production of Fluent Speech. Lawrence Erlbaum Assoc., Hillsdale, N.J.

[7] Klatt, D.H. (1983) "The Problem of Variability in Speech Recognition and in Models of Perception". Invited paper at the 10th International Congress of Phonetic Sciences.

[8] Lea, W.A. (1980) Trends in Speech Recognition. Prentice-Hall, N.Y.

[9] Miller, G.A. and Nicely, P.E. (1954) "An Analysis of Perceptual Confusions Among Some English Consonants", Journal of the Acoustical Society of America, vol. 27, no. 2, 338-352.

[10] Shipman, D.W. and Zue, V.W. (1982) "Properties of Large Lexicons: Implications for Advanced Isolated Word Recognition Systems", Conference Record, IEEE International Conference on Speech Acoustics and Signal Processing, Paris, France, 546-549.

[11] Shockey, L. and Reddy, R.D. (1974) "Quantitative Analysis of Speech Perception: Results from Transcription of Connected Speech from Unfamiliar Languages", Speech Communication Seminar, G. Fant (Ed).

[12] Smith, A. (1977) "Word Hypothesization for Large-Vocabulary Speech Understanding Systems", Doctoral Dissertation, Carnegie-Mellon University, Department of Computer Science.

[13] Woods, W. and Zue, V.W.
(1976) "Dictionary Expansion via Phonological Rules for a Speech Understanding System", Conference Record, IEEE International Conference on Speech Acoustics and Signal Processing, Phila., Pa., 561-564.

[14] Zue, V.W. (1981) "Acoustic-Phonetic Knowledge Representation: Implications from Spectrogram Reading Experiments", Proceedings of the 1981 NATO Advanced Summer Institute on Speech Recognition, Bonas, France.

[15] Zue, V.W. and Huttenlocher, D.P. (1983) "Computer Recognition of Isolated Words from Large Vocabularies", IEEE Computer Society Trends and Applications Conference, Washington, D.C., 121-125.
RESEARCHER: AN OVERVIEW*

Michael Lebowitz
Department of Computer Science
Computer Science Building, Columbia University
New York, NY 10027

Abstract

Described in this paper is a computer system, RESEARCHER, being developed at Columbia, that reads natural language text in the form of patent abstracts and creates a permanent long-term memory based on concepts generalized from these texts, forming an intelligent information system. This paper is intended to give an overview of RESEARCHER. We will describe briefly the four main areas dealt with in the design of RESEARCHER: 1) knowledge representation, where a canonical scheme for representing physical objects has been developed, 2) memory-based text processing, 3) generalization and generalization-based memory organization that treats concept formation as an integral part of understanding, and 4) generalization-based question answering.

*This research was supported in part by the Defense Advanced Research Projects Agency under contract N00039-82-C-0427.

1 Introduction

Natural language processing and memory organization are logical components of intelligent information systems. At Columbia we are developing a computer system, RESEARCHER, that reads natural language text in the form of patent abstracts (disc drive patents provide the initial domain) and creates a permanent long-term memory based on generalizations that it makes from these texts. In terms of task, RESEARCHER is similar to IPP [Lebowitz 80; Lebowitz 83], a program that read and remembered news stories. The need to deal with complex object representations and descriptions has introduced a whole new range of problems not considered for that program.

2 Representation

The first problem to be worked on in any new domain in AI is the design of a scheme to represent relevant concepts. AI researchers have not extensively investigated representing complex physical objects (although [Lehnert 78; Kosslyn and Shwartz 77] and others have addressed some of the issues we are concerned with). We have developed a frame-based system with the flavor of Schank's Conceptual Dependency [Schank 72], that deals with objects instead of actions. This scheme is described in detail in [Wasserman and Lebowitz 82].

The basic frame-like structure used to represent objects is known as a memette. Memettes are used as part of a hierarchical set of prototypes (generalized descriptions derived from specific instances, as described in Section 4). A given memette may be describing a fairly general object (e.g., a prototypical disc drive) or a more specific, idiosyncratic object (e.g., a model 19 floppy disc drive). In a somewhat simplified form, the basic structure of a memette is shown in Figure 1.

(NAME: <name-of-object>
 TYPE: unitary or composite
 STRUCTURE: <shape-descriptor> if unitary
            <a list of relation records> if composite)

Figure 1: Representation Schema

The TYPE slot of a memette indicates whether this is a single indivisible structure (unitary) or a conglomeration of two or more pieces (composite). The STRUCTURE field contains either a description of the shape of an object, if it is unitary, or a set of relation records, if it is composite. Shape-descriptors are graphical representations of objects based mostly on visual properties. Relation records generally describe binary physical relations between parts of a complex object. To date we have not explored shape descriptors in great detail.
We expect to have a system that uses prototypical shapes (in much the same way we use prototypical object descriptions), combining declarative and image-like representations in much the same way as Kosslyn and Shwartz's model [Kosslyn and Shwartz 77]. We have studied object relations in much greater detail, and have developed a canonical scheme for describing the various ways that two objects can relate to each other. This scheme is used to represent, among other things, the meaning of words or phrases such as "above", "on top of", and "surrounding" that are used to describe physical relations.

Figure 2 shows the major elements used in relation representation. Various combinations of values for the fields shown in Figure 2 provide wide coverage of the kinds of relations that objects can have with each other.

  Property      Description                                Word with property
  distance      distance between objects                   near
  contact       strength of contact                        touching
  location      relative direction between objects         above
  orientation   relative object orientation                parallel
  enclosure     description of partial or full enclosure   encircled

Figure 2: Canonical Relation Fields

Certain combinations of relation fields occur together often enough that relations, like objects and shapes, can frequently be described, both in text and in our representations, in terms of prototypes. The normal way to represent an object is in terms of prototypical relations such as ON-TOP-OF and SURROUNDS that are in turn represented canonically with the fields in Figure 2. As an example of how our representation scheme is used, consider EX1, taken from an abstract of a US patent about a computer disc drive.

EX1 - Enclosed Disc Drive having Combination Filter Assembly

A combination filter system for an enclosed disc drive in which a breather filter is provided in a central position in the disc drive cover and a recirculating air filter is positioned concentrically about the breather filter.

A possible memette structure for this patent is shown in Figure 3.

(NAME: enclosed-disc-drive
 TYPE: composite
 STRUCTURE: ((SURROUNDS enclosure disc-drive)))
(NAME: enclosure
 TYPE: composite
 STRUCTURE: ((ON-TOP-OF cover case)))
(NAME: case
 TYPE: unitary
 STRUCTURE: (box open-on-top))
(NAME: disc-drive
 TYPE: composite
 STRUCTURE: unknown)
(NAME: cover
 TYPE: composite
 STRUCTURE: ((SURROUNDS[centrally] cover breather-filter)
             (SURROUNDS[centrally] recirculating-air-filter breather-filter)))
(NAME: breather-filter TYPE: unknown)
(NAME: recirculating-air-filter TYPE: unknown)

Figure 3: Representation for EX1

The basic idea here is that we have a set of objects related to each other by prototypical relations (which can be broken down into their canonical components in order to make low-level inferences, when needed). Note that some of the information shown in Figure 3 is not stated explicitly in EX1. For example, the case is specified as a unitary memette; since virtually nothing was said about the enclosure, this information was assumed by the reader (from knowledge of prototypical cases). Four prototypical relations with their corresponding role fillers are used in this small memette structure: ON-TOP-OF, SURROUNDS and SURROUNDS[centrally] (twice). These define the relations among the case, cover, and various filters that are described in EX1.
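The memette structure of Figures 1 and 3 can be rendered as a small sketch. The slot names follow the figures; the Python types and the helper below are our own illustration, not RESEARCHER's actual implementation:

```python
# Illustrative rendering of memettes (Figure 1) and the EX1
# representation (Figure 3); field names follow the figures.
from dataclasses import dataclass

@dataclass
class Memette:
    name: str
    type: str                # "unitary", "composite", or "unknown"
    structure: object = None # shape-descriptor or relation records

def rel(relation, subject, obj):
    """A relation record: a prototypical binary relation between
    two parts of a complex object."""
    return (relation, subject, obj)

ex1 = [
    Memette("enclosed-disc-drive", "composite",
            [rel("SURROUNDS", "enclosure", "disc-drive")]),
    Memette("enclosure", "composite",
            [rel("ON-TOP-OF", "cover", "case")]),
    Memette("case", "unitary", ("box", "open-on-top")),
    Memette("cover", "composite",
            [rel("SURROUNDS[centrally]", "cover", "breather-filter"),
             rel("SURROUNDS[centrally]",
                 "recirculating-air-filter", "breather-filter")]),
    Memette("breather-filter", "unknown"),
    Memette("recirculating-air-filter", "unknown"),
]
```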
Unitary memettes do not contain any relation records under their STRUCTURE property; instead, they have a single shape-descriptor. "Box open-on-top" was given as the shape-descriptor of the case. This is not a particularly functional piece of information. As yet there has been no strong need to codify shape-descriptors.

3 Text Processing

RESEARCHER begins its processing of a patent by determining from the text a conceptual representation of the kind described in Section 2. In the ultimate version of the program, this process will be strongly integrated with memory search and generalization. The conceptual analysis performed by RESEARCHER is based on the memory-based understanding techniques designed for IPP. This processing involves top-down recognition of structures in memory integrated with simple, bottom-up syntactic techniques.

Naturally, since patents are quite different from news stories, both because they describe complex objects and because they make considerable use of special physical language, the precise techniques used in RESEARCHER are distinct from those used in IPP. RESEARCHER is still predictive in nature. However, since patents are not focused on events as are news stories, the action-based predictions of IPP (or other conceptual analyzers, e.g., [Birnbaum and Selfridge 81]) must be extensively modified. Instead, the predictions used for understanding in RESEARCHER are based on the physical descriptions built up, in much the same way as IPP made predictions from events. The goal of RESEARCHER's understanding process is to record in memory how a new object being described differs from generalized objects already known (keeping in mind that these are idiosyncratic), and ultimately to generalize new prototypes.

Processing in RESEARCHER concentrates on words that refer to physical objects in memory and words that describe physical relations between such objects. Such words are known as Memory Pointers (MPs) and Relation Words (RWs). These words guide RESEARCHER's processing, making use of any information gathered bottom-up. Conceptual analysis in this domain involves careful processing to identify MP phrases (usually noun phrases) as memettes, modifications to memettes, and repeated mentions of memettes. RWs are used to create the relations between memettes described in Section 2.

Particular care in this domain has to be given to phrases of the sort "X relation1 Y relation2 Z". It is frequently impossible to tell from the surface structure if relation2 relates Z to Y or to X. So, in "the read/write head above a disc connected to a cable" it is not apparent whether the disc or the read/write head is connected to the cable. Prepositional phrase attachment is a well-known problem, and is especially crucial in the patent domain. We have discovered that a set of heuristics that maintains a single memette in focus based on the memettes and relations involved will solve most of these problems (although in the long run we expect to use, in addition, a model of the device being described). Figure 4 shows the output from RESEARCHER's processing of the initial part of EX1.

*(process-patent EX1)
Running RESEARCHER at 6:22:03 PM
Patent: EX1
(A COMBINATION FILTER SYSTEM FOR AN ENCLOSED DISC DRIVE IN WHICH A BREATHER FILTER IS PROVIDED IN A CENTRAL POSITION IN THE DISC DRIVE COVER AND A RECIRCULATING AIR FILTER IS CONCENTRICALLY POSITIONED ABOUT THE BREATHER FILTER *STOP*)
Processing: DISC-DRIVE1
<rest of processing>

Figure 4: RESEARCHER Processing EX1
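A plausible, highly simplified rendering of the single-focus attachment heuristic mentioned above is sketched here. RESEARCHER's actual rules are richer; the policy shown (the head object simply stays in focus) is an assumption made purely for illustration:

```python
# Simplified, speculative sketch of a single-focus attachment
# heuristic for "X rel1 Y rel2 Z" phrases.
def attach(phrases):
    """phrases: [(object, relation-to-focus), ...].  Each relation is
    attached to the memette currently in focus."""
    focus = phrases[0][0]          # the head object 'X' starts in focus
    relations = []
    for obj, relation in phrases[1:]:
        relations.append((focus, relation, obj))
        # simplistic focus rule: the head object stays in focus, so a
        # later relation ("connected to a cable") also attaches to it
    return relations

# "the read/write head above a disc connected to a cable"
print(attach([("r/w-head", None),
              ("disc", "ABOVE"),
              ("cable", "CONNECTED-TO")]))
# -> [('r/w-head', 'ABOVE', 'disc'),
#     ('r/w-head', 'CONNECTED-TO', 'cable')]
```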
Figure 4 shows the output from RESEARCHER's processing of the initial part of EX1.

*(process-patent EX1)
Running RESEARCHER at 6:22:03 PM
Patent: EX1
(A COMBINATION FILTER SYSTEM FOR AN ENCLOSED DISC DRIVE IN WHICH
 A BREATHER FILTER IS PROVIDED IN A CENTRAL POSITION IN THE DISC
 DRIVE COVER AND A RECIRCULATING AIR FILTER IS CONCENTRICALLY
 POSITIONED ABOUT THE BREATHER FILTER *STOP*)
Processing: DISC-DRIVE#
<rest of processing>

Figure 4: RESEARCHER Processing EX1

In the output trace in Figure 4, we can see how RESEARCHER identifies the various objects mentioned in the text as instances of general structures described in memory, such as DISC-DRIVE# and FILTER#. RESEARCHER creates new memettes to represent the specific instances of these structures (MEM0 for the "filter system", for example) and records how these instances differ from the abstract prototypes. The full processing of EX1 leads to the representation shown in Figure 5.

Text Representation: a list of relations

Subject:                Relation:     Object:
'SYSTEM'                R-PART        DISC-DRIVE#
'SYSTEM'                R-PART        BREATHER-FILTER#
DISC-DRIVE#             R-PART        COVER#
'SYSTEM'                R-PART        RECIRCULATING-FILTER#
ENCLOSURE#              R-SURROUNDS   DISC-DRIVE#
COVER#                  R-SURROUNDS   BREATHER-FILTER#      LOCATION:CENTER
RECIRCULATING-FILTER#   R-SURROUNDS   BREATHER-FILTER#      LOCATION:CENTER

Figure 5: RESEARCHER Representation of EX1

The output in Figure 5 indicates that RESEARCHER has identified the important physical relations mentioned in EX1. (Actually, the relations are among instances of the abstract memettes.) It basically includes all the relations shown earlier in Figure 3, our target representation for EX1, plus part relationships. Naturally, the complete text of the patent describes many more relations.

4 Generalization

In order to store information about the patents that are read for later query, RESEARCHER makes use of Generalization-Based Memory. This method, which was developed for IPP [Lebowitz 83] and is related to Schank's MOPs [Schank 82], involves storing information about given items in memory in terms of generalized prototypes. The idea is to locate the prototype in memory that best describes an example, and then store only how the example varies from the prototype. This allows redundant information to be stored only once and allows queries to be answered in terms of descriptive prototypes.

For Generalization-Based Memory to be effective, it is not adequate to simply make use of pre-specified prototypes. It is necessary for the system to create new prototypes through a generalization process. This process involves identifying similar objects and creating new concepts from them using a comparison technique of the sort used by IPP [Lebowitz 83], related to traditional "learning from examples" programs (e.g. [Winston 72]). In the disc drive domain, typical concepts the generalization process might identify as being useful would be floppy disc drives or double-sided discs. Crucially, RESEARCHER must do this without being specifically provided with examples of these concepts.
Instead, when similar instances arrive as a stream of input, it stores them together in its Generalization-Based Memory and notices similarities. The representations for two similar, slightly simplified disc drive patents, used to test the initial version of RESEARCHER's generalization module, are shown in Figure 6.

enclosed-disc-drive1
  disk-drive1:  motor#  disk#  spindle#  r/w-head#
  enclosure1:   cover#  --on-top-of-->  support-member#

enclosed-disc-drive2
  disk-drive2:  motor#  disk#  spindle#  r/w-head#
  enclosure2:   cover#  --on-top-of-->  base#
                cover#  --surrounds-->  b-filter#  r-filter#

Figure 6: Similar Disc Drives

Clearly the two disc drives in Figure 6 have much in common that can be the source of a new concept derived through generalization -- an enclosed disc drive. Figure 7 shows the concept created by RESEARCHER's generalization module.

enclosed-disc-drive#
  disk-drive#:  motor#  disk#  spindle#  r/w-head#
  enclosure#:   cover#  --on-top-of-->  < >

(enclosed-disc-drive1 and enclosed-disc-drive2 stored as variants
 of enclosed-disc-drive#)

Figure 7: Generalized Enclosed Disc Drive

The idea illustrated in Figure 7 is that RESEARCHER finds the parts of two objects that are similar, and abstracts them out into a generalized concept. In this example, the two devices contained similar disc drives and enclosures. Each had a cover on top of some other object. So these similarities form the basis of a generalized enclosed disc drive. Only the additional parts and relations of each instance need be recorded in memory along with the generalization.

Adapting Generalization-Based Memory for use on structural descriptions of the sort described in Section 2 has proved to be a complex and difficult problem, revolving around the assorted relations among the objects in the descriptions. Here we will only present one of the major problems and suggest the nature of the possible solution. The central problem in generalizing structural descriptions is the process of matching two representations (either of two objects or of an object and a prototype), determining what parts and relations correspond (as was pointed out for simpler examples in [Winston 72]). Clearly, if we wish to determine that the disk mounts in two drives are similar, they must be compared with each other. Since, as mentioned in Section 2, the central part of the description of complex objects is a set of relations, we must associate the relations in one object with those in the other.

The matching process here is quite a difficult one. The main problem is that we are dealing with structured objects, and the parts of very similar objects may be aggregated differently in various descriptions. So, for example, a read/write head might be described as a direct part of a disc drive in one patent, but as part of a "read/write assembly" in another. This makes the inherent similarity hard to identify. At the moment, we deal with this "level problem" with simple heuristics that allow only a limited amount of "level hopping" during the comparison process (to avoid the need to consider every possible correspondence among levels) and a bit of combinatoric force; a sketch of the flavor of this comparison appears below. However, we feel that the ultimate solution lies in more extensive use of Generalization-Based Memory. If a new object can be identified as an instance of a generalized concept, with only a few minor differences (which will be done with a discrimination-net-based search of the sort described in [Lebowitz 83]), then the levels of aggregation will be set. In effect, the existing concepts create a canonical, but dynamic, framework for describing new objects. In addition, by using Generalization-Based Memory, we need compare only a small number of differences between objects, rather than complex descriptions.
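As an illustration of the comparison step, the following sketch -- ours, far cruder than RESEARCHER's actual heuristics, with hypothetical part names -- matches part sets while flattening at most one aggregation level, so a read/write head buried inside a "read/write assembly" can still correspond to one that is a direct part of the drive.

# A sketch (ours) of matching with limited "level hopping".
def parts(description, hops=1):
    """description maps each part to a list of its subparts; parts are
    collected with at most `hops` levels of flattening."""
    found = set()
    for part, subparts in description.items():
        found.add(part)
        if hops > 0:
            found.update(subparts)
    return found

def generalize(desc1, desc2):
    """Shared parts seed the new prototype; the remainders are stored
    with each instance as its differences from the generalization."""
    common = parts(desc1) & parts(desc2)
    return common, parts(desc1) - common, parts(desc2) - common

drive1 = {'disk-drive#': ['motor#', 'disk#', 'r/w-head#'],
          'enclosure#': ['cover#', 'support-member#']}
drive2 = {'disk-drive#': ['motor#', 'disk#'],
          'r/w-assembly#': ['r/w-head#'],   # same head, aggregated differently
          'enclosure#': ['cover#', 'base#']}

prototype, extras1, extras2 = generalize(drive1, drive2)
# 'r/w-head#' lands in the prototype despite the differing aggregation.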
5 Question Answering

The presentation of information from a complex set of data in order to answer user questions is an interesting problem in its own right (as has been pointed out by many researchers, including [Lehnert 77; McKeown 82]). As part of RESEARCHER, we have included a question answering module that concentrates on taking advantage of Generalization-Based Memory to more effectively convey information to a user. Here we can only provide a flavor of the approach we are taking.

RESEARCHER accepts questions in natural language format. It uses the same parser used to process texts to create a conceptual representation of the question. This is much the same approach as taken in BORIS [Dyer 82]. Also, in similar fashion to BORIS, we eventually expect the question parsing process to identify actual structures in memory, greatly simplifying the answering process.

Once RESEARCHER has developed a conceptual representation of a question, it searches memory for an answer using an approach similar to [Lehnert 77]. That is, a set of heuristics is used to decide upon the type of question and what constitutes a reasonable answer. The answer heuristics focus on using generalizations that occur in memory to quickly convey large amounts of information, and then describing how particular instances may differ from the generalizations. We are also looking at how generalization-based heuristics might aid in determining what aspects of very complex representations to try and convey to a questioner.

6 Conclusion

The development of RESEARCHER has led to interesting results in a number of areas. Natural language that involves complex physical objects is an exciting topic, one that can lead to many interesting applications. We believe that the representation scheme described here, the application of memory-based parsing and Generalization-Based Memory, and generalization-based question answering will all help lead to the successful development of powerful, robust, dynamic understanding systems.

Acknowledgments

Much of the work described here was carried out by a group of Computer Science PhD students, including Kenneth Wasserman, Cecile Paris, Tom Ellman and Laila Moussa; Master's students, including Mark Lerner; and undergraduates, including Erik and Galina Datskovsky. Comments by Kathleen McKeown on a draft of this paper were greatly appreciated.

References

[Birnbaum and Selfridge 81] Birnbaum, L. and Selfridge, M. Conceptual analysis of natural language. In R. C. Schank and C. K. Riesbeck (Eds.), Inside Computer Understanding, Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1981, pp. 318-353.

[Dyer 82] Dyer, M. G. In-depth understanding: A computer model of integrated processing for narrative comprehension. Technical Report 219, Yale University Department of Computer Science, 1982.

[Kosslyn and Shwartz 77] Kosslyn, S. M. and Shwartz, S. P. A simulation of visual imagery. Cognitive Science 1 (1977), 265-295.

[Lebowitz 80] Lebowitz, M. Generalization and memory in an integrated understanding system. Technical Report 186, Yale University Department of Computer Science, 1980. PhD Thesis.

[Lebowitz 83] Lebowitz, M. Generalization from natural language text. Cognitive Science 7, 1 (1983), 1-40.

[Lehnert 77] Lehnert, W. G. The Process of Question Answering. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1978.

[Lehnert 78] Lehnert, W. G. Representing physical objects in memory. Technical Report 131, Yale University Department of Computer Science, 1978.

[McKeown 82] McKeown, K. R. Generating natural language text in response to questions about database structure. Ph.D. Thesis, University of Pennsylvania, 1982.

[Schank 72] Schank, R. C. Conceptual dependency: A theory of natural language understanding. Cognitive Psychology 3, 4 (1972), 552-631.

[Schank 82] Schank, R. C.
Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge University Press, New York, 1982.

[Wasserman and Lebowitz 82] Wasserman, K. and Lebowitz, M. Representing complex physical objects in memory. Technical report, Columbia University Department of Computer Science, 1982.

[Winston 72] Winston, P. H. Learning structural descriptions from examples. In P. H. Winston (Ed.), The Psychology of Computer Vision, McGraw-Hill, New York, 1975.
1983
33
226
THEORY RESOLUTION: BUILDING IN NONEQUATIONAL THEORIES

Mark E. Stickel
Artificial Intelligence Center
SRI International, Menlo Park, CA 94025

ABSTRACT

Theory resolution constitutes a set of complete procedures for building nonequational theories into a resolution theorem-proving program so that axioms of the theory need never be resolved upon. Total theory resolution uses a decision procedure that is capable of determining inconsistency of any set of clauses using predicates in the theory. Partial theory resolution employs a weaker decision procedure that can determine potential inconsistency of a pair of literals. Applications include the building in of both mathematical and special decision procedures, such as for the taxonomic information furnished by a knowledge representation system.

I INTRODUCTION

Building theories into derived inference rules so that axioms of the theory are never resolved upon has enormous potential for reducing the size of the exponential search space commonly encountered in resolution theorem proving [5,9]. Plotkin's work on equational theories [15] was concerned with general methods for building in theories that are equational (i.e., theories that can be expressed as a set of either equalities or, by slight extension, equivalences of a pair of literals). This building in of equational theories consisted of using special unification algorithms and reducing terms to normal form. This work has been extended substantially, particularly in the area of development of special unification algorithms for various equational theories [16].

[This research was supported by the Defense Advanced Research Projects Agency under Contract N00039-80-C-0575 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States government.]

Not all theories that it would be useful to build in are equational. For example, reasoning about orderings and other transitive relations is often necessary, but using ordinary resolution for this is quite inefficient. It is possible to derive an infinite number of consequences from a < b and (x < y) ∧ (y < z) ⊃ (x < z), despite the obvious fact that no refutation based on just these two clauses is possible. A solution to this problem is to require that use of the transitivity axiom be restricted to occasions when either there are matches for two of its literals (partial theory resolution) or a complete refutation of the ordering part of the clauses to be refuted can be found (total theory resolution).

Another important form of reasoning in artificial intelligence applications addressed by knowledge representation systems [4] is reasoning about taxonomic information and property inheritance. One of our goals is to be able to take advantage of the efficient reasoning provided by knowledge representation systems in this area by using the knowledge representation system as a taxonomy decision procedure in a larger deduction system. Combining such systems makes sense, since it relieves the general-purpose deduction system of the need to do taxonomic reasoning and, in addition, extends the power of the knowledge representation system towards greater logical completeness. Other researchers have also cited advantages of integrating knowledge representation systems with more general deductive systems [3,18].
KRYPTON [2] represents an approach to constructing a knowledge representation system composed of two parts: a terminological component (the TBox) and an assertional component (the ABox). For such systems, theory resolution indicates in general how information can be provided to the ABox by the TBox and how it can be used by the ABox.

Building in nonequational theories differs from building in equational theories. Because equational theories are defined by equalities or equivalences, a term or literal can always be replaced by an equal term or equivalent literal. This is not the case for nonequational theories that are expressed in terms of implication rather than equivalence. If we build in a nonequational theory of taxonomic information that includes Man(x) ⊃ Person(x), we would expect to be able to infer Person(John) from Man(John), but not ¬Person(Mary) from ¬Man(Mary) (i.e., replacement of Man(x) by Person(x) is permitted only in those cases in which Man occurs unnegated).
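A minimal sketch of this directional behavior -- ours, with a toy hierarchy rather than anything from the paper:

# A sketch (ours) of directional inference in a taxonomic theory:
# positive facts propagate up the hierarchy; negated facts do not
# (their valid propagation would be downward, by contraposition).
SUPERS = {'Man': ['Person'], 'Woman': ['Person'], 'Person': []}

def consequences(pred, arg, negated=False):
    """Unit consequences of [not] pred(arg) under the taxonomy."""
    if negated:
        # not-Man(Mary) licenses nothing about Person(Mary).
        return {('not', pred, arg)}
    derived = {('', pred, arg)}
    for sup in SUPERS.get(pred, []):
        derived |= consequences(sup, arg)
    return derived

print(consequences('Man', 'John'))
# {('', 'Man', 'John'), ('', 'Person', 'John')}
print(consequences('Man', 'Mary', negated=True))
# {('not', 'Man', 'Mary')} -- Person(Mary) is left open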
Nonequational theories may express conditional inconsistency of a pair of literals. For instance, (x < y) ∧ (y < z) ⊃ (x < z) expresses the fact that y < z and ¬(x < z) are inconsistent only in the presence of x < y.

Theory resolution is a set of complete procedures for building in nonequational theories using decision procedures as components of a more general deduction system. Two forms are described. Total theory resolution employs a decision procedure that can determine inconsistency of any set of clauses using predicates of the theory, and is quite restricted as to what inferences it will make. Partial theory resolution is less restricted as to what inferences it will make, but requires much less of the decision procedure -- making it more feasible, for example, to use knowledge representation systems as the decision procedure. We will give definitions and completeness proofs for the ground case of theory resolution and definitions for the general case. Completeness for the general case follows directly from ground case completeness. We consider only clausal resolution here, but these results should be easily extendable to nonclausal resolution [10,13,22].

II TOTAL THEORY RESOLUTION

In building in a theory T, we are interested in ascertaining whether a set of clauses S is T-inconsistent (i.e., whether S ∪ T is inconsistent). If we have a decision procedure for T that is capable of finding minimally T-inconsistent subsets of clauses from any set of clauses using only predicates in T, then it can be applied to S, with all literals having predicates not in T removed, to create an inference rule (total theory resolution) that derives clauses containing no occurrences of predicates that are referred to in T. A theorem justifying such a rule of inference follows. In it, P is the set of predicates in T, S corresponds to S ∪ T above, and T is some subset of S_P. The decision procedure for T must determine T-inconsistency of sets of clauses from S_P − T and W_P.

Theorem. Let S be a set of ground clauses and P be a set of predicates. Let S_P be the set of all clauses of S containing only predicate symbols in P. Let S_P̄ be the set of all clauses of S containing only predicate symbols not in P. Let W be S − S_P − S_P̄; write W = { C_i ∨ D_i | 1 ≤ i ≤ n }, where W_P is the list of clauses C_i formed by restricting each clause in W to just the predicates in P, and W_P̄ is the list of clauses D_i formed by restricting each clause in W to just the predicates not in P. Let X be the set of all clauses of the form D_i1 ∨ ··· ∨ D_im, where C_i1, ..., C_im are all the clauses of W_P in a minimally inconsistent set of clauses from S_P and W_P. Then S is inconsistent if and only if S_P̄ ∪ X is inconsistent.

Proof: If part. This proves the soundness of the rule. Assume S_P̄ ∪ X is inconsistent. For every element of X, false is a logical consequence of S_P and some C_i1, ..., C_im from W_P. Therefore D_i1 ∨ ··· ∨ D_im is a logical consequence of S_P and W (derived, for example, by imitating a ground resolution derivation of false from S_P and C_i1, ..., C_im using C_i1 ∨ D_i1, ..., C_im ∨ D_im instead of C_i1, ..., C_im). Because S_P̄ ⊆ S and every element of X is a logical consequence of S_P ∪ W and S_P ∪ W ⊆ S, if S_P̄ ∪ X is inconsistent, then so is S.

Only if part. This proves the completeness of the rule. Assume S is inconsistent. Then either S_P is inconsistent, S_P̄ is inconsistent, or the inconsistency of S depends at least partially on W.

Case 1. S_P is inconsistent. Then false ∈ X and S_P̄ ∪ X is inconsistent.

Case 2. S_P̄ is inconsistent. Then S_P̄ ∪ X is inconsistent.

Case 3. S_P and S_P̄ are consistent. Because they have disjoint sets of predicates, S_P ∪ S_P̄ is also consistent. Then there is a minimally inconsistent set of clauses S'_P ∪ S'_P̄ ∪ W' such that S'_P ⊆ S_P, S'_P̄ ⊆ S_P̄, and ∅ ⊂ W' ⊆ W. By completeness of A-ordered resolution [17,20,5,9], there exists an A-ordered resolution refutation of this set with predicates in P preceding predicates not in P in the A-ordering. Included in the refutation is a set of clauses X' containing no predicates in P, derived entirely from S'_P ∪ W'. S'_P̄ ∪ X' is clearly inconsistent. When we look at the A-ordered derivations, it is apparent that each element of X' is of the form D_i1 ∨ ··· ∨ D_im, derived from S'_P ∪ W' such that a subset of S'_P ∪ { C_i1, ..., C_im } is inconsistent, where { C_ij ∨ D_ij } ⊆ W'. If this subset is minimally inconsistent then D_i1 ∨ ··· ∨ D_im ∈ X. Otherwise D_i1 ∨ ··· ∨ D_im is still subsumed by (possibly identical to) an element of X. Because X contains each element of X' or an element that subsumes it, the inconsistency of S_P̄ ∪ X follows from the inconsistency of S'_P̄ ∪ X'. ∎

Definition. Let C_1, ..., C_m be nonempty clauses and D_1, ..., D_m be clauses such that each C_i ∨ D_i is in S, every predicate in C_i is in theory T, and no predicate in D_i is in theory T. Let σ_11, ..., σ_1n1, ..., σ_m1, ..., σ_mnm be substitutions such that { C_1σ_11, ..., C_1σ_1n1, ..., C_mσ_m1, ..., C_mσ_mnm } is minimally T-inconsistent. Then D_1σ_11 ∨ ··· ∨ D_1σ_1n1 ∨ ··· ∨ D_mσ_m1 ∨ ··· ∨ D_mσ_mnm is a total theory resolvent from S, using theory T.

Total theory resolution, plus ordinary resolution or some other semidecision procedure for first-order predicate calculus operating on clauses that do not contain predicates in T, is complete.

Example. Consider a theory of partial ordering ORD consisting of ¬(x < x) and (x < y) ∧ (y < z) ⊃ (x < z). A set of unit clauses in this theory is inconsistent if and only if it contains a chain of inequalities t_1 < ··· < t_n (n > 1) such that either t_1 = t_n or ¬(t_1 < t_n) is one of the clauses. Total theory resolvents would include F(b) from (x < b) ∨ F(x), and F(a) ∨ G(c) from (x < b) ∨ F(x), (b < y) ∨ G(y), and either c < a or ¬(a < c). Thus the types of reasoning that are employable in the decision procedure can be quite different from, and more effective (in its domain) than, resolution.
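For the unit-clause case of ORD just described, the decision procedure amounts to a search for such chains. A minimal sketch -- ours, restricted to ground terms with no unification:

# A sketch (ours) of a unit-clause ORD decision procedure: literals a<b
# are edges; the set is ORD-inconsistent iff the transitive closure
# contains some t<t, or a derived t1<tn contradicts an asserted not(t1<tn).
def ord_inconsistent(positives, negatives):
    """positives: set of pairs (a, b) for a<b; negatives: pairs for not(a<b)."""
    closure = set(positives)
    changed = True
    while changed:                       # naive transitive closure
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    if any(a == b for (a, b) in closure):
        return True                      # a cycle violates irreflexivity
    return any(pair in negatives for pair in closure)

assert ord_inconsistent({('a', 'b'), ('b', 'c')}, {('a', 'c')})
assert not ord_inconsistent({('a', 'b'), ('c', 'd')}, set())

Extending this to produce the substitutions and minimally inconsistent subsets that total theory resolution requires is, of course, where the real work lies.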
There are limitations to the use of total theory resolution. The requirement that the decision procedure for the theory be capable of determining inconsistency of any set of clauses using predicates in the theory is quite strict. Reasoning about sets of clauses is probably an unreasonable requirement for such purposes as using a knowledge representation system as a decision procedure for taxonomic information, since such systems are often weak in handling disjunction. This tends to limit total theory resolution's applicability to building in mathematical decision procedures that handle disjunction. Incomplete restrictions of total theory resolution could still be usefully employed. For example, it may be easy for a system to decide inconsistency of sets of single literals (unit clauses), as above for ORD and maybe also in the case of taxonomic reasoning (note that taxonomic hierarchies can be expressed in the monadic predicate calculus, for which there exists a decision procedure that could possibly be used in complete or incomplete theory resolution). Total theory resolution could then be used to resolve on only one literal from each clause.

Some care must be taken in deciding what theory T to build in, so that the decision procedure does not have to decide too much. The theory must be capable of deciding sets of clauses that are constructed by using any predicates appearing in T. Thus, if we try to use total theory resolution to build in the equality relation with equality substitutivity (i.e., x = y ⊃ (P(...x...) ⊃ P(...y...)) for each predicate P), the decision procedure will have to decide all of S.

There may be a large number of T-inconsistencies that do not result in useful T-resolvents. It would be a worthwhile refinement to monitor the finding of T-inconsistent sets of clauses to verify that the substitutions applied do not preclude future use of the T-resolvent. This is like applying a purity check in A-ordered resolution.

A-ordered resolution slightly resembles total theory resolution. It permits resolution operations only on the atoms of each clause that occur earliest in a fixed ordering of predicates (the A-ordering). The A-ordering could place predicates of T before all others. A-ordered resolution differs from total theory resolution in that it assumes resolution (or hyperresolution) is to be used as the inference operation. It is thus inflexible, since it does not permit T to be built in except by resolution. Furthermore, total theory resolution creates a resolvent only from an inconsistent set of clauses using predicates of T. A-ordered resolution is not so restrictive.

There is probably a useful relationship to be discovered between total theory resolution and the work on combining decision procedures [14,19]. So far we have discussed only building in a single decision procedure, though the procedure could be repeated as long as the sets of predicates do not overlap. It is likely that we would want an extension of theory resolution that permits the sets of predicates to overlap, at least in the case of the equality predicate. A difference between total theory resolution and the work on combining decision procedures is that the latter has been concerned primarily with decision procedures that do not have to instantiate their inputs, unlike our requirement of finding substitutions to make a set of clauses inconsistent.
III PARTIAL THEORY RESOLUTION

Partial theory resolution is a procedure for building in theories that requires a less complex decision procedure than total theory resolution; all it needs is a decision procedure that determines, for any pair of literals, a complete set of substitutions and conditions for the inconsistency of the literals.

Partial theory resolution will first be defined and proved complete for ground clauses. We will define a T-resolution operation that resolves on one or more literals from each input clause, like ordinary resolution without a separate factoring operation. We will then extend it to the general case, showing how T-resolvents can be computed by using only one literal at a time from each input clause.

There are two types of T-resolvents in partial theory resolution. If some set of literals of a clause is inconsistent with T, those literals can be removed from the clause to form a T-resolvent:

Definition. Let A be a nonempty ground clause and let C be a ground clause. Then C is a ground T-resolvent of A ∨ C if and only if T ⊢ ¬A.

If, with T assumed, a set of literals of one clause is inconsistent with a set of literals of another clause under certain conditions, then T-resolvents can be formed as the disjunction of the other literals of the clauses and negated conditions for the inconsistency:

Definition. Let A and B be nonempty ground clauses and let C and D be ground clauses. Then C ∨ D ∨ E, where E is a ground clause, is a ground T-resolvent of A ∨ C and B ∨ D if and only if T ⊢ ¬A ∨ ¬B ∨ E, but not T ⊢ ¬A_i ∨ E for any literal A_i in A or T ⊢ ¬B_j ∨ E for any literal B_j in B. E is called the residue of matching A and B.

The residue E is a negated condition for the inconsistency of A and B, because T ⊢ ¬A ∨ ¬B ∨ E is equivalent to T ⊢ ¬E ⊃ ((A ∧ B) ≡ false). The restriction that neither T ⊢ ¬A_i ∨ E nor T ⊢ ¬B_j ∨ E assures us that all the literals of both A and B are essential to the possible T-inconsistency of A ∧ B. T-resolution includes ordinary resolution, because it is always the case that T ⊢ ¬A ∨ ¬B ∨ E if B is ¬A.

The soundness of ground T-resolution is obvious. T-semantic trees will be used to prove its completeness.

Definition. Let S be a set of ground clauses with set of atoms { A_1, ..., A_k }. Then a semantic tree for S is a binary tree of height k such that for each node n with depth i (0 ≤ i < k), n has two child nodes n_1 and n_2 at level i + 1, the arc to n_1 is labeled by the literal A_{i+1}, and the arc to n_2 is labeled by the literal ¬A_{i+1}.

Each node n in a semantic tree provides a partial (or, in the case of terminal nodes, total) interpretation I_n for the atoms of S, assigning true to each atom A that labels an arc on the path from the root to n, and assigning false to each atom A where ¬A labels an arc on the path from the root to n.

Definition. Node n in a semantic tree for S falsifies clause C ∈ S if and only if C is false in interpretation I_n.

Definition. A T-semantic tree is a semantic tree from which all nodes representing T-inconsistent truth assignments are removed.

Definition. Node n in a T-semantic tree for S T-falsifies clause C ∈ S if and only if C is false in interpretation I_n, taking account of T.

Definition. Node n is a failure node in a T-semantic tree for a set of ground clauses S if and only if n T-falsifies a clause in S and no ancestor of n is a failure node.

Definition. Node n is an inference node in a T-semantic tree for a set of ground clauses S if and only if both of n's child nodes are failure nodes.
Note that although in a T-semantic tree each nonterminal node may have either one or two child nodes, an inference node will always have two. If node n has only a single child node n_1 and n_1 T-falsifies a clause (and thus might be a failure node), then n also T-falsifies the clause. Thus, a failure node will never be the single child node of its parent.

Theorem. A set of ground clauses S is T-inconsistent if and only if every branch of a T-semantic tree 𝒯 for S contains a failure node. Either the root node of 𝒯 is a failure node or there is at least one inference node in 𝒯.

Theorem. Let S be a T-inconsistent set of ground clauses and let 𝒯 be a T-semantic tree for S. Then if the root node of 𝒯 is not a failure node, there is a T-inconsistent set of ground clauses S' derivable from S by ground T-resolution such that 𝒯 is a T-semantic tree for S' but has fewer failure nodes for S' than for S.

Proof: Since S is T-inconsistent and the root node of 𝒯 is not a failure node, there must be at least one inference node n in 𝒯. Then n's child nodes, n_1 and n_2, are both failure nodes. Let the clause T-falsified at n_1 be A ∨ C, where the nonempty clause A consists of all the literals not already T-falsified at n. Let the clause T-falsified at n_2 be B ∨ D, where the nonempty clause B consists of all the literals not already T-falsified at n. Then n, or an ancestor of n if n is the single child node of its parent, is a failure node for S' = S ∪ { C ∨ D ∨ E }, where E is a clause consisting of the negations of the literals labeling arcs above node n that were used in T-falsifying A and B. Because T ⊢ ¬E ∧ A_i ⊃ ¬A and T ⊢ ¬E ∧ ¬A_i ⊃ ¬B, where A_i and ¬A_i label the arcs to n_1 and n_2 respectively, T ⊢ ¬A ∨ ¬B ∨ E, and C ∨ D ∨ E is a ground T-resolvent of A ∨ C and B ∨ D. 𝒯 contains fewer failure nodes for S' than for S because n (or an ancestor of n) is a failure node instead of n_1 and n_2. ∎

Theorem. Ground clause T-resolution is complete.

Proof: Let S be a T-inconsistent set of clauses. Let 𝒯 be a T-semantic tree for S. Then either the root node of 𝒯 is a failure node, in which case the empty clause is derivable by ground T-resolution from the clause T-falsified at the root, or 𝒯 contains an inference node, in which case completeness is assured by induction on the number of failure nodes, applying the previous theorem, which uses ground T-resolution to add a clause that makes the inference node (or an ancestor of it) a failure node. ∎

The ground T-resolution operation as defined above shares somewhat an undesirable feature of total theory resolution, i.e., it demands too much of the decision procedure -- in this case, requiring it to determine the possible inconsistency of two sets of literals (i.e., two clauses) instead of just two literals. This is easily remedied. In the definition of ground T-resolvents with two parent clauses, let A be A_1 ∨ ··· ∨ A_m and let B be B_1 ∨ ··· ∨ B_n. Then T ⊢ ¬A ∨ ¬B ∨ E is equivalent to all of T ⊢ ¬A_1 ∨ ¬B_1 ∨ E, ..., and T ⊢ ¬A_m ∨ ¬B_n ∨ E being true. Therefore, in computing T-resolvents, it is sufficient to determine possible T-inconsistency of single pairs of literals A_i and B_j with negated condition (residue) E_ij (i.e., T ⊢ ¬A_i ∨ ¬B_j ∨ E_ij) and form T-resolvents of A ∨ C and B ∨ D as C ∨ D ∨ E_11 ∨ ··· ∨ E_mn. E_ij is a ground T-match of literals A_i and B_j. Note that this multiple pairwise matching is a substitute for a separate factoring operation.

We now extend T-matches and T-resolvents to the general (nonground) case.
Definition. Let A and B be two literals. Then (E, σ), where E is a clause and σ is a substitution, is a T-match of A and B if and only if T ⊢ ¬Aσ ∨ ¬Bσ ∨ E, but not T ⊢ ¬Aσ ∨ E or T ⊢ ¬Bσ ∨ E.

A T-match of literals A and B specifies a substitution σ and condition ¬E that make A and B be T-inconsistent.

We will give examples based on two theories:
- A taxonomic hierarchy theory TAX, including Man(x) ⊃ Person(x).
- The partial-ordering theory ORD (defined previously).

Example. (false, { w ← John }) is a TAX-match of Man(John) and ¬Person(w).

Example. (a < c, ∅) is an ORD-match of a < b and b < c. But (¬(c < x) ∨ (a < x), ∅), (¬(c < x) ∨ ¬(x < y) ∨ (a < y), ∅), ... are also ORD-matches of a < b and b < c. The notion of minimal complete sets of T-matches is defined to exclude these additional T-matches.

Definition. Let M = { (E_1, σ_1), ..., (E_n, σ_n) } (n ≥ 0) be a set of T-matches of literals A and B. Then M is a complete set of T-matches of A and B if and only if for every T-match (E, σ) of A and B there is some T-match (E_i, σ_i) ∈ M and substitution θ such that σ = σ_iθ and T ⊢ E_iθ ⊃ E. M is a minimal complete set of T-matches of A and B if and only if M, but no proper subset of M, is a complete set of T-matches of A and B.

Example. { (a < c, ∅) } is a minimal complete set of ORD-matches of a < b and b < c. ORD-matches of the form (¬(c < x) ∨ (a < x), ∅), ... have the property that ORD ⊢ (a < c) ⊃ [¬(c < x) ∨ (a < x)], ..., and are therefore not in the minimal complete set of ORD-matches.

As in the ground case, single-parent and double-parent T-resolution operations are defined:

Definition. Let A be a literal, A ∨ C be a clause, and σ be a substitution such that T ⊢ ¬Aσ. Then Cσ is a T-resolvent of A ∨ C.

Example. Positive(1) is an ORD-resolvent of (x < 1) ∨ Positive(x).

More than one T-inconsistent literal can be removed from a clause by performing single-parent T-resolution repeatedly.

Definition. Let A and B be the nonempty clauses A_1 ∨ ··· ∨ A_m and B_1 ∨ ··· ∨ B_n, let A ∨ C and B ∨ D be clauses, and let (E_ij, σ_ij) be T-matches of A_i and B_j. Then Cσ ∨ Dσ ∨ Eσ is a T-resolvent of A ∨ C and B ∨ D, where σ is the most general combined substitution of σ_11, ..., σ_mn and E is E_11 ∨ ··· ∨ E_mn.

Example. ¬Robot(John) is a TAX-resolvent of Man(John) and ¬Person(w) ∨ ¬Robot(w).

Example. C(u) ∨ D(u) ∨ (u < c) is an ORD-resolvent of (u < v) ∨ C(u) and (v < c) ∨ D(u).

Further constraints on what T-resolvents can be inferred may be required for partial theory resolution to be really effective. For example, a < b and c < d, which have no terms in common, have ORD-resolvents ¬(b < c) ∨ (a < d), ¬(b < c) ∨ (b < d), ¬(b < c) ∨ (a < c), ¬(d < a) ∨ (c < b), etc. If the first of these is actually used in a refutation, there must exist matches b < c and ¬(a < d) for its literals. It would be preferable to T-resolve those literals with a < b and c < d (e.g., T-resolve a < b and b < c, deriving a < c; T-resolve a < c and c < d, deriving a < d; and resolve that with ¬(a < d)) instead of directly T-resolving a < b and c < d. We would impose the restriction that ORD-resolvents be derived only by resolving on pairs of literals that have a term in common.
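To illustrate the pairwise computation of double-parent T-resolvents, here is a sketch -- ours, limited to ground ORD literals, and honoring the term-in-common restriction just proposed:

# A sketch (ours) of double-parent T-resolution for ground ORD literals.
def ord_match(lit1, lit2):
    """Residue clause for two ground literals ('<', a, b); None when the
    literals share no term (the restriction proposed above)."""
    _, a, b = lit1
    _, c, d = lit2
    if b == c and d == a:
        return []                # a<b with b<a: inconsistent outright
    if b == c:
        return [('<', a, d)]     # inconsistent in the presence of not(a<d)
    if d == a:
        return [('<', c, b)]
    return None

def t_resolve(A, C, B, D):
    """Form C or D or E11 or ... or Emn from clauses (A or C) and (B or D)."""
    residues = []
    for a_i in A:
        for b_j in B:
            e_ij = ord_match(a_i, b_j)
            if e_ij is None:
                return None      # some pair admits no T-match
            residues += e_ij
    return C + D + residues

# a<b with C(a), and b<c with D(c), yield C(a) or D(c) or a<c:
print(t_resolve([('<', 'a', 'b')], [('C', 'a')],
                [('<', 'b', 'c')], [('D', 'c')]))
# [('C', 'a'), ('D', 'c'), ('<', 'a', 'c')]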
There are two previous resolution refinements that resemble partial theory resolution: Z-resolution and U-generalized resolution. Dixon's Z-resolution [6] is essentially partial theory resolution with the restriction that T must consist of a finite, deductively closed set of 2-clauses (clauses with length 2). This restriction does not permit inclusion of assertions like ¬Q(x) ∨ Q(f(x)), ¬(x < x), or (x < y) ∧ (y < z) ⊃ (x < z), but does permit efficient computation of T-resolvents (even allowing the possibility of compiling T to Lisp code and thence to machine code).

Harrison and Rubin's U-generalized resolution [8] is essentially partial theory resolution restricted to sets of clauses that have a unit or input refutation. They apply it to building in the equality relation, developing a procedure similar to Morris's E-resolution [12]. The restriction to sets of clauses having unit or input refutations eliminates the need for factoring and simplifies the procedure (only a single literal of each parent must be used to create a T-resolvent), but otherwise seriously limits its applicability. No effort was made in the definition of U-generalized resolution to limit T-resolution by using minimal complete sets of T-matches.

Partial theory resolution is a procedure with substantial generality and power. Thus, it is not surprising that many specialized reasoning procedures can be viewed as instances of partial theory resolution, perhaps with additional constraints governing which partial theory resolvents can be inferred:

Where T consists of the equality axioms, T-resolution operations include paramodulation [23] (e.g., P(b) ∨ C ∨ D can be inferred from P(a) ∨ C and a = b ∨ D) and E-resolution [12] (e.g., ¬(a = b) ∨ C ∨ D can be inferred from P(a) ∨ C and ¬P(b) ∨ D).

Where T consists of ordering axioms, including axioms that show how ordering is preserved (such as (x < y) ⊃ (P(x) ⊃ P(y)) and (x < y) ⊃ (x + z < y + z)), T-resolution operations include Manna and Waldinger's program-synthetic special relation substitution rule (e.g., P(b) ∨ C ∨ D can be inferred from P(a) ∨ C and (a < b) ∨ D) and relation matching rule [11] (e.g., ¬(a < b) ∨ C ∨ D can be inferred from P(a) ∨ C and ¬P(b) ∨ D), which are extensions of paramodulation and E-resolution. T-resolution with ordering axioms is also similar to Slagle and Norton's reasoning about partial orderings [21].

Bledsoe and Hines's variable elimination [1] is a very refined method for reasoning about inequalities that can be viewed partly as partial theory resolution for inequality with added constraints on partial theory resolution operations. The ORD-resolvent a < c of a < b and b < c is a variable-elimination-procedure chain resolvent only if b is a shielding term (a nonground term headed by an uninterpreted function symbol). The variable elimination rule allows inferring the ORD-resolvent (a < b) ∨ C from the clause (a < x) ∨ (x < b) ∨ C only if x does not occur in a, b, or C. The variable elimination rule more generally allows replacement of multiple literals a_i < x and x < b_j in a clause by literals a_i < b_j. This result is obtainable by partial theory resolution if we include the axiom ¬(x < min(x, y)) and a rule to transform min(a_i1, a_i2) < b_j to (a_i1 < b_j) ∨ (a_i2 < b_j).

IV CONCLUSION

Theory resolution is a set of complete procedures for incorporating decision procedures into resolution theorem proving in first-order predicate calculus. Theory resolution can greatly decrease the length of refutations and the size of the search space, for example, by hiding lengthy taxonomic derivations in single TAX-matches and by restricting use of ordering axioms in ORD-matches.

Total theory resolution can be used when there exists a decision procedure for the theory that is capable of determining inconsistency of any set of clauses using predicates of the theory.
This may be a realistic requirement in some mathematical theorem proving. For example, a decision procedure for Presburger arithmetic (integer addition and inequality) might be adapted to meet the requirements for total theory resolution.

Partial theory resolution requires much less of the decision procedure. It requires only that conditions and substitutions for inconsistency of a single pair of literals be determinable by the decision procedure for the theory. This makes it feasible, for example, to consider use of a knowledge representation system as the decision procedure for taxonomic information. Partial theory resolution is also a generalization of several other approaches to building in nonequational theories.

We are implementing and testing forms of theory resolution in the deduction system component of the KLAUS natural-language-understanding system [7,22].

ACKNOWLEDGMENTS

The author would like to thank Richard Fikes, Mabry Tyson, and Richard Waldinger for their helpful comments on an earlier draft of this paper.

REFERENCES

[1] Bledsoe, W.W. and L.M. Hines. Variable elimination and chaining in a resolution-based prover for inequalities. Proc. 5th Conf. on Automated Deduction, Les Arcs, France, July 1980, 70-87.
[2] Brachman, R.J., R.E. Fikes, and H.J. Levesque. KRYPTON: a functional approach to knowledge representation. To appear in IEEE Computer 16, 9 (September 1983).
[3] Brachman, R.J. and H.J. Levesque. Competence in knowledge representation. Proc. AAAI-82 Nat. Conf. on Artificial Intelligence, Pittsburgh, Pennsylvania, August 1982, 189-192.
[4] Brachman, R.J. and B.C. Smith (eds). Special Issue on Knowledge Representation. SIGART Newsletter 70 (February 1980).
[5] Chang, C.L. and R.C.T. Lee. Symbolic Logic and Mechanical Theorem Proving. Academic Press, New York, New York, 1973.
[6] Dixon, J.K. Z-resolution: theorem-proving with compiled axioms. J. ACM 20, 1 (January 1973), 127-147.
[7] Haas, N. and G.G. Hendrix. An approach to acquiring and applying knowledge. Proc. AAAI-80 Nat. Conf. on Artificial Intelligence, Stanford, California, August 1980, 235-239.
[8] Harrison, M.C. and N. Rubin. Another generalization of resolution. J. ACM 25, 3 (July 1978), 341-351.
[9] Loveland, D.W. Automated Theorem Proving: A Logical Basis. North-Holland, Amsterdam, The Netherlands, 1978.
[10] Manna, Z. and R. Waldinger. A deductive approach to program synthesis. ACM Transactions on Programming Languages and Systems 2, 1 (January 1980), 90-121.
[11] Manna, Z. and R. Waldinger. Special relations in program-synthetic deduction (a summary). To appear in J. ACM.
[12] Morris, J.B. E-resolution: extension of resolution to include the equality relation. Proc. Int. Joint Conf. on Artificial Intelligence, Washington, D.C., May 1969, 287-294.
[13] Murray, N.V. Completely non-clausal theorem proving. Artificial Intelligence 18, 1 (January 1982), 67-85.
[14] Nelson, G. and D.C. Oppen. Simplification by cooperating decision procedures. ACM Trans. Program. Lang. Syst. 1, 2 (October 1979), 245-257.
[15] Plotkin, G.D. Building-in equational theories. In Meltzer, B. and Michie, D. (eds.), Machine Intelligence 7, Halsted Press, 1972, pp. 73-90.
[16] Raulefs, P., J. Siekmann, P. Szabo, and E. Unvericht. A short survey on the state of the art in matching and unification problems. SIGSAM Bulletin 13, 2 (May 1979), 14-20.
[17] Reynolds, J. Unpublished seminar notes. Stanford University, Palo Alto, California, Fall 1965.
[18] Rich, C.
Knowledge representation languages and predicate calculus: how to have your cake and eat it too. Proc. AAAI-82 Nat. Conf. on Artificial Intelligence, Pittsburgh, Pennsylvania, August 1982, 193-196.
[19] Shostak, R.E. Deciding combinations of theories. Proc. Sixth Conf. on Automated Deduction, New York, New York, June 1982, 209-222.
[20] Slagle, J.R. Automatic theorem proving with renamable and semantic resolution. J. ACM 14, 4 (October 1967), 687-697.
[21] Slagle, J.R. and L.M. Norton. Experiments with an automatic theorem-prover having partial ordering inference rules. Communications of the ACM 16, 11 (November 1973), 682-688.
[22] Stickel, M.E. A nonclausal connection-graph resolution theorem-proving program. Proc. AAAI-82 Nat. Conf. on Artificial Intelligence, Pittsburgh, Pennsylvania, August 1982, 229-233.
[23] Wos, L. and G.A. Robinson. Paramodulation and set of support. Proc. Symp. on Automatic Demonstration, Versailles, France, 1968, Lecture Notes in Mathematics 125, Springer-Verlag, Berlin, West Germany (1970), 276-310.
1983
34
227
Analyzing the Roles of Descriptions and Actions in Open Systems

Carl Hewitt
Peter de Jong
The Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139

Abstract

This paper analyzes relationships between the roles of descriptions and actions in large scale, open ended, geographically distributed, concurrent systems. Rather than attempt to deal with the complexities and ambiguities of currently implemented descriptive languages, we concentrate our analysis on what can be expressed in the underlying frameworks such as the lambda calculus and first order logic. By this means we conclude that descriptions and actions complement one another, neither being sufficient unto itself. This paper provides a basis to begin the analysis of the very subtle relationships that hold between descriptions and actions in Open Systems.

1. Open Systems

Problems and opportunities associated with describing and taking action in "open systems" will be increasingly recognized as a central line of computer system development. In the future many computer applications will be based on communication between systems which will have been developed separately and independently. These systems are open-ended and incremental -- undergoing continual evolution. The only thing that the components of an open system hold in common is the ability to communicate with each other.

In this paper we study descriptions and actions in Open Systems from the viewpoint of Message Passing Semantics, a research programme to explore issues in the semantics of communication in parallel systems.

In an open system it becomes very difficult to determine what objects exist at any point in time. For example a query might never finish looking for possible answers. If a system is asked to find all the current mail addresses of people who have graduated from MIT since 1930, it might have a hard time answering. It can give all the mail addresses it has found so far, but there is no guarantee that another one can't be found by more diligent search. These examples illustrate how the "closed world assumption" is intrinsically contrary to the nature of Open Systems. We understand the "closed world assumption" to be that the information about the world being modeled is complete in the sense that all and only the relationships that can possibly hold among objects are those implied by the local information at hand (cf. [Reiter 82]). Systems based on the "closed world assumption" typically assume that they can find all the instances of a concept that exist by searching their local storage. In contrast we desire that systems be accountable for having evidence for their beliefs and be explicitly aware of the limits of their knowledge. At first glance it might seem that the closed world assumption, almost universal in the A.I. and database literature, is smart because it provides a ready default answer for any query. Unfortunately the default answers provided become less realistic as the Open System increases in size.

[This paper describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Major support for this work was provided by the System Development Foundation.]

2. Message Passing Semantics

2.1. Descriptions of Behavior

Message Passing Semantics builds on Actor Theory as a foundation. Actor Theory describes important aspects of parallelism and serialization from the viewpoint of message passing objects. It goes beyond the sequential coroutine message passing developed in systems like Simula and SmallTalk. An actor system is composed of abstract objects called actors. Actors are defined by their behavior when they accept communications. When a communication is accepted an actor can perform the following kinds of actions concurrently: make simple decisions (such as whether some actor it received in the communication is the same as one of its acquaintances), create new actors, transmit more communications to its own acquaintances as well as the acquaintances of the communication accepted, and change its behavior for the next message accepted (i.e. change its local state), subject to the constraints of certain laws [Hewitt, Baker 77].

Below we describe the behavior of a simple actor which is a shared checking account which we will call ACCOUNT43. One kind of description might be a partial description of what happened when the actor ACCOUNT43 with a balance of $10 accepted a request to make a deposit with amount $2 for customer c2, and as a result created the actor $12, sent a Completion report to c2, and became an account with balance $12:
It goes beyond the sequential coroutine message passing devcioped in systems like Simula and SmallTalk. An actor svstem is composed of abstract objects called acton. Actors we defined bv their behavior when they accept communications. When a communication is accepted an actor can perform the following kinds of actions concurrently: make sirnple decisions (such as whether some actor it received in the communication is the same as one of its acquaintances). create new actors, transmit more communications to its own acquaintances as well as the acquaintances of the communication accepted, and change its behavior for the next message accepted (i.e. change its local state) subject to the constraints of certain laws [Hewitt, Baker 771. Below we describe the behavior of a simple actor which is a shared checking account which we will call ACCOUNT43. One kind of description might be a partiul description of what happened when the actor ACCOUNT43 with a balance of $10 accepted a request to make a deposit with amount $2 for customer CZ, and as a result created the actor $12, sent a CornpIe t ion report to c2, and became an account with balance $12: From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. ACCOUNT43 I (an Account (with balance $10) (with behavior (2 Behavior ( . . . . . . . . I *ACCEPTED* . ” (a Request (with message (2 Deposit (with amount $2))) (with customer ~2)) *BECAME* *REPLIED-TO* c2+(a Completion) +G$ . . . . Figure 2-1: A Happening Later in the paper we will describe how the behavior of an actor is written in the language Act2. 2.2. Shared Resources The modeling of shared resources is fimdamental to Open Systems. Actor systems aim to provide a clean way to implcmcnt elfccts (not “side-effects” a pcjorntive term that has been used as a kind of curse by proponents of purely applicative programming in the Iambda calculus). By an efJ&f we mean a local state ch:tnge in a shared actor which causes a change in behavior that is visible to other actors. For e~rmple sending a deposit to an account sh,tred by muhiple users should have the effect of increasing the balance in the account. ACCOUNT43 is an example of the kind of shared resource which is important in the conceptual modeling of Open Systems. Such shared resources should be suitable for use by a growing collection of users. To a given user, a shared account will exhibit indeterminacy in the balance depending on the deposits and withdrawals made by other users. The indeterminacy arises from the indeterminacy in the arrival order of messages at the shared account. A common bug in attempting to model computation in open systems is to <assume that an agent sending messages can determine the order in which they will be seen by the recipient. It is important to realize that the above assumption is contrary to the nature of Open Systems. Also implementing shared resources inherently involves being open to outside communications, even those put forth by parties which joined the Open System after the request being considered was received. 2.3. Multiple-Inheritance A description, such and (2 SharedAccount), can inherit attributes and behavior from other descriptions. In this example, a shared account inherits from bolh the description of an Account unJthe description of a Possess ion. 
(an Account (with balance (an Amount))) (a SharedAccount) Figure 2-2: h/lultiple Inheritance Dealing with the issues raised by the possibility of being a specialization of more than one description has become known as the “Multiple Inheritance Problem”. A number of approaches have been developed in the last few years including the following: [Weinreb, Moon 811, [Curry, Baer. Lipkie, Lee 821, [t3orning, Ingails 821, [Bobrow, Stcfik 821 and [Borgida, Mylopoulos, Wong 821. Our approach dirfers in that it builds on the theory of an underlying description system [Attardi, Simi 811. This theory axiomatizes the relationship between the multiple inherited descriptions. This theory also maintains the parallelism inherent in actor theory, and thus is suitable for describing highly distributed open systems. 2.4. Change The degree of parallelism is partially determined by whether the actor can change its local state or not. Actors which can change their local state are called serialized actors. A serialized actor accepts only one message at a time for processing; it will not accept another message for processing until it has dealt with the one which it has received. A communication received by a serialized actor is processed in order of arrival: although, not necessarily in the order sent. Actors which can never change their local state are called unseriulized actors. These actors are able to process arbitrarily many messages at the same time. Actors such as the square root function and the text of Lincoln’s Gettysburg Address are unserialized. Open systems do not have welt defined global states. Effects are implemented by an actor changing its own local state using a become command [Hewitt, Atkudi, Lieberman 791. There are no assignment commands. Our conceptual model of change contrasts with the usual computer science notion in which change is modeled by updating the state components of a universal global state (Milne, Strachey 761. The absence of a well defined 163 global state is a fundamental diff’crcncc between the actor model and claGca1 sequential models of computation (vi?. [‘furing 371, [Church 411, etc.) Actor systems can perjbrm ~~ontietermi~fistic computations jbr which there is no equivalent nondcterministic ‘Tirrinq AI&line [Clinger 811. ‘I he noncquivalcnce points up the limitations of attempting to model parallel systems as nondeterministic sequential machines [Milnc. Strachey 761. This is not to say that actor systems can implement functions which are not recursively computable (like solving the Halting Problem). Instead the point is that recursive functions do not provide an adequate model of parallelism. I.E. an Open System cannot be adequately modeled as a recursive function which maps global states to global states because at any given point in time an Open System does not in general have a well defined global state. Will Clinger has developed an elegant mathematical theory [Clinger 811 (called Actor Theory) which accounts for capabilities of actor systems which go beyond those of nondeterministic Turing Machines. We claim that nondeterministic Turing machines are an unsatisfactory model of the compututional capabilities of large-scale open systems. 2.5. Concurrency Concurrency in Open Systems stems from the parallel operation of an incrementally growing number of multiple, independent, communicating agents. Sites can join an Open system in the cdurse of its normal operation--sometimes even affecting the results of computations initiated befire they joined. 
Actor TheooF has been designed to accurately model the computaiionu! properties of Open Systems. It is a consequence of the Actor Model that purely functional programming languages based on the lambda calculus cannot implement shared accounts in Open Systems. The continuation technique promoted by Strachey and Milne [Milne, Sirachey 761 for simulating some kinds of parallelism in the lambda calculus does not apply to Open Systems. ‘The lambda calculus simulation is sequential whereas Open Systems are inherently parallel. Concurrency in the lambda calculus stems from the concurrent reduction/evaluation of various parts of a single lambda expression with an environment which is fixed when the lambda expression is created. In Open Systems independent agents can incrementally and independently spawn ongoing computations so the evaluation of an expression can be affected by actions which were initiated after the evaluation of the expression is begun. 2.6. Truth ml behavior Message Passing Semantics takes a different perspective on the meaning of a sentence from that of truth-theoretic semantics. In truth-theoretic semantics [T‘arski 441, the meaning of a sentence is determined by the models which make it true. For example the conjunction of two sentences is true in a model exactly in which both of its conjuncts are true in the model. In contrast Message Passing Semantics takes the meaning of a message to be the @ect it has on the subsequent behavior of the system. In other words the meaning of a message is determined by how it affects the recipients. F&h partial meaning of a message is constructed by a recipient in terms of how it is processed (cf. [Reddy 791). The meaning of a message is open ended and unfolds indefinitely far into the future as other recipients process the message. At a deep level. understanding always involves categorization. which is a function of interactional (rather than inherent) properties and the perspective of individual viewpoints. Message Passing Semantics differs radically from truth-theoretic semantics which assumes that it is possible to give an account of truth in itself. free of interactional issues, and that the theory of meaning will be based on such a theory of truth [Lakoff, Johnson 801. 3. Description and Action 3.1. Limitntions of Descriptions The distinction between doing and describing is different from the usual distinction made in the Artificial Inteliigence literature [McCarthy, Hayes 691 [Hayes 771 between the eprstonological adequacy of a system (its accuracy with respect to truth-theoretic semantics ITarski 441) and the heuristic adequacy (the @ziencv of its inferential proccdurcs in proving theorems). Thus the distinction between epislemological adcc~aacy and heuristic adequacy is founded on the basis of truth-theoretic semantics. Our simple example can help clarify the distinction. An cpistcnlologically adequate theory of financial accounts gives an accurate description of the rules that govern them. An heuristically adequate system can derive theorems in the theory of financial accounts fast enough to answer queries. Both kinds of adequacy are concerned with the description of financial accounts: the former with its accuracy; the later with the efficiency with which the description can be used to answer questions. Neither kind of adequacy actually accomplishes creating a shared account with $10 in it so a growing set of geographically dispersed users can make deposits and withdrawals. 
We could describe a certain kind of account with an axiom such as the following:

    ((d > 0) implies
     (replies-after-acceptance
       (a SharedAccount (with balance b))
       (a Deposit (with amount d))
       (a CompletionReport (with ResultingBalance (b + d)))))

where the variables b and d are universally quantified and the predicate replies-after-acceptance takes three arguments which are a description of the local state of an actor when it accepts a request message, a description of the request accepted, and a description of the reply to the request, respectively. Now it should be clear that hypothesizing that ACCOUNT43 satisfies the following description:

    (a SharedAccount
       (with InitialBalance $20)
       (with Place BankOfLondon)
       (with Time "Midnight, December 31, 1987"))

does not by itself actually create such an account. Indeed the above assertion is ambiguous between the following interpretations:

- Hypothesis Interpretation: We have just discovered that ACCOUNT43 is already such an account and want to explicitly record this in the data base.

- Goal Interpretation: We want to declare our goal of having ACCOUNT43 be such an account and we will devote some effort to establishing the goal.

To actually create the account requires providing the money for the initial deposit in the account. Making assertions by itself will not suffice. Similarly asserting that ACCOUNT43 is not a SharedAccount does not in itself destroy an account which is located elsewhere.

3.2. Taking Action

Knowing that something is true and taking action to make it come true are two different things. Both are important and they should not be confused. In this section we discuss how actors can take action to supplement the previous section's discussion of how they can manipulate descriptions. Actions such as creating a new shared account with balance $12 and owner Ken can be performed by evaluating an expression such as the one below in the programming language Act2 [Theriault 83], [Lieberman 81a]:

    (new SharedAccount (with Balance $12) (with Owners {Ken}))

Below we present part of the implementation of SharedAccount:

    (Define (new SharedAccount (with balance =b) (with owners =s))
      (create
        (is-request (a balance) &
          (reply (a SharedAccount (with balance b))))
        (is-request (a deposit (with amount =a)) &
          (become (new SharedAccount (with balance (+ b a))))
          (reply (a deposit-receipt (with amount a))))
        ...))

At this point we would like to take note of several unusual aspects of the above implementation. The Act2 language is an open system, i.e. each command and expression parses and evaluates itself. New commands and expressions can easily be added to the language. They are also actors, so they can execute in parallel. For example, in the communication handler for Deposit requests above, the following two commands are executed concurrently:

    (become (new SharedAccount (with balance (+ b a))))
    (reply (a deposit-receipt (with amount a)))

The principle of maximizing concurrency is fundamental to the design of actor programming languages. It accounts for many of the differences with conventional languages based on communicating sequential processes.
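For readers of conventional languages, the deposit handler above can be rendered against the small Actor class sketched in section 2.4 (again our own illustrative analogue, with hypothetical names; note one inaccuracy: in this Python rendering the become and the reply execute one after the other, whereas Act2 executes them concurrently):

    # (uses the Actor class and the queue import from the sketch in section 2.4)
    def shared_account(balance, owners):
        """Rough analogue of (new SharedAccount (with balance =b) (with owners =s))."""
        def behavior(self, message):
            if message[0] == 'balance':
                message[1].put(balance)        # reply with the current balance
            elif message[0] == 'deposit':
                amount, customer = message[1], message[2]
                self.become(shared_account(balance + amount, owners))  # become
                customer.put(('deposit-receipt', amount))              # reply
        return behavior

    account = Actor(shared_account(12, {'Ken'}))
    receipt = queue.Queue()
    account.send(('deposit', 5, receipt))
    print(receipt.get())                       # ('deposit-receipt', 5)

Because the account is a serialized actor, any number of geographically dispersed senders can deposit concurrently without corrupting the balance: their messages are simply processed one at a time in order of arrival.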
4. Related Work

The object-oriented programming languages (e.g. [Birtwistle, Dahl, Myhrhaug, Nygaard 73], [Liskov, Snyder, Atkinson, Schaffert 77], [Shaw, Wulf, London 77], and [Ichbiah 80]) are built out of objects (sometimes called "data abstractions") which are completely separate from the procedures in the language. Similarly the lambda calculus programming languages (e.g. [McCarthy 62], [Landin 65], [Friedman, Wise 76], [Backus 78], and [Steele, Sussman 78]) are built on functions and data structures (viz. lists, arrays, etc.) which are separate. SmallTalk [Ingalls 78] is somewhat a special case since it simplified Simula by leaving out the procedures entirely, i.e. it has only classes. The Simula-like languages provide effective support for coroutines but not for concurrency. In contrast the Actor Model is designed to aid in conceptual modeling of shared objects in highly parallel open systems. Actors serve to provide a unified conceptual basis for functions, data structures, classes, suspensions, futures, procedure invocations, exception handlers, objects, procedures, processes, etc. in all of the above programming languages [Baker, Hewitt 77], [Lieberman 81b], [Hewitt, Attardi, Lieberman 79]. For example sending a request communication generalizes the traditional procedure invocation mechanism which requires that control return to the point of invocation. A request communication contains the mail address of a customer to which the response to the request should be sent as well as the message specifying the task to be performed. In this way, exception handlers [Liskov, Snyder, Atkinson, Schaffert 77], [Ichbiah 80] and co-routines [Birtwistle, Dahl, Myhrhaug, Nygaard 73] are conveniently unified with other more general control structures. The Actor Model unifies the conceptual basis of the lambda calculus and the object-oriented schools of programming languages -- being mathematically defined, it is independent of all programming languages.

Prolog based systems such as Intermission [Kahn 82] and Concurrent Prolog [Shapiro 83] are pragmatically useful and interesting experiments in their own right. Unfortunately, neither as yet has a well developed mathematical semantics which would make it possible to directly analyze them as we have done for the lambda calculus and first order logic. To the extent that Intermission and Concurrent Prolog are based on first order logic and the lambda calculus, they inherit limitations discussed in this paper.

5. Conclusion

The fundamental thesis of this paper is that knowing that something is the case and taking action to make it the case are two different things which should not be confused. Asserting a proposition P can mean that we have established that P is the case and want to explicitly record this in the data base. Or it can mean that we want to declare that P is our goal and we will devote some effort to planning how to do it. Neither of the above possibilities inherently involves taking any action. The capability to take action as well as to describe the world is very important. The relationships between description and action are quite subtle.

We claim that actors (unlike lambda expressions, logical implications, etc.) are the universal objects of concurrent systems and that they can serve as an efficient interface between the hardware and software. Actors provide an absolute conceptual interface between the software and hardware of parallel computer systems. The function of the hardware is to efficiently implement the primitive actors and the ability to communicate in parallel. Software systems in turn can be implemented in terms of actors completely independently of the hardware configuration. The actor concept itself is defined mathematically and is thus logically independent of all programming languages and hardware architectures.
A system consisting of multiple processors -- called the APIARY -- is being developed to use the inherent parallelism of actor systems to increase the efficiency of computation [Hewitt 80].

Message Passing Semantics deals coherently with both doing and describing whereas truth-theoretic semantics only addresses some of the issues of describing. We claim that it is impossible to implement shared resources for Open Systems using description systems such as first order logic and the lambda calculus because they lack the necessary communication capabilities. Description languages based on first order logic and/or the lambda calculus have been designed to express properties but are incapable of taking action. On the other hand procedural languages (such as current dialects of Lisp and Ada) have been designed to efficiently take action but they suffer from a lack of descriptive capabilities. We need good ways to integrate the roles of descriptions and actions in our systems. Some of the ideas in this paper have been applied to the analysis of the relationship between the roles of descriptions and actions in organizational work [Barber, de Jong, Hewitt 83].

6. Acknowledgments

Much of the work underlying our ideas was conducted by members of the Message Passing Semantics group at MIT. We especially would like to thank Jon Amsterdam, Jerry Barber, Henry Lieberman, and Dan Theriault. Extensive discussions with our collaborators Elihu Gerson and Leigh Star have been of fundamental importance in improving the organization and content of this paper. A chat with Marvin Minsky and Danny Hillis helped to clarify the nature of the differences between actions and descriptions. The development of the Actor Model has benefited from extensive interaction with the work of Jack Dennis, Bob Filman, Dan Friedman, Bert Halstead, Tony Hoare, Gilles Kahn, Dave MacQueen, Robin Milner, Gordon Plotkin, Steve Ward, and David Wise over the past half decade. The work on Simula and its successors SmallTalk, CLU, Alphard, etc. has profoundly influenced our work. We are particularly grateful to Alan Kay, Peter Deutsch, Laura Gould, and the other members of the Learning Research group for interactions and useful suggestions.

This paper describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Major support for the research reported in this paper was provided by the System Development Foundation. Major support for other related work in the Artificial Intelligence Laboratory is provided, in part, by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N0014-80-C-0505. We would like to thank Charles Smith, Mike Brady, and Patrick Winston for their support and encouragement.

References

[Attardi, Simi 81] Attardi, G. and Simi, M. Semantics of Inheritance and Attributions in the Description System Omega. Proceedings of IJCAI 81, IJCAI, Vancouver, B.C., Canada, August, 1981.

[Backus 78] Backus, J. Can Programming be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs. Communications of the ACM 21, 8 (August 1978), 613-641.

[Baker, Hewitt 77] Baker, H. and Hewitt, C. The Incremental Garbage Collection of Processes. Conference Record of the Conference on AI and Programming Languages, ACM, Rochester, New York, August, 1977.

[Barber, de Jong, Hewitt 83] Barber, G. R., de Jong, S. P., and Hewitt, C. Semantic Support for Work in Organizations. Proceedings of IFIP-83, IFIP, Sept., 1983.
[Birtwistle, Dahl, Myhrhaug, Nygaard 73] Birtwistle, G. M., Dahl, O.-J., Myhrhaug, B., Nygaard, K. Simula Begin. Van Nostrand Reinhold, New York, 1973.

[Bobrow, Stefik 82] Bobrow, D. G., Stefik, M. J. Loops: An Object Oriented Programming System for Interlisp. Xerox PARC, 1982.

[Borgida, Mylopoulos, Wong 82] Borgida, A., Mylopoulos, J. L., Wong, H. K. T. Generalization as a Basis for Software Specification. Perspectives on Conceptual Modeling, Springer-Verlag, 1982.

[Borning, Ingalls 82] Borning, A. H., Ingalls, D. H. Multiple Inheritance in Smalltalk-80. Proceedings of the National Conference on Artificial Intelligence, AAAI, August, 1982.

[Church 41] Church, A. The Calculi of Lambda-Conversion. In Annals of Mathematics Studies Number 6, Princeton University Press, 1941.

[Clinger 81] Clinger, W. D. Foundations of Actor Semantics. AI-TR-633, MIT Artificial Intelligence Laboratory, May, 1981.

[Curry, Baer, Lipkie, Lee 82] Curry, G., Baer, L., Lipkie, D., Lee, B. Traits: An Approach to Multiple-Inheritance Subclassing. Conference on Office Information Systems, ACM SIGOA, June, 1982.

[Friedman, Wise 76] Friedman, D. P., Wise, D. S. The Impact of Applicative Programming on Multiprocessing. Proceedings of the International Conference on Parallel Processing, ACM, 1976, pp. 263-272.

[Hayes 77] Hayes, P. J. In Defense of Logic. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Ma., 1977, pp. 559-565.

[Hewitt 80] Hewitt, C. E. The Apiary Network Architecture for Knowledgeable Systems. Conference Record of the 1980 Lisp Conference, Stanford University, Stanford, California, August, 1980.

[Hewitt, Attardi, Lieberman 79] Hewitt, C., Attardi, G., and Lieberman, H. Specifying and Proving Properties of Guardians for Distributed Systems. Proceedings of the Conference on Semantics of Concurrent Computation, INRIA, Evian, France, July, 1979.

[Hewitt, Baker 77] Hewitt, C. and Baker, H. Laws for Communicating Parallel Processes. 1977 IFIP Congress Proceedings, IFIP, 1977.

[Ichbiah 80] Ichbiah, J. D. Reference Manual for the Ada Programming Language. November 1980 edition, United States Department of Defense, 1980.

[Ingalls 78] Ingalls, D. The SmallTalk-76 Programming System Design and Implementation. Conference Record of the Fifth Annual ACM Symposium on Principles of Programming Languages, ACM, Tucson, Arizona, January, 1978.

[Kahn 82] Kahn, K. M. Intermission - Actors in Prolog. In Logic Programming, Academic Press, 1982.

[Lakoff, Johnson 80] Lakoff, G., Johnson, M. Metaphors We Live By. The University of Chicago Press, 1980.

[Landin 65] Landin, P. A Correspondence Between ALGOL 60 and Church's Lambda Notation. Communications of the ACM 8, 2 (February 1965).

[Lieberman 81a] Lieberman, H. A Preview of Act 1. A.I. Memo 625, MIT Artificial Intelligence Laboratory, 1981.

[Lieberman 81b] Lieberman, H. Thinking About Lots of Things At Once Without Getting Confused: Parallelism in Act 1. A.I. Memo 626, MIT Artificial Intelligence Laboratory, 1981.

[Liskov, Snyder, Atkinson, Schaffert 77] Liskov, B., Snyder, A., Atkinson, R., and Schaffert, C. Abstraction Mechanisms in CLU. Communications of the ACM 20, 8 (August 1977).

[McCarthy 62] McCarthy, John. LISP 1.5 Programmer's Manual. The MIT Press, Cambridge, Ma., 1962.

[McCarthy, Hayes 69] McCarthy, J. and Hayes, P. J. Some Philosophical Problems from the Standpoint of Artificial Intelligence. In Machine Intelligence 4, Edinburgh University Press, 1969, pp. 463-502.

[Milne, Strachey 76] Milne, R. and Strachey, C.
A Theory of Programming Languages. John Wiley & Sons, New York, 1976.

[Reddy 79] Reddy, M. The Conduit Metaphor. In Metaphor and Thought, Ortony, A., Ed., Cambridge University Press, 1979.

[Reiter 82] Reiter, R. Towards a Logical Reconstruction of Relational Database Theory. Perspectives on Conceptual Modeling, Springer-Verlag, 1982.

[Shapiro 83] Shapiro, E. A Subset of Concurrent Prolog and Its Interpreter. Technical Report TR-003, ICOT, January, 1983.

[Shaw, Wulf, London 77] Shaw, M., Wulf, W. A., London, R. L. Abstraction and Verification in Alphard: Defining and Specifying Iteration and Generators. Communications of the ACM 20, 8 (August 1977).

[Steele, Sussman 78] Steele, G. L. Jr., Sussman, G. J. The Revised Report on SCHEME: A Dialect of LISP. AI Memo 452, MIT, January, 1978.

[Tarski 44] Tarski, A. The Semantic Conception of Truth. Philosophy and Phenomenological Research 4 (1944), 341-375.

[Theriault 83] Theriault, D. Issues in the Design and Implementation of Act2. Master Th., Massachusetts Institute of Technology, 1983.

[Turing 37] Turing, A. M. Computability and λ-definability. Journal of Symbolic Logic 2 (1937), 153-163.

[Weinreb, Moon 81] Weinreb, D. and Moon, D. LISP Machine Manual. MIT, 1981.
THE DECOMPOSITION OF A LARGE DOMAIN: REASONING ABOUT MACHINES

Craig Stanfill
Department of Computer Science
University of Maryland
College Park, Maryland 20742

ABSTRACT

The world of machines is divided into a hierarchy of seven sub-worlds, ranging from algebra to causality. Separate representations and experts are constructed for each sub-world; these experts are then integrated into an expert system. The result is Mack, a system which produces qualitative models of simple machines from purely geometric representations.

1. Introduction

Machine World is a universe consisting of simple mechanical devices. These devices operate according to the ideal gas laws plus Newton's laws of motion. Thus, we will be studying a universe in which the pressure of gasses causes things to move, and in which motion causes pressures to change. We will examine the process of understanding what a machine does. Specifically, we will start with a geometric description of the parts of a machine, and produce a qualitative description of how it works. In this work, we will pay special attention to the division of Machine World into smaller sub-worlds as a means of controlling the complexity of the system.

2. Decomposition of Machine World

Mack's representations use 53 different primitives, support 201 different queries, and require 750 different rules. In order to control the complexity of the knowledge base, we decompose it into seven disjoint domains: Algebra, Linear Geometry, Solid Geometry, Shape, Mechanics, Pneumatics, and Qualitative Relations. Each domain is defined by a set of primitives and a set of queries, and is captured by a set of rules for answering these queries. These domains are related by a semantic hierarchy in which objects from high-level domains are defined in terms of objects from low-level domains. The experts for the high-level domains access the low-level domains through queries.

This research was funded by NASA's Goddard Spaceflight Center under contract NAS5-25764 and by NSF under grant NSFD-MC5-80-18294. Their support is gratefully acknowledged.

[Figure: The Semantic Hierarchy, relating the Pneumatics, Shape, Solid Geometry, Mechanics, Linear Geometry, Qualitative Relations, and Algebra domains]

1. Algebra World represents numbers, algebraic expressions, and inequalities. The expert for this world is essentially a micro-MACSYMA [Bogen et al 75].

2. Linear Geometry World is a world of lines, points, and vectors represented by 3-tuples of algebraic expressions.

3. Solid Geometry World is a world of primitive solids and surfaces defined in terms of points, vectors, and algebraic expressions. For example, a cylinder is defined by two points and a radius.

4. Shape World represents shapes as the sum, difference, and intersection of primitive shapes. For example, a cylinder with a hole in it is represented as the difference of two cylinders.

5. Mechanics World represents accelerations, forces, and motions in terms of vectors and expressions. Several researchers ([Hayes 79], [Novak 76], [de Kleer 75]) have studied the representations and reasoning techniques which go into this domain.

6. Pneumatics World represents chambers (areas which contain gas) in terms of shapes and pressures.

7. Qualitative Relations World represents cause and effect as qualitative relations between variables.
For example, the tendency of the gas in a chamber to cause a piston to move is represented by a relation between the pressure of the gas and the acceleration of the piston. The formulation which is used by Mack is derived from Ken Forbus's Qualitative Process theory [Forbus 82] and other qualitative reasoning schemes ([Kuipers 82], [de Kleer 75]).

It is worth noting that many of the domains mentioned above have already been studied. These earlier systems are superior to Mack within their own fields; the problem which Mack solved was integrating these representations. This ability to build on existing representations is one advantage of Mack's domain oriented architecture.

Another advantage of this architecture is that it provides a good guide for the construction of the experts. For example, when we decide to create an Algebra expert, we have a good idea of what needs to go into it: we will need primitives for representing expressions, procedures for simplifying them, and some means of comparing them. Thus, we can determine what primitives we will need, what queries we will need to support, and we know when we have written enough rules.

Another advantage of this architecture is that it allows us to experiment with representations. There are many cases where, in the course of system development, we find that our representations are inadequate. Because the internal structure of the representations is hidden from the other domains, we need only modify the faulty domain.

Finally, by decomposing a domain we learn about its structure. For example, we decomposed Machine World into these seven sub-domains, and found that a simple relation - the semantic hierarchy - described its structure. We have also examined (in work not fully reported here) the 'fine structure' of some of these domains. For example, Mathematics is composed of a series of 'layers': Arithmetic, Algebra, Trigonometry, and so forth; each of these layers is an extension of the one that lies under it. We have also probed the role of foundational knowledge, such as knowledge about logic, by examining how such knowledge is integrated into the system. These topics are fully developed in [Stanfill 83].

3. Comprehension

In order to comprehend a machine, Mack constructs a sequence of progressively more abstract descriptions of the machine. Specifically, Mack starts with a description of the shapes of the parts of the machine, and produces an abstract, qualitative model of the machine. To do this, it creates models describing motions, forces, chambers, and accelerations.

We implement this reasoning strategy as a set of model experts, each of which knows how to create a specific model. Model creation proceeds in four steps, as sketched below. First, the model expert obtains any low-level models on which it depends. Next, it asks representation experts to extract features from these lower level models. Next, the expert examines these features and constructs a set of objects for its model. Finally, the representation expert for these new objects simplifies the model.

This architecture separates knowledge about how to create models from knowledge about representation. This is important for the conduct of experiments: we tend to modify the techniques used to create the models as we learn more about understanding machines, but the representations which we use and the queries which we ask about those representations are relatively stable. We will now consider the actual sequence of models which Mack produces in the course of understanding a machine.
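A minimal sketch of that four-step loop (in Python, with hypothetical names; this is an illustration of the architecture, not Mack's actual code):

    class ModelExpert:
        """Knows how to create one kind of model from lower-level models."""
        def __init__(self, name, depends_on, extract, construct, simplify):
            self.name = name
            self.depends_on = depends_on    # names of prerequisite models
            self.extract = extract          # feature extraction (representation experts)
            self.construct = construct      # builds the objects of the new model
            self.simplify = simplify        # representation expert's simplification

        def create(self, models):
            # Step 1: obtain any low-level models on which this one depends.
            inputs = {name: models[name] for name in self.depends_on}
            # Step 2: ask representation experts to extract features from them.
            features = self.extract(inputs)
            # Step 3: examine the features and construct a set of objects.
            model = self.construct(features)
            # Step 4: simplify the resulting model.
            return self.simplify(model)

    def comprehend(static_model, experts):
        """Run the experts in dependency order, starting from the Static Model."""
        models = {'Static': static_model}
        for expert in experts:              # e.g. Kinematic, Pneumatic, Force, ...
            models[expert.name] = expert.create(models)
        return models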
This sequence of models is related by the following epistemological hierarchy, in which high-level models are inferred from low-level models.

[Figure: The Epistemological Hierarchy, with the Static Model at the bottom]

Mack begins with a description of the shapes of the parts of the machine. This is the Static Model of the machine. From this, it constructs a model of how the parts of the machine can move. These motions are derived from the manner in which these parts touch. Mack discovers that the piston touches the block in a cylindrical surface, which allows it to slide left to right and to rotate about its axis. Mack then creates variables, X-1 and THETA-1, to parameterize these motions. The result is the Kinematic Model of the machine.

[Figure: the piston's motions, labeled THETA-1 and X-1]

Mack next models the behavior of gasses in the machine. Any area which is not occupied by a part of the machine is assumed to contain gas. Thus, Mack discovers two disjoint areas which contain gas, and models each of these as a separate chamber. The result is the Pneumatic Model of the machine.

Mack now constructs a model of forces due to the pressures of gasses. It does this by finding the surface where each part in the Static Model touches a chamber in the Pneumatic Model. For each such surface, it infers a force, the direction of which is determined by the orientation of the surface and the magnitude of which is determined by the area of the surface times the pressure in the chamber. Finally, the mechanics expert adds forces acting on the same object. The result is the Force Model.

Mack now looks for accelerations. For each force in the Force Model, it discovers the motions (in the Kinematic Model) which it causes. In this case, Mack finds that the rightward-directed force on the piston causes the rightward motion of the piston, measured by X-1. Mack then models these accelerations. The result is the Acceleration Model:

    X-1'' = (Pr-1 - Pr-2) * 25 * PI

Mack's final step is to extract qualitative relations from the Acceleration Model and the Pneumatic Model. It examines the Acceleration Model, and discovers that the rightward acceleration of the piston (X-1'') is positively influenced by the pressure to the left of the piston (Pr-1) and negatively influenced by the pressure of the Earth's atmosphere (Pr-2). Mack next examines the Kinematic and Pneumatic models and discovers how motions affect the volumes of chambers. It finds that, when the piston moves right (X-1 increases), the volume of the chamber to the left of the piston (Chamber-1) increases; this causes the pressure of the gas (Pr-1) to fall. Finally, it notes that the volume of the Earth's atmosphere (Chamber-2) is infinite, so its pressure (Pr-2) is constant. The result is the Process Model.

This example took 5 minutes on a VAX 11/780 with 4 M-bytes of memory, running Franz Lisp under the Berkeley 4.1 Unix (tm) operating system. The following mechanisms have also been understood by Mack:

[Figure: Two Bearings; Piston]
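As a rough illustration of how a process model of this kind can be used (our own encoding in Python, not Mack's representation), the qualitative relations extracted above can be propagated to predict the direction of change:

    # Qualitative influences from the Process Model:
    #   (I+ Pr-1 X-1'')   pressure in Chamber-1 pushes the piston right
    #   (I- Pr-2 X-1'')   atmospheric pressure pushes it back
    #   (I- X-1  Pr-1)    rightward motion enlarges Chamber-1, lowering Pr-1
    #   (CONST Pr-2)
    influences = {
        "X-1''": [('+', 'Pr-1'), ('-', 'Pr-2')],
        'Pr-1':  [('-', 'X-1')],
        'Pr-2':  [],                          # constant
    }

    def sign_of_change(variable, deltas):
        """Combine the signs of the influencing variables; None if ambiguous."""
        signs = set()
        for polarity, v in influences.get(variable, []):
            d = deltas.get(v, 0)
            if d:
                signs.add(d if polarity == '+' else -d)
        if len(signs) == 1:
            return signs.pop()
        return 0 if not signs else None       # no influence, or conflicting signs

    # If Pr-1 rises above Pr-2, the piston accelerates rightward:
    print(sign_of_change("X-1''", {'Pr-1': +1}))   # 1 (rightward)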
4. ACKNOWLEDGEMENTS

Thanks to Randy Trigg, Liz Allen, John Bane, and Mark Weiser for reading drafts of this paper, to the Maryland AI Group for general support, and especially to Chuck Rieger for advising me in this work.

APPENDIX

In the interest of readability, we have generally omitted the actual representations which Mack used, and explained Mack's reasoning with pictures where practical. For those who want to see the actual representations, this appendix presents the actual input and output of Mack, unretouched except for the breaking up of long lines of output and similar textual adjustments.

Mack's input consists of a command to create a model. This command contains the name of the model, the name of the root of an expert hierarchy, and a set of symbols to be defined in the various sub-domains. The command below defined the piston explained in the main part of the paper.

    (model S1 master
      (lin3 p1 (POINT (0 0 0)))
      (lin3 p2 (POINT (10 0 0)))
      (lin3 p3 (POINT (20 0 0)))
      (lin3 p4 (POINT (30 0 0)))
      (lin3 p5 (POINT (70 0 0)))
      (geo cyl-a (CYLINDER {p1 p5} 15))
      (geo cyl-b (CYLINDER {p2 p5} 5))
      (geo cyl-c (CYLINDER {p3 p4} 5))
      (shape A (S-MINUS cyl-a cyl-b))
      (shape B cyl-c)
      (Q+ Pr-1 X-1'')
      (Q- X-1 Pr-1)
      (Q- Pr-2 X-1'')
      (Constant Pr-2)
      (master machine {A B}))

Next, we gave the root-level expert the query "(all-models machine)". The following output was produced after 5 minutes of computation. The output consists of two parts. First, there is a list of all the models. Second, some symbols referenced in the first part are defined. As a detailed explanation of these representations is beyond the scope of this paper, the interested reader is referred to [Stanfill 83].

    (Static machine
     Kinematic
      {(MOTIONS A B
         (SEQ ((TRANSLATION (VECTOR (1 0 0))) X-1)
              ((ROTATION (RAY (POINT (20 0 0)) (VECTOR (1 0 0)))) THETA-1)))}
     Pneumatic
      {(CHAMBER Shape-4 Pr-2)
       (CHAMBER (CYLINDER {(POINT (10 0 0)) (POINT (20 0 0))} 5) Pr-1)}
     Force
      {(FORCES B
         {(FORCE (VECTOR (30 0 0))
                 ([A-PLUS [A-TIMES 25 PI Pr-2] [A-TIMES -25 PI Pr-1]] 0 0))})
       (FORCES A
         {(FORCE (VECTOR (0 0 0))
                 ([A-PLUS [A-TIMES -25 PI Pr-2] [A-TIMES 25 PI Pr-1]] 0 0))})}
     Dynamic
      {(ACCELERATIONS A
         {(ACCELERATION X-1
            [A-PLUS [A-TIMES -25 PI Pr-2] [A-TIMES 25 PI Pr-1]])})
       (ACCELERATIONS B
         {(ACCELERATION
            [A-PLUS [A-TIMES -25 PI Pr-2] [A-TIMES 25 PI Pr-1]])})}
     Qualitative
      {(CONST Pr-2) (I+ Pr-1 (d2 X-1)) (I- X-1 Pr-1)})

    (machine = {A B})
    (B = (CYLINDER {(POINT (20 0 0)) (POINT (30 0 0))} 5))
    (A = (S-MINUS (CYLINDER {(POINT (0 0 0)) (POINT (70 0 0))} 15)
                  (CYLINDER {(POINT (10 0 0)) (POINT (70 0 0))} 5)))
    (Shape-4 = [S-PLUS (CYLINDER {(POINT (30 0 0)) (POINT (70 0 0))} 5)
                       (S-COMPLEMENT (CYLINDER {(POINT (0 0 0)) (POINT (70 0 0))} 15))])
    (Pr-1 = (OPEN 0 pos-inf))
    (X-1 = 0)
    (Pr-2 = (OPEN 0 pos-inf))
    (THETA-1 = 0)

REFERENCES

[Bogen et al 75] Bogen, R., MACSYMA Reference Manual, Laboratory of Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, 1975.

[Forbus 82] Forbus, K., Qualitative Process Theory, MIT AI Memo 664, February 1982.

[Hayes 79] Hayes, Patrick, The Naive Physics Manifesto, in Expert Systems in the Microelectronic Age, ed. D. Michie, Edinburgh University Press, May 1979.

[Kuipers 82] Kuipers, Benjamin, Getting the Envisionment Right, pp. 209-212 in Proceedings of AAAI-82, August 1982.

[Novak 76] Novak, G., Computer Understanding of Physics Problems Stated in Natural Language, Technical Report NL-30, University of Texas at Austin, 1976.

[Stanfill 83] Stanfill, C., PhD Thesis, in preparation, Department of Computer Science, University of Maryland, 1983.

[de Kleer 75] de Kleer, J., Qualitative and Quantitative Knowledge in Classical Mechanics, M.I.T. AI-TR-352, 1975.
Finding All of the Solutions to a Problem

David E. Smith
Computer Science Department
Stanford University
Stanford, California 94305

This paper describes a method of cutting off reasoning when all of the answers to a problem have been found. Briefly, the method involves keeping and maintaining information about the sizes of important sets, and using this information to determine when all of the answers to a problem have been found. We show how this information can be dynamically calculated and kept accurate in a changing world. Additional complexity is encountered when this maintenance is mixed with independent meta-level reasoning for pruning search spaces.

1. Introduction

Suppose that you were asked to find the names of the parents of a well known person such as Jerry Brown. Perhaps you already know that Edmund Brown, Sr. is Jerry's father (and by implication, one of Jerry's parents). Some additional digging in a reference book would determine Jerry's mother, and hence another of Jerry's parents. But why stop here? Perhaps digging harder in the library would turn up a few more of Jerry's parents.

We have encountered problems with these characteristics in the construction of several expert systems: a system for modelling renal physiology (Kunz, 1983), a miniature tax consultant (Barnes, 1982), and an intelligent front end for computer systems (Feigenbaum, 1980). What is required in all of these examples is the ability to reason about the number of solutions to problems and to shut off the reasoning process when all of the solutions to a problem have been found.

In section 2 a basic problem solving architecture is introduced. It makes use of knowledge about the number of solutions to problems in order to halt reasoning. Means of specifying, deriving, and maintaining this knowledge are covered in section 3. The subject of section 4 is the rather surprising difficulty that arises when independent meta-level reasoning is used to prune the search space for a problem. Preliminary results are discussed in section 5 and relationship to other work is discussed in the final section.

2. Basic Architecture

Let us suppose, for the moment, that a problem solving system has knowledge about the number of solutions to any problem it might be given. Given such information it is a relatively simple matter to modify a typical system to take advantage of the information. Normally a problem solver (attempting to find all of the solutions to a problem) charges blindly ahead, discovering solutions and collecting or reporting them. It stops when it runs out of possible inferences. Instead, our "smart" problem solver would first ask the meta-level question "how many solutions does this problem have?" Then, while solving the problem, it keeps a count of the number of solutions found. Whenever this count reaches the total number expected, the problem solver can stop. A flow chart of this procedure appears in figure 2-1.

[Figure 2-1: Diagram of a problem solver]

In English it is simple enough to state that a person has only two parents or that a quadratic equation has two solutions. But to express this knowledge in a form suitable for use by a problem solver, a precise language is necessary.
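The control loop of figure 2-1 can be sketched as follows (a minimal illustration in Python under our own naming; number_of_solutions and the solution enumerator are hypothetical stand-ins for the meta-level query and the base-level prover):

    def solve_all(variables, proposition, enumerate_solutions, number_of_solutions):
        """Collect answers, stopping as soon as the expected count is reached."""
        expected = number_of_solutions(variables, proposition)  # meta-level query
        answers = []
        for answer in enumerate_solutions(variables, proposition):
            answers.append(answer)
            if expected is not None and len(answers) == expected:
                break           # all solutions found; prune the remaining inferences
        return answers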
3.1. The Language

Base-level and meta-level propositions will be expressed in the language of predicate calculus, but the techniques described here are not dependent on this choice. Any other language with sufficient expressive power would do as well. Several syntactic conventions are used to simplify the examples. Upper case letters are used exclusively for constants, functions, and relations. Lower case letters are used for variables. All free variables are universally quantified. Braces are used to denote sets (e.g. {1,3,5}) and angle brackets are used to denote ordered tuples of objects (e.g. <1,3,5>). When it is necessary to refer to a base-level expression, we will enclose it in quotation marks, e.g. PROVABLE('FATHER(A,B)'). Variables occurring within quotation marks can be assumed to be meta-variables, i.e. they range over expressions in the base-level language. From PROVABLE('FATHER(x,y)'), it is legal to infer PROVABLE('FATHER(A,B)').

3.2. Knowledge About Set Sizes

A query to a problem solver (in which all the answers are desired) can be characterized in general as: find all values for the variables v that satisfy the proposition p. Given queries of this form, the first step for the problem solver is to: find n such that n = NUMBEROF(v,p), where NUMBEROF is a meta-level function symbol (referentially opaque) that denotes the number of solutions to a problem. Using this elementary vocabulary it is easy to state that the problem of finding Jerry Brown's parents has only two solutions:

NUMBEROF('y','PARENT(JERRY,y)') = 2    (3-1)

or more generally that all such problems have only two solutions:

NUMBEROF('y','PARENT(x,y)') = 2    (3-2)

If this information were provided to our problem solver, it would be able to stop reasoning after finding both of Jerry's parents.

It would be a nuisance, however, if we had to specify this meta-knowledge directly for every problem a system might encounter. Fortunately, such information can be derived from simple base-level knowledge about the sizes of sets. If the members of a set correspond to the solutions of a problem, and the cardinality of the set is known, then the number of solutions to the problem will be the same as the cardinality of the set. A formal statement of this "connection" axiom is:

(TRUE('v∈s <=> p') & CARD(s)=c) => NUMBEROF(v,p)=c    (3-3)

Equation (3-4) shows two examples of base-level cardinality information. Using such information, and the connection axiom given above, our simple problem solver can easily infer the number of solutions to each of the corresponding problems, enabling it to halt inference when all of the solutions have been found.

CARD(PARENTS(x))=2    (3-4)
y∈PARENTS(x) <=> PARENT(x,y)
CARD(SHIPSINBOSTON)=14
x∈SHIPSINBOSTON <=> SHIP(x) & DOCKED(x,BOSTON)

It is important to note that using plural relationships and functions, like PARENTS instead of PARENT, would not alleviate the need for having information about the size of a set. For example, knowing that:

EDMUND∈PARENTS(JERRY) & AGNES∈PARENTS(JERRY)    (3-5)

is not enough. It is also necessary to have the information that Jerry doesn't have any other parents, a statement isomorphic to the proposition about the size of the set. Such statements are often left implicit in propositions like:

PARENTS(JERRY) = {EDMUND,AGNES}    (3-6)

As a final note, there are two special cases where information about the number of solutions to a problem can be derived without domain specific information about set size. If the problem is to find out whether or not a proposition is true, and the requestor doesn't care about any of the variable bindings, then there is (at most) one solution to the problem.
A common case of this is when the proposition p is a ground clause, i.e., contains no variables.

NUMBEROF(<>,p) ≤ 1    (3-7)

Functional expressions also have at most one solution. This can be expressed as:

GROUND(f) & VARIABLE(z) => NUMBEROF(z,'f=z') ≤ 1    (3-8)

3.2.1. Gathering Knowledge about Set Sizes Dynamically

There are many instances in which the addition of a priori knowledge about the sizes of sets is sufficient to solve the problems in the domain of interest. For example, in a system modelling human physiology, the number of heart chambers, the number of bones in the hand, and the number of major arteries are all unlikely to change. There are also many circumstances in which it is either inconvenient or impossible to supply that knowledge. It would be unreasonable to require that a system builder supply the cardinality of several thousand sets, each having thousands of members. Consider, for example, the number of employees for each of the departments in a large corporation. It would be more than inconvenient if this counting had to be done monthly, weekly, or daily as the system's data base was upgraded.

The situation is worst when a problem solver is dealing with a constantly changing world. Consider an expert system that serves as an intelligent interface for an operating system. The intelligent agent must accept requests from the user, make appropriate plans for their realization, perform the necessary system specific actions, and report results to the user. Knowledge about the number of tape drives and printers attached to the machine is (relatively) static and can be specified a priori. But the number of files in a given directory changes frequently and cannot be specified beforehand.

In this volatile environment there is still much that can be done. For problems where the cost of solution is high and the problem is often repeated, it is worthwhile to cache information about set sizes. To illustrate the technique, suppose that an intelligent agent has the task of finding all of the SCRIBE files on some particular directory (perhaps as a subproblem of transporting them to a new machine). The query is: find f such that FORMAT(f,SCRIBE). For purposes of illustration, assume the intelligent agent believes that a file contains SCRIBE text if the file's extension is MSS, or if the file contains an "@make" statement at the beginning. This is stated formally as:

NAME(f,n) & EXT(n,MSS) => FORMAT(f,SCRIBE)    (3-9)
CONTENTS(f,c) & WORD(1,c,@MAKE) => FORMAT(f,SCRIBE)

In this case it is not reasonable for the system to have a priori knowledge about the number of SCRIBE files in a particular directory. It would, however, be simple enough for the system to cache this information after the problem has been solved once. Suppose then, that such a query is posed to the intelligent agent, and it exhaustively hunts through all of the files in the directory. Five SCRIBE files are found, three with the file extension MSS and two which do not have the usual extension, but contain "@make" statements at the beginning.
After counting all of the answers, the problem solver can cache the knowledge that there are exactly five such SCRIBE files (and hence five answers to the problem):

CARD(SCRIBEFILES)=5    (3-10)
x∈SCRIBEFILES <=> FORMAT(x,SCRIBE)

If the query is posed again (perhaps as a subgoal of some other request) then, according to the simple problem solving scheme of section 2, the cached information about the number of SCRIBE files (3-10) will stop the problem solving process once those five SCRIBE files are found. If the answers to the problem are also cached, then the solutions to any subsequent query would be found immediately (by looking in the data base). In this case the problem solver would not have to dissect a single filename or look at the contents of a single file. Counting the number of solutions to a problem, inferring the size of the set, and caching this knowledge permits a quicker response to future queries, just as the presence of a priori knowledge about set size would.

As we suggested in the previous section, instead of caching the cardinality of the set of SCRIBE files, it is also possible to cache the fact that each of the five files is a member of the set of SCRIBE files, and that there are no other such files. The difficulties of keeping this knowledge accurate (the subject of the next section) are the same for either of these representations.

3.2.2. Maintaining the Accuracy of Cardinality Information

There is only one problem with the scheme of section 3.2.1: succeeding requests to the intelligent agent might cause files to be added, deleted or modified. Cached knowledge about set sizes could therefore become invalid, just as cached solutions might become invalid. A standard solution to this dilemma is to keep justifications for cached information, and use truth maintenance (Doyle, 1979) to remove assertions that become invalid as a result of other actions. The same technique can be used to keep track of cached knowledge about set sizes, provided that justifications are kept for these statements. In the example of the previous section, the statement "there are five SCRIBE files" rested on the assertions that there are three files with the extension MSS, and two files that begin with "@make" statements. These statements, and their justifications, can be recorded formally:

CARD(SCRIBEFILES) = 5    (3-11)
JUST('CARD(SCRIBEFILES) = 5',
     {'CARD(MSSFILES) = 3', 'CARD(@MAKE-FILES) = 2'})
CARD(MSSFILES) = 3
JUST('CARD(MSSFILES)=3', {'CARD(FILE-NAME-PAIRS)=237'})
CARD(FILE-NAME-PAIRS) = 237
CARD(@MAKE-FILES) = 2
JUST('CARD(@MAKE-FILES)=2', {'CARD(FILE-CONTENTS-PAIRS)=237'})
CARD(FILE-CONTENTS-PAIRS) = 237

Given these justifications, knowledge about set sizes can be kept accurate when the data base is changed. For example, suppose that the intelligent agent is instructed to delete the file named BOOK.MSS. Assuming that the intelligent agent was caching solutions as well as information about set size, its data base might contain:

NAME(FILE37,BOOK.MSS)    (3-12)
EXT(BOOK.MSS,MSS)
FORMAT(FILE37,SCRIBE)
JUST('FORMAT(FILE37,SCRIBE)',
     {'NAME(FILE37,BOOK.MSS)','EXT(BOOK.MSS,MSS)'})

After performing the deletion the intelligent agent must update its data base to reflect the effects of the action. The fact "NAME(FILE37,BOOK.MSS)" is removed, and then, using truth maintenance, the fact "FORMAT(FILE37,SCRIBE)" is removed since it depends upon the former.
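This bookkeeping, including the unraveling of cardinality knowledge walked through in the next paragraph, can be sketched as follows (a minimal illustration under our own names; a full truth maintenance system in the style of (Doyle, 1979) is considerably more elaborate):

    class JustificationDatabase:
        """Cached assertions, each justified by the assertions it rests on."""
        def __init__(self):
            self.facts = set()
            self.justifications = {}        # fact -> set of supporting facts

        def assert_fact(self, fact, supports=()):
            self.facts.add(fact)
            if supports:
                self.justifications[fact] = set(supports)

        def remove(self, fact):
            """Remove a fact and unravel everything that depended on it."""
            if fact not in self.facts:
                return
            self.facts.discard(fact)
            self.justifications.pop(fact, None)
            dependents = [f for f, just in self.justifications.items() if fact in just]
            for dependent in dependents:
                self.remove(dependent)

    db = JustificationDatabase()
    db.assert_fact('CARD(FILE-NAME-PAIRS)=237')
    db.assert_fact('CARD(MSSFILES)=3', {'CARD(FILE-NAME-PAIRS)=237'})
    db.assert_fact('CARD(@MAKE-FILES)=2', {'CARD(FILE-CONTENTS-PAIRS)=237'})
    db.assert_fact('CARD(SCRIBEFILES)=5',
                   {'CARD(MSSFILES)=3', 'CARD(@MAKE-FILES)=2'})

    db.remove('CARD(FILE-NAME-PAIRS)=237')    # deleting BOOK.MSS invalidates this
    print('CARD(SCRIBEFILES)=5' in db.facts)  # False: the whole tree unraveled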
In addition, the following inference must be made: since the fact NAME(FILE37,BOOK.MSS) has been removed, the fact about the number of file-name pairs is no longer valid and must be removed. By removing this cached cardinality assertion, the entire tree of cached cardinality knowledge unravels, and the statement that there are five SCRIBE files will be removed.

This methodology is obviously not without cost. While the additional reasoning required is minimal, caching all of the set size information and the associated justifications does require some storage. It is, however, often much less than the storage overhead of caching base-level facts and their justifications. If a problem has two hundred answers, then caching the results demands storing at least two hundred propositions and two hundred justifications (many more if intermediate results are derived and cached). In contrast, only one cardinality assertion and its justification need be cached for each subproblem encountered.

3.2.3. Invariance

As a practical matter, it is often possible to cut down the overhead of maintaining information about set sizes by specifying invariance for static quantities. In the case of the intelligent agent the number of tape drives usually doesn't change. Thus we could specify:

INVARIANT('CARD(TAPEDRIVES)=n')    (3-13)

If a cached statement is invariant (will not change over time) then there is no need to keep its justification around. Thus if a problem solver dynamically calculates the size of a set, but finds that there is an invariance statement (like the one above) then the system can cache the statement and use it without storing elaborate justifications.

4. Interaction with other Meta-level Reasoning

It would be nice if all were as rosy as we have made it out to be in section 3. Unfortunately, there are some very subtle interdependencies that can arise between statements about the number of solutions, and independent meta-level reasoning used to prune search spaces. We will first give a somewhat sketchy description of the difficulty and then illustrate the problem with an example.

Suppose that a problem solver is attempting to find the solutions for a problem P. In so doing, it generates a subproblem Q, which generates an additional subproblem R. Further suppose that, due to independent meta-level reasoning (the details are not relevant) we decide that the sub-sub-problem R cannot possibly contribute any new solutions to the overall problem of finding the solutions to P. As a result the sub-sub-problem R is pruned and the problem solving proceeds.

    P
    |
    Q
    |
    R

Figure 4-1: Simple Subproblem Tree

Consider what would have happened if we were keeping track of, and caching, the number of solutions to each of these subproblems (as in sections 3.2.1 and 3.2.2). Suppose that there are three solutions to the problem R, five solutions to the problem Q, and ten solutions to the original problem P. The number of solutions calculated (and cached) for the problem P will still be correct, because we have not eliminated any unique solutions by our pruning. But consider what number will be calculated for the subproblem Q. If each of the solutions to R contributes a unique solution for Q, then only two of the solutions to the problem Q will be found, rather than all five. This number is wrong, and if cached, will cause difficulties when the problem Q is encountered again, either in vacuo, or in the context of some other problem.
As a specific example consider two simple facts about familial relationships:

CHILD(x,y) & MALE(x) => SON(x,y)    (4-1)
CHILD(x,y) & FEMALE(x) => DAUGHTER(x,y)

together with the simple data base:

SON(STACEY,MARTHA)
SON(BROOKS,MARTHA)
DAUGHTER(JAN,MARTHA)

Suppose that the query

find x such that SON(x,MARTHA)    (4-2)

is posed to the system. After discovering the two answers immediately in the data base (BROOKS and STACEY), the subproblem

CHILD(x,MARTHA) & MALE(x)    (4-3)

is generated, which generates the subproblem

CHILD(x,MARTHA).    (4-4)

Backward reasoning on the second of the axioms produces one answer (JAN) for this subproblem. The first axiom is also applicable to this subproblem, and would produce two additional answers to the subproblem. But because the primary objective is to find Martha's sons, and the first axiom has just been applied in one direction, there is absolutely no point in turning around and applying it in the opposite direction. It won't lead to any new solutions. As a result, that line of inference can be discarded, and only one solution is produced for the subproblem of finding Martha's children.

    SON(x,MARTHA)
         |
    CHILD(x,MARTHA) & MALE(x)
         |
    CHILD(x,MARTHA)
       /          \
    DAUGHTER(x,MARTHA)   SON(x,MARTHA)

Figure 4-2: Subproblem Tree for Kinship Problem

Ordinarily, this would cause no harm. We still get all of the answers to the original query, and if any intermediate results are cached they will still be correct, but perhaps incomplete. Consider, however, what happens if we blindly apply the methods of section 3.2.2. We would end up caching the following two meta-level facts:

CARD(MARTHAS-SONS) = 2    (4-5)
CARD(MARTHAS-CHILDREN) = 1

The first is correct, but the second is wrong. Martha has three children, not one. Why did this happen? Because a portion of the subproblem was pruned due to consideration of something outside the context of the subproblem, namely, the overall problem being solved.

A solution to this difficulty is to introduce an additional argument in the NUMBEROF relation. If any contextual information is used to "help" solve this subproblem then it must be recorded in the additional argument. This contextual information is, in effect, represented by the justification of the meta-level conclusion that "this portion of the subproblem can be eliminated". The details of this justification are dependent upon the specific structure of the problem solver involved (a meta-level decision to prune must involve some reasoning about the problem solving process), as well as the particular technique of meta-level pruning. As a practical matter, it is possible to simply use a T for the context argument in those cases where external pruning has taken place. This would mean that these cached meta-level statements (ones with T's) are not useful for estimating the number of solutions when the subproblem is encountered again. Thus, their only purpose is to record the dependency tree for use in truth maintenance. This is probably not a large loss, since, even if full context arguments were kept, it is unlikely that a context argument would match up with any other except that of the original problem, whose results would be cached anyway.

Finally, two things should be noted. First, the cached information for the overall problem will be correct in any case. Second, the extra argument to NUMBEROF statements will always be null if external pruning is entirely absent during the course of solving a subproblem.
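As a minimal sketch of this bookkeeping (our own rendering in Python, with hypothetical names), the context tag below plays the role of the extra argument to NUMBEROF:

    def cache_count(cache, variables, proposition, count, context=None):
        """Cache NUMBEROF(v, p, context).  A non-null context records that
        external pruning influenced the count, so the entry may only be used
        to record the dependency tree, not for cutting off inference."""
        cache[(variables, proposition)] = (count, context)

    def expected_count(cache, variables, proposition):
        entry = cache.get((variables, proposition))
        if entry is None:
            return None
        count, context = entry
        return count if context is None else None   # untrusted if pruned externally

    cache = {}
    cache_count(cache, 'x', 'CHILD(x,MARTHA)', 1, context='T')  # pruned run
    cache_count(cache, 'x', 'SON(x,MARTHA)', 2)                 # solved in vacuo
    print(expected_count(cache, 'x', 'CHILD(x,MARTHA)'))  # None: cannot be trusted
    print(expected_count(cache, 'x', 'SON(x,MARTHA)'))    # 2

When no pruning context was recorded, the cached count can be reused directly for cutting off inference.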
In this case, the proposition about the number of solutions would be the same as if the subproblem had been solved in vacuo.

5. Results

The methods described in this paper have been implemented in experimental versions of the MRS system (Genesereth, 1982a). An elegant (but still impractical) implementation of the problem solver has also been done as control axioms for Meta-level Architecture (Genesereth, 1982b). To make use of the techniques described, the most stringent requirement is a representation system that permits the expression of the necessary knowledge about set definitions and set sizes. Given this, it is possible to "shoehorn" these methods into most existing problem solvers. While we chose to represent meta-level propositions (e.g. NUMBEROF statements and justifications) explicitly in this paper, it is not necessary to do so in order to make use of the techniques described.

Our initial experiences with these methods have been quite positive. For Kunz' model of renal physiology (Kunz, 1983), knowledge about the sizes of important sets was provided along with several statements of invariance. While invariant set sizes were cached, the more elaborate storing of justifications outlined in section 3.2.2 was not complete at the time, and no other caching was done. Typical queries of the model usually require on the order of fifteen minutes of CPU time on a DEC 20/60. The addition of knowledge about set sizes along with a few invariance statements resulted in speed increases from two to ten times, depending on the particular query. In spite of the success, the effort was limited by the inability to express certain complex invariance statements to the MRS system available at the time. These deficiencies have since been remedied, making additional speedup possible using only these elementary techniques.

A second application of the techniques presented here arose in the development of a simple income tax consultant (Barnes, 1982). During the consultation, the system needs to make inferences about familial relationships. Because of the many different familial relationships involved (e.g. Mother, Father, Aunt, Uncle, Sister, Brother, Daughter, Son, Sibling, Child, Parent), the search space for even the simplest queries was extremely large. For example, a simple query to determine all of the siblings of a given person generated hundreds of subgoals (fifteen pages of hardcopy trace) and took nearly fifteen minutes of real time to produce the answers (circular reasoning was pruned as in the example of section 4). The addition of a priori knowledge about the number of parents a person has, and knowledge that MOTHER and FATHER are function symbols, reduced the number of subgoals by a factor of three, and reduced the run time by a similar factor. Use of the full techniques described in sections 3.2.2 and 4 resulted in an additional factor of two reduction when running the problem for the first time. The reason for this speedup is that the same subgoals were often generated several times by different branches of the search tree. The first occurrence would cause the answers and meta-knowledge to be cached. Subsequent occurrences of the subproblem could then make use of the cached knowledge about set size.

6. Discussion

6.1. The Closed World Assumption

Observant readers will note that we have relied on a closed-world assumption throughout this paper.
We have tacitly assumed that a problem solver is capable of producing every solution to a problem, and therefore, that the theoretical number of solutions to a problem is the same as the number that the system can produce. Although convenient, this assumption is not a necessary one. The theoretical number of solutions is an upper bound on the actual number that a problem solver can produce (assuming sound logic and a correct database) and remains useful in any case where the system can produce all of the answers to a problem. Cached information, however, is information about the actual number of answers a problem solver can produce. To distinguish these two concepts, it is sufficient to use a different relation name, say ACTUALNUMBEROF, supply the fact that

ACTUALNUMBEROF(v,p) ≤ NUMBEROF(v,p)    (6-1)

and change our problem solver to use the actual number for cutting off inference. In systems where the closed world assumption is not valid there is additional advantage to having both kinds of information available. Information about the actual number of solutions permits inference to be stopped, while theoretical information allows a system to warn the user when it cannot produce all of the answers to a problem. As a result of implicit closed-world assumptions, expert systems often generate ludicrous answers to questions outside their area of expertise. Having both kinds of information available allows a problem solver to determine when it can and cannot solve problems, which would eliminate such errant behavior.

6.2. Adding Other Meta-Level Reasoning

Recognizing when all the solutions to a problem have been found is only one of the many possible methods for pruning a search space. Other methods include: restricting facts to be used only in forward or backward inference, eliminating repeated subgoals in an inference, and avoiding "undoing" an inference that has just been done. Aside from the interaction mentioned in section 4, these methods are all essentially independent. There is nothing to prevent them from being used in conjunction.

Meta-level ordering of search spaces actually has a synergistic effect with the techniques presented here. Ordinarily, when a system is asked to find all of the solutions to a problem, ordering of the search space is of no benefit. The entire space must be searched anyway. However, when the number of solutions is known, ordering of the search space becomes valuable. If all of the answers can be collected rapidly (by searching promising branches of the space first), more of the space will be pruned by a recognition that all of the solutions have been found. Conversely, the utility of knowing the number of solutions depends on discovering all of the solutions before the space is exhausted. For problems in which all of the solutions are desired, ordering is of no use on its own.

6.3. Perspective

In this paper a methodology has been presented for shutting off reasoning when all the answers to a problem have been found. To make use of this, knowledge about the sizes of relevant sets must be provided, or derived by the problem solver. It is quite possible for a system to gather such knowledge dynamically and keep it up to date, although the mechanics of truth maintenance are somewhat complex for this case.

This research is a small portion of a much larger effort to demonstrate that meta-level reasoning is an essential component in building intelligent artifacts, as well as a practical methodology for construction of expert systems. Meta-level reasoning is not a single technique that can be knocked off and buried. Rather, it is an entire paradigm. There are, at the very least, hundreds of meta-level problems like the one we have described here. Each one has its own special set of concepts (vocabulary) and governing laws which must be discovered and formalized for use in intelligent systems.

Many authors have argued for the use of meta-level reasoning. More recently several authors have explored general frameworks in which systematic meta-level reasoning is possible (Doyle, 1980; Genesereth, 1982b; Hayes, 1973; Smith, 1982; Weyhrauch, 1980). Like that of this paper, there have also been a few attempts to codify the necessary meta-knowledge for solving specific meta-level problems. Among the most notable efforts are those of Bundy (Bundy, 1979), Clancey (Clancey, 1981), Davis (Davis, 1980), and Wilensky (Wilensky, 1981). We regard these efforts (as well as our own) as mere "drops in a bucket". Enormous opportunity remains for significant research into any one of the myriad of outstanding meta-level reasoning problems.

ACKNOWLEDGEMENTS

Thanks to John Kunz for an interesting and motivating application. Also thanks to Pat Hayes for his reinforcement and excitement. Jan Clayton and Mike Genesereth provided useful comments on the content and organization of this paper.

REFERENCES

Barnes, T., Joyce, H., Tsuji, S. A Simple Income Tax Consultant. Programming Project Memo, Stanford University, 1982.

Bundy, A., Byrd, L., Luger, G., Mellish, C., Palmer, M. Solving Mechanics Problems Using Meta-level Inference. pages 1017-1027, International Joint Conference on Artificial Intelligence, August, 1979.

Clancey, W. J., Letsinger, R. NEOMYCIN: Reconfiguring a Rule-based Expert System for Application to Teaching. Heuristic Programming Project Memo HPP-81-2, Stanford University, February 1981.

Doyle, J. A Truth Maintenance System. Artificial Intelligence, 1979, 12, 231-272.

Doyle, J. A Model for Deliberation, Action, and Introspection. Artificial Intelligence Laboratory Memo AI-TR-581, Massachusetts Institute of Technology, May 1980.

Feigenbaum, E. A., Genesereth, M., Kaplan, S. J., Mostow, D. J. Intelligent Agents. Proposal to the Defense Advanced Research Projects Agency, 1980.

Genesereth, M. R. An Introduction to MRS for AI Experts. Heuristic Programming Project Memo HPP-82-27, Stanford University, November 1982.

Genesereth, M. R., Smith, D. E. Meta-Level Architecture. Heuristic Programming Project Memo HPP-81-6, Stanford University, December 1982.

Hayes, P. J. Computation and Deduction. pages 105-117, Czechoslovakian Academy of Sciences, 1973.

Kunz, J. Use of Artificial Intelligence and Simple Mathematics to Analyze a Physiological Model. PhD thesis, Stanford University, 1983. In preparation.

Smith, B. Reflection and Semantics in a Procedural Language. Artificial Intelligence Laboratory Memo AI-TR-272, Massachusetts Institute of Technology, January 1982.

Weyhrauch, R. W. Prolegomena to a Theory of Mechanized Formal Reasoning. Artificial Intelligence, 1980, 13(1), 133-170.

Wilensky, R. Meta-Planning: Representing and Using Knowledge About Planning in Problem Solving and Natural Language Understanding. Cognitive Science, 1981, 5(3), 197-234.
THE USE OF QUALITATIVE AND QUANTITATIVE SIMULATIONS

Reid G. Simmons
The Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139

ABSTRACT

We describe a technique called imagining which uses a combination of qualitative and quantitative simulation techniques to solve a problem where neither alone would suffice. We illustrate the imagining technique using the domain of geologic interpretation and argue for why the two types of simulation are necessary for problems of this sort. We also discuss the strengths of each simulation technique and how they support each other in the problem solving process.

I INTRODUCTION

In solving problems, one often encounters tasks of the sort: "given an initial state, a final state and a possible solution, which is a sequence of events, verify that doing the sequence of events will achieve the goal state". For example, one might want to know whether a proposed sequence of chemical reactions can be used to synthesize a given drug. Essentially, this is the "test" phase of the generate and test method [6]. One common technique for testing the validity of a sequence is to simulate each event in the sequence and to check whether the result of the simulation matches the goal state.

This paper investigates using a combination of qualitative and quantitative simulation techniques, which we call imagining, in order to test the validity of a sequence of events in domains where neither technique alone would suffice. Imagining is useful for domains in which the goal state is given in quantitative terms but the solution sequence is stated in qualitative terms. The next section presents our test domain of geologic interpretation and shows how simulation helps solve the problem. Section 3 describes the technique of imagining in more detail. In Section 4, we argue for using a combination of qualitative and quantitative simulations, present the strengths of each and discuss how they support one another.

II GEOLOGIC INTERPRETATION

The domain used as an example throughout this paper is a problem in geology known as geologic interpretation [8]. In the geologic interpretation problem, we are given a diagram which represents a cross-section of the Earth (Figure 1a). The task is to infer a sequence of geologic events which plausibly explains how that region came into existence. Figure 1b shows a solution to the cross-section in Figure 1a.

In [9], we describe a method for generating and testing solutions for the geologic interpretation problem. In this paper, we explore the testing phase in more detail, assuming that solutions of the form shown in Figure 1b have already been generated. The method used to test solutions is to simulate the events in the solution sequence and then to match the final result with the goal state, that is, the diagram cross-section. Since our aim is to match the goal diagram with the result of the simulation, it is clearly useful for the simulation to produce a diagram as its result. To accomplish this, a geologic event is simulated by constructing a diagram which represents the spatial effects of that event. For example, simulating the sequence presented in Figure 1b produces the sequence of diagrams in Figure 2. (The initial state, not shown, is a blank diagram, representing the existence of only "bedrock").

This work was supported in part by a Graduate Fellowship from the National Science Foundation.
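The test phase just described can be sketched as a simple loop. The following Python fragment is our illustration only, with simulate_event and matches standing in, hypothetically, for the diagram-construction and diagram-matching machinery discussed below.

    def test_solution(events, goal_diagram, simulate_event, matches,
                      initial_diagram=None):
        """Simulate each event in order and check the final diagram
        against the goal cross-section."""
        diagram = initial_diagram      # e.g., a blank "bedrock" diagram
        history = []
        for event in events:
            diagram = simulate_event(diagram, event)
            history.append(diagram)    # keep the intermediate diagrams
        return matches(diagram, goal_diagram), history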
Since the final diagram (Figure 2-7) matches the goal diagram (Figure 1a), we can conclude that the solution (Figure 1b) is a valid explanation for how the region came into existence.

III IMAGINING

Imagining is the combined use of qualitative and quantitative simulation techniques to simulate a sequence of events. The diagrams in Figure 2 were produced by doing imagining on the sequence of Figure 1b. In this section, we show how both quantitative and qualitative simulations are used in testing a solution sequence of geologic events.

[Fig. 1. Typical Geologic Interpretation Problem and Solution. (b): 1. Deposit Sandstone; 2. Deposit Shale; 3. Uplift; 4. Intrude Mafic-Igneous; 5. Tilt; 6. Fault Across Shale and Sandstone; 7. Erode Shale and Mafic-Igneous.]

[Fig. 2. Simulation of Sequence in Figure 1b. Panels include: 1. Deposit Sandstone; 2. Deposit Shale; 4. Intrude Mafic-Igneous; 5. Tilt; 7. Erode.]

The technique of constructing a sequence of diagrams is obviously a quantitative simulation technique because diagrams are metric in nature. For example, to simulate deposition we draw a line in the diagram at the level of the top of the deposition (see Figure 2-1). In order to actually draw the line, we must choose an exact equation for that line. Since, in our geologic model, the top of deposition is horizontal, the line drawn is completely parameterized by the height of the deposition which is being simulated. For each geologic process we have defined process parameters, like the height of deposition, for which numeric values must be chosen in order to construct a diagram that represents the effects of the process. Figure 3 shows the solution of Figure 1b augmented with the numeric parameter values actually used to produce the diagrams of Figure 2. However, where do the values for these parameters come from?

[Fig. 3. Solution to Figure 1a with Parameter Values: 1. Deposit Sandstone to a depth of 450 meters below sea-level. 2. Deposit Shale to a depth of 30 meters below sea-level. 3. Uplift by 200 meters. 4. Intrude 80 meters of Mafic-Igneous at an angle of 78°. 5. Tilt by -16°. 6. Fault at an angle of 108° Across Shale and Sandstone, with a slip of 76 meters. 7. Erode Shale and Mafic-Igneous to sea-level.]

An obvious place to obtain the numeric parameter values is to measure them in the goal diagram. For example, the angle of the mafic-igneous intrusion or the thickness of the shale is measurable in the diagram (see Figure 4). But these measurements represent the final values for the parameters -- they might have changed over time due to earlier events. In order to determine the original parameter value we must "correct" for subsequent changes to that parameter, which requires reasoning about the accumulated effects of processes on a parameter. For example, we need to determine the angle of intrusion of the mafic-igneous (Figure 3, Step 4). Measuring in the goal diagram (Figure 4), we find that the angle is 62°. However, since we know that the intrusion shown in Figure 4 was earlier rotated by -16° due to the tilt (Figure 3, Step 5) we can infer that the actual angle at the time of intrusion was 78°. We use 78° as the parameter value at the time of intrusion so that the later tilt rotates the intrusion to match the angle in the goal diagram.
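As a small numeric illustration of this correction (our sketch, not the system's code), the measured angle is corrected by undoing the later tilt:

    def angle_at_intrusion(angle_in_goal, tilt):
        # The tilt rotated the intrusion by `tilt` degrees after it formed,
        # so the angle at intrusion time is the measured angle minus the tilt.
        return angle_in_goal - tilt

    print(angle_at_intrusion(62.0, -16.0))   # 78.0, as in Step 4 of Figure 3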
Likewise, the thickness of the shale is measured in the goal diagram as 500 meters, but since we know that some was eroded, the initial amount of deposition must have been greater than 500 meters. In order to do such reasoning, we need to determine the cumulative changes to the process parameters. This is where the qualitative simulation comes into play. We can determine the cumulative effects by qualitatively simulating the solution sequence, that is, by symbolically representing the changes that occur to parameters as a result of each event in the sequence. In our case, such changes are represented as symbolic "change equations" which relate, in terms of other parameters, the values of the parameters before and after each event. For example, the changes to the height of the top of the sandstone formation (see Figures 3 and 4) are given as:

height of sandstone top after deposition = height of sandstone top before uplift
height of sandstone top after uplift = height of sandstone top before uplift + uplift-amount
height of sandstone top after uplift = height of sandstone top before tilt
height of sandstone top after tilt = cos(theta) × height of sandstone top before tilt − sin(theta) × lateral of sandstone top before tilt*
height of sandstone top after tilt = height of sandstone top in Figure 4

[Fig. 4. Measuring from the Diagram: the angle of the mafic-igneous intrusion, the top of the sandstone, and the thickness of the shale.]

Now, to determine the height of the top of the sandstone after deposition, we algebraically manipulate the above equations to obtain:

height of sandstone top after deposition = [(height of sandstone top in Figure 4 + sin(theta) × lateral of sandstone top before tilt) ÷ cos(theta)] − uplift-amount.

We now measure the height of the top of the sandstone in Figure 4, and recursively use this technique of measurement and correction to determine the values of the other parameters in the equation (i.e., uplift-amount, lateral of sandstone top and theta, the angle of tilt).

The technique outlined above is the essence of imagining. First, a qualitative simulation is done to establish the cumulative changes to parameters. Second, the results of the simulation (the change equations) and the goal state (the goal diagram) are used to work backwards to infer numeric values for the process parameters. Third, a quantitative simulation is carried out, which in our case constructs a sequence of diagrams, and the final result is matched with the goal state.

IV DISCUSSION

In this section, we argue for the necessity of two kinds of simulation in problems of this sort, discuss the strengths of each and show how they support one another.

A. Why Two Simulations?

Obviously, doing two simulations is more work than doing one. So why do both? We claim that either alone is inadequate in this domain. In the previous section we saw that in order to perform the quantitative simulation with parameter values which approximate the values actually used in forming the region, we need to use the symbolic "change equations" obtained from the qualitative simulation to "correct" the values measured in the goal diagram. "Corrected" parameter values are necessary because the process of matching the goal and simulated diagrams is sensitive to the values used.

* The "lateral" of a point corresponds to the X coordinate of the point in the diagram.

[Fig. 5. Simulation Using Uncorrected Parameter Values: (a) Simulated Diagram; (b) Goal Diagram.]

In general, we cannot tell the difference
between a mismatch resulting from an incorrect solution sequence and one resulting from the simulation of a valid solution using uncorrected parameter values. For example, Figure 5a shows a simulation done with parameter values that were measured in the goal diagram (Figure 5b) but not corrected. In comparing Figure 5a with Figure 5b, it is not clear whether the differences arise because the solution sequence is invalid or because the parameter values were badly chosen. Thus, the qualitative simulation is needed to enable us to correct the parameter values used by the quantitative simulation.

Now, let us consider why a qualitative simulation alone will not suffice. One problem is that certain geologic features that help determine a successful match, such as the shape of formations, are difficult to express in qualitative terms. A second problem is that qualitative models are ambiguous, in that a single qualitative representation maps to many real-world situations. For example, if the tilt of a formation is specified qualitatively, such as "tilting clockwise", there is a wide range of actual tilts which match that description. This ambiguity makes the matching problem very difficult. As in other research dealing with qualitative representations (e.g. [2], [3], [4]), we have found that it is necessary to use quantitative knowledge to reduce or eliminate the ambiguities. The source of both these difficulties is that qualitative representations abstract certain kinds of information, like shape or degree of tilt, and these are precisely the kinds of information needed to determine a successful match. In fact, due to the lack of adequate shape and spatial descriptions, simulating an invalid solution sequence may yield the same qualitative result as simulating a valid sequence. Thus, the result of a purely qualitative simulation is totally inadequate for doing the types of matching needed to test solutions for the geologic interpretation problem. We need a quantitative representation of space, which in our case is obtained by constructing diagrams, to perform the matching adequately.

V STRENGTHS OF THE TWO SIMULATION TECHNIQUES

In addition to the fact that each type of simulation is necessary in order to do imagining, both techniques have certain strengths which complement the weaknesses inherent in the other. For our purposes, the major strength of quantitative simulation is that it produces precise, unambiguous results. It is a fairly simple matter to check for a successful match between the goal diagram and the result of the quantitative simulation. As we have seen in the previous section, the qualitative representation is too abstract to determine a match unambiguously. On the other hand, the abstract nature of the qualitative simulation makes it ideally suited for its task in the imagining technique, which is to enable us to reason about sequences of changes to parameters. Since a qualitative simulation can be performed with much less information than that needed for a quantitative simulation, the qualitative simulation in effect forms a skeleton of the effects of the geologic events and the quantitative simulation allows us to flesh it out. The major strength of qualitative simulation is that changes are explicitly represented. This facilitates reasoning about what changes occur to parameters.
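The composition and inversion of change equations can also be sketched symbolically. The following fragment, using the sympy library, is our own reformulation of the sandstone example with abbreviated variable names; folding the two persistence equations into one is our simplification.

    import sympy as sp

    # h_dep:     height of sandstone top just after deposition
    # h_tilt_in: height just before the tilt (persisted through the uplift)
    # h_goal:    height measured in the goal diagram (after the tilt)
    h_dep, h_tilt_in, h_goal, lat, theta, uplift = sp.symbols(
        "h_dep h_tilt_in h_goal lat theta uplift")

    eqs = [
        # persistence through deposition, with the uplift change folded in
        sp.Eq(h_tilt_in, h_dep + uplift),
        # the tilt rotates the point (lat, h_tilt_in)
        sp.Eq(h_goal, sp.cos(theta) * h_tilt_in - sp.sin(theta) * lat),
    ]
    sol = sp.solve(eqs, [h_dep, h_tilt_in], dict=True)[0]
    print(sol[h_dep])   # (h_goal + lat*sin(theta))/cos(theta) - uplift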
For example, it is much more informative to describe the change to the angle of intrusion of the mafic-igneous as

angle of mafic-igneous in Figure 4 = angle of mafic-igneous after intrusion + angle of tilt

rather than taking measurements from diagrams and inferring that 62° = 78° + (−16°). In fact, composing and manipulating such symbolic equations (see Section III) is precisely what enables us to "correct" the values of the parameters.

A. Process Descriptions

The different uses of the two simulations place strong constraints on how processes are best described. Since it is the end result of the quantitative simulation which is useful, we are concerned primarily that the quantitative process descriptions be computationally simple and robust, that is, they produce accurate simulations over a wide range of inputs. The qualitative simulation is used primarily to accumulate the changes to parameters, and so the process descriptions should explicitly represent the effects of the processes. These constraints have led us to develop quantitative process descriptions which are algorithmic or "do this" descriptions, whereas qualitative process descriptions are assertional or "what happens" descriptions. Figure 6 illustrates the flavor of our quantitative and qualitative descriptions of deposition.*

Notice how concisely we can describe deposition algorithmically. In general, it is much easier to come up with an algorithmic description of a process than to find an assertional description. This is one reason why quantitative simulation techniques have been used for a long time (e.g. SIMULA [1]). Also, there are usually efficient methods for "running" these algorithms on quantitative representations, thus effecting the simulation. Recently much work has been done on qualitative representations and simulation (e.g. [2], [3], [4], [5], [7]; this work owes much to these pioneering efforts). That research, as well as our own, has pointed out the difficulty of formulating a qualitative description of a process which completely captures the effects of the process. For example, we have been unable to come up with an adequate qualitative description of shape …

* See [10] for a more complete description.

Fig. 6. Quantitative and Qualitative Descriptions of Deposition

A. QUANTITATIVE ALGORITHM FOR DOING DEPOSITION
1. Find the lowest end-point of all the edges that represent the surface of the Earth.
2. Draw a horizontal line "DLEVEL" above that.
3. Erase all parts of the line that cut across a face corresponding to a rock-unit.
4. All other newly created faces below the line are part of the newly created sedimentary rock-unit.

B. QUALITATIVE PROCESS DESCRIPTION OF THE CHANGES DUE TO DEPOSITION
1. The thickness of the deposit is "DLEVEL minus the height of the bottom of the Earth's surface at the start of deposition".
2. The orientation of the deposit is zero.
3. There is a boundary between the deposit and all formations which are on the Earth's surface at the start of deposition and whose bottoms are below DLEVEL.
4. The composition of the deposit is DCOMPOSITION.
5. The bottom of the deposit is the same as the bottom of the Earth's surface at the start of deposition.
6. The top of the deposit is the same as the bottom of the Earth's surface at the end of deposition.
7. The height of the bottom of the Earth's surface at the end of deposition is DLEVEL.
8. The height of the top of the deposit at the end of deposition is less than sea-level.

… quantitative inaccuracy in the simulated diagram.
In particular, the mafic-igneous intrusion is lower than in the goal diagram (Figure 7b) because its displacement due to the fault sliding was not accounted for when correcting the parameter value for the height of the mafic-igneous. Although it is not trivial to determine the source of the error from the difference between the two diagrams, the comparison does provide an indication of which parameter is inaccurate. By checking the sequence of changes for that parameter against our own geologic knowledge of what was supposed to happen, we can usually pinpoint which process description is incomplete and in what ways. For example, we can determine that the process description of faulting is the one which is incomplete by noting that the diagram associated with the faulting is the first one in the sequence of simulated diagrams in which the actual height of the mafic-igneous differs from the height predicted by the imaginer. This pinpoints that the faulting process is incomplete and that the error has something to do with changing the height of existing rocks. From our knowledge of geology we would realize that the missing change is that the rocks on one side of the fault slide downwards. By applying this methodology over several geologic interpretation examples, our models and understanding of the geologic processes have become greatly refined.

Notice that in using the system to test the validity of sequences, the qualitative simulation is needed in order to perform the quantitative simulation accurately. However, in developing the system, the quantitative simulation supports the qualitative by enabling us to "see" the bugs in our qualitative process descriptions.

C. Multiple Representations and Other Issues

Many of the ideas expressed in this paper are treated in more detail in [10]. In that paper, particular attention is paid to the multiple representations needed to support the qualitative and quantitative simulations, how these representations are used and how they interact. One interesting result discussed there concerns process parameters which are not measurable in the diagram, such as the amount of uplift or the amount of erosion. It is shown that for those parameters any arbitrary value may be chosen without affecting the final diagram produced by the quantitative simulation.

[Fig. 7. Simulation Using Incomplete Process Description: (a) Simulated Diagram; (b) Goal Diagram.]

We believe that imagining could prove useful in other domains as well. For example, an economist might test a theory of how the economy reached its current (quantitative) state by seeing if the results predicted by the theory match the current economic state. Other domains involve scientific experimentation in areas such as biology or chemistry, where experiments are run, data are collected and a sequence of events is proposed to explain the data. The scientist might then use imagining to test whether the proposed sequence would result in the same test data.

VI CONCLUSIONS

We have developed a new technique, called imagining, which uses a combination of qualitative and quantitative simulations to solve problems in domains where neither technique alone is adequate. The basic technique of imagining involves three steps. First, a qualitative simulation is performed to establish the sequence of changes to process parameters. Second, the result of the simulation, represented as "change equations", and the goal state are used to work backwards to infer numeric values for the process parameters.
Third, a quantitative simulation is carried out and the final result is matched with the goal state. We believe that imagining will prove useful in testing the validity of a sequence of events in domains where the goal state is given in quantitative terms and the sequence of events is given in qualitative terms.

VII ACKNOWLEDGEMENTS

I would like to thank Randy Davis for his guidance and supervision, and Ken Forbus and Karen Wieckert for their valuable suggestions and comments.

REFERENCES

[1] Dahl, Ole-Johan; Myhrhaug, Bjorn; Nygaard, Kristen - SIMULA 67 Common Base Language, Norwegian Computing Center, pub. S-22, 1971.
[2] DeKleer, Johan - "Qualitative and Quantitative Knowledge in Classical Mechanics", MIT AI-TR-352, December 1975.
[3] DeKleer, Johan - "Causal and Teleological Reasoning in Circuit Recognition", MIT AI-TR-529, September 1979.
[4] Forbus, Kenneth D - "A Study of Qualitative and Geometric Knowledge in Reasoning about Motion", MIT-AI-TR-615, February 1981.
[5] Forbus, Kenneth D - "Qualitative Process Theory", MIT-AIM-664, February 1982.
[6] Newell, Allen - "Artificial Intelligence and the Concept of Mind", in Computer Models of Thought and Language, eds. Schank & Colby, 1973.
[7] Rieger, Chuck; Grinberg, Milt - "The Declarative Representation and Procedural Simulation of Causality in Physical Mechanisms", IJCAI-6, 1979, p. 250.
[8] Shelton, John - Geology Illustrated, Freeman and Co, chapter 21, 1966.
[9] Simmons, Reid G - "Spatial and Temporal Reasoning in Geologic Map Interpretation", Proceedings of AAAI-82, August 1982, Pittsburgh, PA.
[10] Simmons, Reid G; Davis, Randall - "Representations for Reasoning About Change", MIT AIM-702, April 1983.
DEFAULT REASONING AS LIKELIHOOD REASONING

Elaine Rich
Department of Computer Sciences
The University of Texas at Austin

Abstract

Several attempts to define formal logics for some type of default reasoning have been made. All of these logics share the property that in any given state, a proposition p is either held to be true, it is held to be false, or no belief about it is held. But, if we ask what default reasoning really is, we see that it is a form of likelihood reasoning. The goal of this paper is to show that if default reasoning is treated as likelihood reasoning (similar to that of Mycin), then natural solutions emerge for several of the problems that are encountered when default reasoning is used. This is shown by presenting 7 such problems and showing how they are solved.

I. Introduction

The need for default reasoning in artificial intelligence systems is now well understood [9]. Such reasoning is required to enable systems to deal effectively with incomplete information. Several attempts have been made in the literature to define a formal logic for some type of default reasoning [5, 4, 3]. All of these logics share the property that in any given state, a proposition p is either held to be true, it is held to be false, or no belief about it is held. No intermediate situation is possible. But if we ask what default reasoning really is, we see that it is a form of likelihood reasoning. Most birds fly, so it is likely that a particular bird does so. Most people have the same hometown as their spouse, so it is likely that Mary does. In many domains, negative facts about which we have no information are true, so if we do not know whether there is a flight from Vancouver to Oshkosh, it is likely that there is not.

Once we recognize that default reasoning is a form of likelihood reasoning, it is clear that there are more than two truth values that can be associated with a proposition. Different truth values represent differing levels of belief (or confidences in) the truth of the associated proposition. This corresponds to the situation in a variety of expert systems such as Mycin [8]. We will adopt the likelihood reasoning system used in Mycin. If a proposition is known definitively to be true, then it is held to be true with a certainty factor (CF) of 1. If it is known definitively to be false, it is known with a CF of -1. If there is equal evidence for and against a proposition (including the situation in which there is no evidence either way) then the proposition has an associated CF of 0. A CF in between these values indicates that the evidence either supports or disconfirms the associated proposition but it does so with some doubt.

The goal of this paper is to show that if default reasoning is treated as likelihood reasoning, then natural solutions emerge for several of the problems that are encountered when default reasoning is used. In the rest of this paper, we will present 7 such problems and show how they can be solved through the use of certainty factors.

II. Definitions

Before doing this, however, we must introduce some notation and define a likelihood reasoning system. A standard (nonlikelihood) default rule consists of three parts: the premises, an UNLESS clause [7], and a conclusion. For example, the default rule "Most birds fly" can be represented as¹

bird(x) ⇒ fly(x) UNLESS ¬fly(x)    (1)

In "normal" default rules, the UNLESS clause is exactly the negation of the conclusion. We will restrict our attention to normal default rules for two reasons.

¹ Throughout this paper, all variables are assumed to be universally quantified so the quantifiers will be omitted.
They are adequate to describe naturally occurring defaults. And they form a more tractable logical and computational system than do nonnormal rules [5]. Given this assumption, UNLESS clauses need not explicitly be represented. All we need do is to distinguish a default rule from a standard rule. We can do that by using * as an additional premise. Thus the default rule above will be represented as

bird(x) ∧ * ⇒ fly(x)    (2)

which can be read as, "If bird(x) and not fly(x) is not provable, then conclude fly(x)". Now we extend this representation to associate with each default rule a certainty factor (CF), which states the confidence that the system should have in the conclusion if each of the premises is known (with CF=1). So the example rule becomes

bird(x) ∧ * ⇒ fly(x)  (CF=.9)    (3)

If bird(Tweety) is known with a CF of 1, then, using this rule, fly(Tweety) can be asserted with CF=.9. Notice that the meaning of a CF attached to a universally quantified formula represents the certainty that the statement holds for any particular assignment of values to the bound variables. The certainty that the statement holds for all such bindings is usually -1, since the existence of at least one counterexample is generally known.

If the CF of bird(Tweety) is less than 1, that uncertainty must contribute to the final CF of fly(Tweety). This is done using the formula: given an implication

P1 ∧ P2 ∧ … ∧ Pn ⇒ C  (CF=k)    (4)

then the CF attached to C will be k × Π CF(Pi). So if bird(Tweety) has a CF of .9, then fly(Tweety) will be asserted with a CF of .9 × .9 = .8.

In any likelihood theory, contradictions will arise. There are two ways that contradictions can be handled. One is to assert the more likely proposition and to modify its CF to account for the negative evidence (by subtracting it out as is done in Mycin). The other is to exploit other information about the reasoning process to resolve the conflict and to compute an appropriate CF for the resulting assertion. As we will see, there are times when each of these is appropriate.
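This combination rule is easy to state in code; the following Python one-liner is our sketch of formula (4), not the paper's implementation:

    from math import prod

    def conclusion_cf(rule_cf, premise_cfs):
        # Formula (4): the conclusion's CF is the rule's CF times the
        # product of the CFs of its premises.
        return rule_cf * prod(premise_cfs)

    # "Most birds fly" (CF=.9) applied to bird(Tweety) known with CF .9:
    print(round(conclusion_cf(0.9, [0.9]), 2))   # 0.81, i.e. about the .8 in the text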
(5)-(7) can be represented in a likelihood framework as follows: person(x) * mammal(x) (CF=l) (8) person(x) * sees(x) (CF=.99) (9) Southerner(x) + Democrat(x) (CF=.8) (10) By representing CFs that differ from 1, we not only eliminate distinctions that are very difficult to make, but also allow more information about assertions to be represented in the knowledge base. Notice that we can now eliminate the explicit * part of the premise for default rules since any rule with a CF other than 1 or -1 is known to be a default rule. The implicit UNLESS clause still plays an important role though. Suppose we know -sees( John) (CF=l) (11) Since the truth of this assertion is certain, it will completely dominate all inferences derived from default rules such as (9). This corresponds to the role of the UNLESS clause in standard default logic. On the other hand, if (11) were believed with a CF of only .9, (9) would produce the assertion sees( John) (CF=.99) (12) Now two conflicting facts are believed and a conflict resolution mechanism must be applied to choose between them or perhaps to decide that no conclusion is justified. In standard default logic, the only conflict resolution technique available is time sequencing. The first conclusion to be drawn blocks later conflicting ones, regardless of t,he relative strengt.hs of each af t.he conclusions. Using .CFs instead, however, allows more rational mechanisms to be employed. See Section VII. for a further discussion of this. IV. The Role of Multiple Premise6 Multiple premises can be used to increase the CF of a rule. Consider the rule animal(x) * defensemechanism(x,camoflage) (CF=.4) (13) which can be modified to have an increased CF by the addition of 3 premises: animal(x) A color(x,c) A habitation(x,y) A color(y,c) * defensemechanism(x,camoflage) (CF=.9) (14) Whereas (13) would never appear in a knowledge base without CFs, (14) might. If there are no CFs represented, then there must be a threshold CF value (not normally explicit or consistent) below which statements must simply be thrown out by the knowledge base creator, or the statements must be refined with additional premises until their CFs cross the threshold. By representing CFs explicitly, though, the need for an arbitrary threshold disappears. V. Cascading Belfefs The explicit use of CFs not only increases the information that is directly available from a knowledge base; it also increases the information associated with assertions derived in multiple steps. Consider the following: adult(x) + employed(x) (CF=.9) (15) adult(x) + -dropout(x) (CF=.91) (16) adult(x) A -dropout(x) =# employed(x) (CF=.99) (17) to refine the domain In of (17) an additional premise has been added the assertion and thus to raise its CF. Suppose we know adult(Bil1). Then, using (15) we can conclude employed(Bil1) with CF=.9. We can also derive the same conclusion using (16) and (17) (by multiplying the two CFs as described in Section II.). If, on the other hand, we also know, with CF=l, -dropout(Bill), then, using (17) we can conclude employed(Bil1) with a CF of .99. VI. Applying the Closed World Assumption Examples like the last one arise often if the closed world assumption is used. Consider again the last example, but suppose (16) were missing and we had no knowledge of whether or not Bill was a dropout. Then we might still like to apply (17) to conclude that Bill is employed. This can be done by using the closed world assumption to assert -dropout(Bill). 
In some restricted domains this can be done without decreasing the CF of the final assertion. In an airline database system, if flightfrom(cityl,city2) is not known, then with CF=l there is not such a flight. But in many other domains, that is not the case. In such domains, the use of the closed world assumption must affect the CF of the resulting assertion. The more knowledge that is available, the more accurately this can be done. A simple thing is to assign as the CF of the assertion -p the quantity (.l-prob(p)). A more accurate answer can be obtained if interactions between p and other relevent predicates are known. This is, in fact, what we did by inserting (16). In addition, other sources of information can be used if they are available. For example, the absence of an assertion p that would be important if it were true more strongly suggests -p than does the absence of an unimportant assertion 111. VII. Conflfcting Beliefs In many knowledge bases that contain default rules, contradictory inferences may arise through the use of more than one default rule. Consider the following example from IS]: Typically Republicans are not pacifiits. Republican(x) + -pacifist(x) (18) Typically Quakers are pacifists. Quaker(x) * pacifist(x) (19) Bill is a Republican and a Quaker. Republican(Bil1) A Quaker(Bil1) (20) There are two ways in which a nonlikelihood logic might deal with this situation. One is that described in 15) in which a default knowledge base may have one or more extensions. Each extension is a consistent set of assertions, but the set of extensions of a given knowledge base may not be consistent. In this example, there are two extensions, one containing the assertion that Bill is a pacifist and one containing the assertion that he is not. A reasoning system built around such a logic would produce one or the other extension nondeterministically. The other nonlikelihood approach is that described in [S], in which the default rules in the knowledge base are modified to handle the interactions between them and to guarantee that, in the case of conflict, no conclusion at all will be drawn. Thus, in place 349 of statements (18) and (19) above we must write: FLY link emerges. Most non-Quaker Republicans are not pacifists. Republican(x) A -Quaker(x) =+ -pacifist(x) (21) Most non-Republican Quakers are pacifists. Quaker(x) A -Republican(x) ==+ pacifist(x) (22) Since the hypotheses of neither of these assertions are satisfied in the case of Bill, no inference will be drawn as to whether Bill is a pacifist. The major drawback to this approach is that, for a large knowledge base, there may be many such rule interactions and the rule writer is forced to find those interactions and to modify each rule so that it mentions explicitly all of the other rules with which it conflicts. It would be more accurate if the reasoning system itself were able to detect these interactions and to decide whether or not a particular conclusion is warranted. A likelihood reasoning scheme can both guarantee a determinsitic result and detect rule interactions automatically. (l8)-(20) are rewritten as Republican(x) * -pacifist(x) (CF=.8) (23) Quaker(x) + pacifist(x) (CF=.99) (24) Now we can assert pacifist(Bil1) (CF=.l9) (25) by simply accepting the more likely belief but modifying its CF to account for the conflicting evidence. We do this by subtracting the CF of the chosen. conflicting assertion from the CF of the assertion that is This approach does not remove all difficulty from this problem. 
This approach does not remove all difficulty from this problem. In particular, a fine-tuning of the CFs in the knowledge base may affect the outcome of the reasoning process. But, given a particular knowledge base, the outcome of the reasoning process can be predicted by a static analysis of the knowledge base itself. The reasoning process is not subject to the race conditions that are inherent in the multiple extension approach. Additionally, it is possible to introduce more sophisticated techniques for conflict resolution if that is important.

VIII. Property Inheritance Across ISA Links

A common form of default reasoning is the inheritance of properties in the ISA hierarchies of semantic networks. The reasoning rule that is used is not usually stated as a rule but rather is contained in the code of the network interpreter. This rule is that a node n1 can inherit any property p that is true of any other node n2 above n1 in an ISA chain unless a contradiction is found at some node that is in the chain and that is below n2 and not below n1. When the knowledge in the network is represented instead in a logical framework, it is more difficult to define a corresponding rule. But the rules of likelihood reasoning we have described can solve this problem easily, with the additional advantage that a confidence measure can be attached to the resulting inference.

[Figure 1: A Simple Semantic Network]

Consider the network shown in Figure 1. Using a traditional network inference scheme, the question of whether Sandy can fly would be answered by chaining up the ISA hierarchy starting at the node labeled Sandy and stopping at the first node from which a FLY link emerges. This same knowledge can be represented as the following set of likelihood formulae:

bird(x) ⇒ fly(x)  (CF=.9)    (26)
ostrich(x) ⇒ bird(x)  (CF=1)    (27)
ostrich(x) ⇒ ¬fly(x)  (CF=1)    (28)
ostrich(Sandy)  (CF=1)    (29)

Applying the rules of likelihood reasoning to (26)-(29), we can derive

fly(Sandy)  (CF=.9)    (30)
¬fly(Sandy)  (CF=1)    (31)

A contradictory set of beliefs has been derived. Just as in the last section, we must choose the belief with the higher CF value. But this time, rather than attaching to that belief a CF that reflects the conflict, we attach the higher value. This is done by examining the reason for the conflict, observing that the rule about ostriches is a special case of the rule about birds and thus should override the latter, and then concluding ¬fly(Sandy) with the CF suggested by the special case rule. This provides an example of the process of reasoning to resolve conflicts that was suggested in Section II.

To guarantee that likelihood reasoning will produce the same result that would be produced by chaining through an ISA hierarchy, it is necessary that what we call the monotonic consistency constraint (MCC) hold for the CFs in the system. MCC is defined as follows: If there is a rule p1(x) ⇒ p2(x) (CF=k1) and a rule p3(x) ⇒ p4(x) (CF=k2) and p3(x) ⇒ p1(x) by an ISA chain and (p2(x) ∧ p4(x)) is a contradiction, then k2 > k1. In other words, properties attached to higher nodes in an ISA chain must have lower CFs than all conflicting properties lower in the chain. This is always possible. For all ISA hierarchies, there exists an assignment of CFs that satisfies MCC. In fact, any assignment of CF values that forms a partial ordering that is isomorphic to the partial ordering formed by the ISA hierarchy is adequate. MCC is consistent with the intuition that more specific rules are more accurate than more general ones.
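The special-case resolution for the Sandy example can be sketched as follows. This is a toy encoding of ours, not the paper's implementation; under the MCC, the most specific applicable rule also carries the dominating CF.

    # ISA chain and property rules for Figure 1.
    isa = {"ostrich": "bird", "bird": None}
    rules = {"bird": ("fly", 0.9),          # (26)
             "ostrich": ("not-fly", 1.0)}   # (28)

    def conclude(cls):
        # Walk from the most specific class upward; the first rule found
        # is the special case, and its own CF is attached (not a
        # discounted CF, as in Section VII).
        while cls is not None:
            if cls in rules:
                return rules[cls]
            cls = isa[cls]
        return None

    print(conclude("ostrich"))   # ('not-fly', 1.0): Sandy does not fly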
IX. The Failure of the Shortest Path Rule

The property inheritance problem is more complex in tangled hierarchies [2] such as the one shown in Figure 2(a). One way of implementing inheritance in such a network is to use the shortest path rule, but as pointed out in [6], this rule does not always work. The rule is sensitive to the insertion and deletion of nodes that should have no effect on inheritance. This is illustrated in Figure 2(b). The problem is that the shortest path rule counts steps in the reasoning chain as an approximation to a certainty level. But this is often a poor approximation. Likelihood reasoning solves this problem by using certainty factors directly. The use of a rule with a CF of 1 has no effect on the CF of the resulting assertion. The use of a rule with some other CF does. The knowledge of Figure 2(a) is represented as:

bird(x) ⇒ ¬tame(x)  (CF=.8)    (32)
ostrich(x) ⇒ bird(x)  (CF=1)    (33)
pet(x) ⇒ tame(x)  (CF=.95)    (34)
ostrich(Sandy)  (CF=1)    (35)
pet(Sandy)  (CF=1)    (36)

The knowledge of Figure 2(b) is represented as

bird(x) ⇒ ¬tame(x)  (CF=.8)    (37)
pet(x) ⇒ tame(x)  (CF=.95)    (38)
rarepet(x) ⇒ pet(x)  (CF=1)    (39)
In Representation and Understanding, D. G. Bobrow & A. Collins, Eds., Academic Press, New York, 1975. 351
Anne v.d.L. Gardner
Department of Computer Science
Stanford University
Stanford, California 94305

The analysis of legal problems is a relatively new domain for AI. This paper outlines a model of legal reasoning, giving special attention to the unique characteristics of the domain, and describes a program based on the model. Major features include (1) distinguishing between questions the program has enough information to resolve and questions that competent lawyers could argue either way; (2) using incompletely defined ("open-textured") technical concepts; (3) combining the use of knowledge expressed as rules and knowledge expressed as examples; and (4) combining the use of professional knowledge and commonsense knowledge. All these features may prove important in other domains besides law, but previous AI research has left them largely unexplored.

I INTRODUCTION

This paper describes a program for analyzing legal problems--specifically, problems about the formation of contracts by offer and acceptance. The work brings together two areas of AI usually treated as distinct. One is research on expert systems (e.g., Buchanan 1981, Davis 1982, Stefik et al. 1982); the other, natural-language understanding and commonsense reasoning (e.g., Schank and Abelson 1977; Winograd 1980). The expert-systems area is obviously relevant, since a legal analysis program requires substantial professional knowledge. The natural-language aspect is present, in part, because of the particular legal subdomain: in offer-and-acceptance problems, the data to be interpreted consist mostly of reported dialogue. There is also a deeper reason for the natural-language aspect of legal analysis. This reason, explained in the next section, is the open texture of many legal predicates. It applies equally to legal subdomains such as assault and battery (Meldman 1975), corporate taxation (McCarty, Sridharan, and Sangster 1979; McCarty and Sridharan 1982), and manufacturers' product liability (Waterman and Peterson 1981), as well as contract law.

The program has been implemented in Maclisp on a DECSYSTEM-20. Database storage, retrieval, and basic inference capabilities are provided by the representation language MRS (Genesereth, Greiner, and Smith 1980).

II DOMAIN CHARACTERISTICS AND DESIGN CONSIDERATIONS

The design of the program is intended to reflect lawyers' own understanding of the nature and uses of legal materials--in other words, to accord with a legally plausible conceptualization of the domain. Some of the distinctive domain features are the following:

1. Legal rules are used consciously by the expert to provide guidance in the analysis, argumentation, and decision of cases. This fact distinguishes them from the rules used in most expert systems or the rules of a grammar, which seek to describe behavioral regularities of which the expert or native speaker may be unaware. Legal reasoning might thus be classified as a rule-guided activity rather than a rule-governed activity.

2. As a consequence of (1), the experts can do more with the rules than just follow them. In a field like contracts, where the rules have been developed mainly through decisions in individual cases, lawyers can argue about the rules themselves and can propose refinements, reformulations, or even newly formulated rules to adapt the law to a particular case at hand. Sometimes, it is true, the rules may be taken as fixed--either by long acceptance, in a case-law field, or by statute, in a field like taxation. Even with this simplification, lawyers are free to argue about what counts as following the rules in a particular case.

3. Lawyers are not merely free to disagree; on hard legal questions they are expected to do so. Unlike other domains of expertise, in which consensus among the experts is hoped for, the legal system makes institutional provision for expert disagreement--for instance, in the institutions of opposing counsel, dissenting judicial opinions, and appellate review of lower court decisions.

4. The following question then arises: Is there any class of cases as to which all competent lawyers would reach the same conclusion? This is the problem, recognized but not solved in the legal literature, of whether a dividing line between hard cases and clear cases can be found (see, e.g., Hart 1958, Fuller 1958, M. Moore 1981). Despite the lack of a theoretical solution, most cases are in fact treated as raising no hard questions of law. (Whether they raise hard questions of fact is another matter.)

5. When hard legal questions do arise, their basis is quite different from the sources of uncertainty usually described in connection with expert systems. They do not generally involve insufficient data, for example, or incomplete understanding of the workings of some physical process. Instead, an especially important source of hard questions is the open texture of legal predicates--that is, the inherent indeterminacy of meaning in the words by which fact situations are classified into instances and noninstances of legal concepts (see Hart 1961, pp. 121-132).
Even with this simplification, lawyers arc free to argue about what counts as following the rules in a particular case. 3. I,awycrs are not mcrcly free to disagree; on hard legal questions they arc cxpcctcd to do so. Unlike other domains of expertise, in which consensus among the cxpcrts is hoped for, the legal system m,lkes institutional provision for expert dis~lgr-ccmcnt--for inst,mce, in the institutions of opposing counsel, dirscnting judicial opinions, and appcllatc review of lower court decisions. 4. ‘I’hc following question then arises: Is there any clags of cases as to which all competent lawyers would reach the same conclusion? This is the problem, rccogniicd but not solved in the lcgnl iitcrnturc, of whether a dividing lint bctwecn hard cases and clear cases can be found (see, e.g., Hart 1958, F’ullcr 1958, M. Moore 1981). 11cspite the lack of a thcorctical solution. most cases arc in fact trcatcd as raising no hard questions of law. (Whether they raise hard questions of fact is another matter.) 5. When hard legal questions do arise, their basis is quite diffcrcnt from the sources of uncertainty usu,~lly dc>cribed in connection with cxpcrt systems. They do not gcncrally involve in>ufficicnt data, for cxamplc, or incomplctc understanding of the workings of some physica! proccs~. Instead, an cspccially important \!)\Ii’cc of hard questions is the open texture of legal I,rctlicatcs--tll,lt is. the inhcrc:lt indctcrminacy of mc,lning in the words by which f,ict sit\lations arc &ssificd illlo 114 From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. instnnccs and 121-132). noninstanccs of Icgal concepts (xc r1art 1961, PP. The phcnomcnon of open texture is not limited to law. ‘I’hc term was coined in philosophy and used originally of words like dog and sold in pointing out that ~osl of our empirical concepts arc not dclimitcd in all possible directions (Waism,mn 1945). Rcccnt annlyscs of such natural- kind words, and other sorts of words too, have involved closely rclatcd observations (c.g., Putnam 1975; see generally Schwartz 1977). 6. The final problem is resolving legal questions, hard or easy. How dots the judge carry out this titsk ? How should hc do it? ITaving done it, how should hc justify his results in a written opinion? ‘T11csc qilcstions--often not distinguished from one another--arc central in legal Philosophy. Diffcrcnt writers, all intimately familiar with the judicial process, paint rather diffcrcnt pictures of it (c.g., I*cvi 1949, I~lcwcllyn 1960, [Iart 1961, IIworkin 1977). They agree on this much: in a well-dcvclopcd, rclntivcly stable field of law (like contracts), there arc at least two distinct knowledge sources that must bc brought to bear. T,cgnl rules arc one; and rules exist even in a nonstatutory field (like contracts) whcrc they lack official wording. (For an in flucntial unofficial attempt 10 state the txles of contract law, see Rcstatcmcnt of Contracts, 1932, and Rcst‘ltcmcnt of Contracts, Second, 1981.) Second, thcrc arc decisions in previous cases. Thcrc is no tidy consensus about just how the rules and the prcccdcnts are used together. These domain characteristics dictate the main features of the program. l‘hc overall objcctivc is not a program that “solves” legal problems by producing a single “correct” analysis. 
Instead, the objcctivc is to cnnblc the program to rccogni/.e the issues a problem raises and to distin;;uish bctwccn those it has enough information to rcsohc and those on which compctcnt human judgments might differ. Toward this cntl, a heuristic distinction bctwccn hard and easy questions is proposed. ‘I’hc distinction in turn draws on ideas about how rules and cxamplcs interact and how their interaction allows for open texture. To provide ;I dcfinitc context for studying legal rcnsoning, the rcscarch USCE matcrinls cl,issically taught by the cast method in law schools and clanically tcstcd by asking the student, given the facts of a new case, to analy~ their I@ conscqucnccs. The spccilic Icgal topic, as already mcntioncd, is the formation of contracts by offer and acccptancc. ‘I’hc topic is a standard mc for first-ycnr law students. A typical CX~illlin~ltiOll qiicstioii is the following: On July I Buyer sent the fi)llowing t&gram to Scllcr: “I iasvc ~ustrmcrs for salt and need carlo,ld immediately. Will you supply carload at $2.40 per cwt?” Scllcr received the telegram the srmc day. On July 12 Scllcr sent Buyer the following telegram, which Buyer received the same day: “Accept your offer carload of salt, immediate shipment, terms cash on delivery.” On July 13 13uycr sent by Air Mail its standard form “Purchase Order” to Scllcr. On the face of the form TZuyer had written that it accepted “Sc11cr’s offer of July 12” and had written “One c,ir!oad” and “$2.40 per cwt.” in the appropriate spaces for quantity and price. Among numerous printed provisions on the reverse of the form was the following: “U~~lcss othcrwisc stated on the face hereof, payment on all purchase orders >hnll not IX due until 30 days following dclivcry.” ‘I’hcrc was no sl,llcmcnt on the face of the form regarding time of payment. I,atcr on July 13 another party offered to sell tjuycr a carload of salt f(Ji' $2.30 per c&t. IIuycr immcdiatcly wired Scllcr: “fgnorc pllrchasc order mailed cnrlicr today; your offer of July 12 rcjcctcd.” ‘I hit t&gram was rcccivcd by Scllcr on the same day (July 13). Scllcr rcccivcd I<uycr’s purchnsc order in the m,lil the following day (July 14). nricfly analp/c each of the items of corrcspondcncc in terms of its 1~11 cffcct, and indicntc what the result will bc in Scllcr’s action against I3uycr for breach ofcontract. Iii automnting the analysis of such questions, the first step is to construct a rcprcscntationnl formalism to which the Iinglish problem statcmcnt can Ix (mnn~~lly) translated. ‘I’hc primary problem hcrc is to create an ontology of the problem domain and to specify the ways its cntitics may combine. Many of the is;ucs drc discussed in 1~. Moore (1981). In the current rcprcscntation, the major domain classes include events (with actS by individuals as a subcl,iss), SLI~C~, physic:ll objects and substances, sy~nbolic ol).jccts (namely, scntcnccc and propositions), nmsu~cs (as of weight and ~olwnc), and times. Acts arc subdivided into ordinary acts (e.g., uttering the rcntcncc “1 offer . ..‘I). speech acts (e.g., declaring, perhaps incffcctually, Ihat an offer by the spcnkcr is being made), and lcgnl acts (e.g., ol’fcring). The classes arc arranged in a gcncr,tli/ation hlcrnrchy. tl,:ch class name is a unary prcdic‘ltc symbol: po\siblc rclationc among entities arc given by binary prcdicntcs corrcspondillg to the slot name? of a fi;unc rcprcscnt,:tion. Formulas using thccc prdicntcc m written in logic:tl ndution, as is rcquircd by the rcprcscntdtion Iaiigtl,lgc MRS. 
IV THE OU1’YUT Given an encoding of the problem, the program’s task, as indicated carlicr, is not to produce a single solution but rather to identify the important issues. The output is a graph stiucturc similar to a decision tree, displaying the diffcrcnt analyses of the cast that arc possible in light of the issues left open. In the problem quoted above, at least four analysts should be reported: 1. The first t&gram is an offer and the second an acceptance. Hence a contract was formed, which Buyer later repudiated. Seller wins. 2. The first telegram is an offer, but it cxpircd bcforc Seller rcplicd to it clcven days later. Or, with the same net result, the first telegram is only a preliminary inquiry. Rut the second tclcgram is an offer and the purchase order an acceptance. Buyer repudiated the resulting contract. Scllcr wins. 3. As in (2), the second tclcgram i? an offer and the purchase order an acceptance. Isut the final tclcgram opcratcd to revoke the acceptance and reject the offer. So there is no contract, and Seller loses. 4. As in (2), the second telegram is an offer. The purchase order, proposing a change in the terms of payment, opcrnted only as d countcroffcr, which the final tclcgram withdrew. Again, Scllcr loses. 115 The graph displaying these results has two levels, which arc comparable to, the distinct abstraction lcvcls used in llierarchical planning (Snccrdoti 1974, 1980-81). The upper lcve! is a tree in which each node corresponds to the question of what legal characterization to attach to a particular event--in light of the CharactcriLations of any earlier events, as represented along tic path from the root to the node in question. On the lower, more detailed lcvcl of the graph, a separate tree may be nssociatcd with each upper lcvcl node. Ln a dctailcd tree, nodes correspond to questions encountered in trying to redch a charactcri&on of the cvcnt being examined. Thcsc include both hard lcga! questions (e.g., was the July 12 tclcgram a timely rcsponsc to the July 1 ofl‘cr?) and computational choice points (suc!~ as whicll of several candidate bindings for a variable will turn out to be appropriate). Results rcachcd at the detailed level arc summarized at the level above, reducing the combinatorics of the problem. Wit11 some further rcfincmcnt, t!lis summary lcvcl could be the basis for an essay answer to the examination question. \’ KNOWIXIXIS SOUKCISS AND TASK I)~:CQMPOSI’l3ON To produce the analysis graph just described, the program uses what are conceptually three distinct stages of reasoning, each with its own knowlcdgc source or sources. A fourth stage, not now implcmcnted, should eventually be added. The four stages arc described in the following subsections. A. Time Sequencing and Basic Domain Categories Offer-and-acceptance problems require tracing changing legal relations over a period of time marked out by a sequence of discrete events. Reflecting this rcquiremcnt, the program uses an augmented transition network whose states are clcments of the space of possible lcga! relations and whose arcs are the possible ways of moving among them. The current states are: state 0 No relevant legal relations exist. state 1 One or more offers are pending; the offercc has the power to accept. state 2 A contract exists. state 12 A contract exists and a proposal to modify it is pending. 
B. Legal Rules

Attached to each arc is a set of legal rules stating how the arc predicate may be found to be satisfied. Predicates occurring in the preconditions of rules are understood to be technical legal terms. To the extent that these predicates represent concepts to which contract law gives a definite structure, additional rules may be available to be invoked by backward chaining. There are also a few predicates that are tested procedurally. For example, an acceptance must concern "the same bargain proposed by the offer" (Restatement of Contracts, Second, sec. 50, comment a). It is easier here to apply a domain-dependent matching procedure to the contents of two documents or utterances than to state declaratively when such a procedure would succeed.

Within a set of rules leading to the same conclusion, two different relationships may hold among the members: the rules are either complementary, in that they provide alternate ways of reaching the conclusion, or they are competing, reflecting an unsettled state of the law where rules have been formulated but there is disagreement about what the rule should be. The possibility that no existing formulation is satisfactory, and that a new rule should be formulated on the fly, is not now provided for. The choice between competing rules, if it affects the legal characterization of an event, is always considered to raise a hard legal question.

C. Open-textured Predicates

The property of open texture is understood as attaching to legal predicates at which the rules run out--that is, those predicates which lack an attached procedure and which occur in the antecedent of some rule but not in the consequent of any. At this point, two main knowledge sources become available: knowledge of ordinary language, and knowledge of legal precedents and hypothetical examples.

With respect to ordinary language, the idea is that the same English word (and, correspondingly, the same formal predicate symbol) may have both a technical and a nontechnical sense. The senses are not independent: in choosing words in which to formulate a legal rule, one draws on their ordinary meanings. To decide whether a rule applies to a particular case, one may need to consider both (a) whether the ordinary usage of its words suggests an answer and (b) whether technical usage does or should conform to ordinary usage.

In the implementation, a predicate symbol is considered to have an ordinary or commonsense meaning if it occurs in the generalization hierarchy described earlier. The program's very limited commonsense knowledge is expressed by rules of the following kinds (a toy rendering appears after this list):

- Rules stating subset-superset relations.
- Rules stating that certain subsets are mutually exclusive, exhaustive of the parent set, or both.
- Rules specifying what slots always have fillers, for what slots the filler is unique, and what can be inferred about an entity by virtue of its filling a particular slot.
- Rules giving meaning to further predicates in terms of those occurring in the basic hierarchy.

Deduction using these commonsense rules may produce an answer--but not yet a conclusive one--as to whether the legal predicate is satisfied.
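The following toy rendering suggests how the first and third kinds of rule might operate over a fragment of the hierarchy. All class, slot, and function names are invented; the program itself expressed such rules as MRS formulas.

ISA = {"telegram": "document", "document": "symbolic-object"}  # subset-superset
REQUIRED_SLOTS = {"act": ["act-by", "time"]}                   # slots always filled

def superclasses(c):
    """Chaining up the hierarchy implements the subset-superset rules."""
    chain = [c]
    while c in ISA:
        c = ISA[c]
        chain.append(c)
    return chain

# e.g. superclasses("telegram") -> ["telegram", "document", "symbolic-object"],
# so whatever the commonsense rules say about documents applies,
# nonconclusively, to telegrams; and an entity classed as an act must
# have its act-by and time slots filled.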
As to the technical usage of the legal predicate, it is here that previous cases, actual and hypothetical, come in. The cases are thought of as giving a partial extensional or semantic definition of the predicate: though we don't know what its full definition by a formal rule might be, we do know that, under our reading of the cases, the facts in Armstrong v. Baker were found to satisfy the predicate and the facts in Carter v. Dodge were found not to. As indicated, both positive and negative examples may be included.

Representation of the cases at two levels of abstraction is assumed. At one level, the facts of a case are represented similarly to the facts of the input problem. Use of this relatively full representation is reserved for stage 4 of the reasoning process. For use in stage 3, the cases are represented more abstractly, in the form of simple patterns including only the facts relevant to the satisfaction of a particular predicate. In this abstract representation, one case may give rise to several patterns pertaining to different predicates, and one abstract pattern may derive from several cases.

As an example of the sort of patterns used, consider the definition of acceptance. One antecedent calls for deciding whether the offer permits acceptance by promise (as opposed to acceptance by the offeree's simply performing his side of the offered bargain). Positive examples cover offers that ask an appropriate question or request an appropriate speech act. A negative example is an offer of reward: no contract is formed, for instance, when someone merely promises to find a lost object.

Using abstract examples based on the cases, then, exact matches are sought in the facts of the case at hand. The examples may supply a meaning to a predicate where commonsense knowledge is lacking; they may supply a technical meaning that supersedes the ordinary meaning; and they may even conflict with each other, as is indicated if both positive and negative examples are matched in the data. Heuristically, satisfaction of a legal predicate is considered an easy question--one within the program's competence to resolve--if an answer can be derived from either commonsense knowledge or case knowledge or both, provided that conflicting cases are not found. If the knowledge sources provide no answer or the cases point both ways, a hard legal question has been identified, and a branch point is entered into the output graph.
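Read operationally, the heuristic amounts to something like the sketch below, assuming that commonsense deduction yields True, False, or no answer, and that case matching yields tags for the positive and negative patterns that matched exactly. The function and its arguments are illustrative, not the program's actual interface.

def classify(commonsense_answer, matched_examples):
    """commonsense_answer: True, False, or None.
    matched_examples: tags 'pos'/'neg' for abstract case patterns that
    matched exactly in the facts at hand."""
    pos = "pos" in matched_examples
    neg = "neg" in matched_examples
    if pos and neg:
        return "hard"          # the cases point both ways: branch the graph
    if pos or neg:
        return "easy"          # case knowledge decides; it may supersede
                               # the ordinary meaning
    if commonsense_answer is not None:
        return "easy"          # commonsense rules alone give an answer
    return "hard"              # the rules have run out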
D. Arguing the Hard Questions

The first three stages of reasoning are sufficient to produce the output graph described in section IV, identifying the significant issues in the case. By hypothesis, these are the questions requiring human judgment. Still, the program might do more. The final stage, which remains for future development, would be to produce arguments on both sides of the hard questions. This is the aspect of legal reasoning with which McCarty and Sridharan's current work (1982) is concerned. In the present design, the arguments are envisioned as annotations to the output graph. If their relative merits can be evaluated, the result would be a set of recommendations as to how to prune the output graph. With enough pruning recommendations to leave only one path through the graph, the annotations would correspond to one possible decision.

VI CURRENT STATUS AND FUTURE DIRECTIONS

The current program contains all the mechanisms described except for those of section V.D. The transition network (section V.A) has 4 states and 20 arcs. There are 14 legal rules (section V.B) defining such concepts as offer, acceptance by promise, and rejection. The major definitions have from 13 to 20 antecedents, some 40% of which have attached examples. There are about 125 commonsense rules of the kinds mentioned in section V.C. The knowledge base is sufficient for processing the test problem quoted above.

The test problem has been somewhat simplified. Some concepts have been omitted for lack of a good representation, notably "immediate" and "ignore." Fortunately nothing in the analysis turns on the presence of these concepts. As another simplification, the dialogue has been reconstructed as if it consisted of complete sentences.

The program produces 9 analyses of the problem; that is, the summary level of the output graph contains 9 paths from the root to a terminal node. The first 7 paths correspond to the 4 analyses listed at the beginning of section IV. (The 3 extras arise because the program is not yet able to conclude that treating the first telegram as an expired offer is equivalent, in this problem, to treating it as a preliminary inquiry.) The remaining paths reflect the possibility that the first two telegrams are both only preliminary negotiation. The legal question that raises this possibility is whether they state the terms of the sale definitely enough for a court to enforce them.

Much of the programming effort has gone into avoiding a combinatorial explosion of alternatives. At the detailed level of the output graph, the trees generated in characterizing a single event may have half a dozen terminal nodes. Before the program goes on to the next event, it combines these at the summary level, usually reducing them to a single two-way branch. Within the detailed level there is also a potential for unnecessary computation. To characterize the current event, it may be necessary also to access (1) assertions belonging to previous events and their interpretation by the program and (2) propositions that are embedded in assertions about the current event and whose truth value is unknown. MRS has a context mechanism that makes it easy to create distinct worlds for asserting the latter propositions hypothetically and for segregating believed assertions into groups that can be made available or unavailable as desired. The trick is to find general formulations of which contexts should be accessible at any given time--in order to produce the correct matches without several superfluous ones that are bound to fail later. Experience is gradually yielding the needed formulations. At present, the processing time for the more complicated events--those including documents whose content must be analyzed--ranges from 2 to 6 minutes.

In summary, the program is very close to analyzing the test problem satisfactorily, and the general design continues to seem appropriate. To permit stronger conclusions, the knowledge base will have to be enlarged considerably. More legal rules should be added to remove artificial restrictions on the kinds of problems that can be handled. For example, reasonable coverage would require knowing about accepting an offer by a nonverbal act or even by doing nothing. Most importantly, more examples are needed at the technical-commonsense boundary. These can come incrementally, from the casebooks. The fact to be taken advantage of here is that court decisions do more than resolve the hard issues in litigated cases.
They also describe the contexts in which these issues arise and, in doing so, provide a rich source of information about the nonproblematical aspects of cases, on which commonsense knowledge and technical knowledge agree. With these enlargements, a sharper critique of the program will become possible. It will then be time to consider some difficult long-range problems: What changes would be necessary to enable the program to reason about cases involving mistake or misunderstanding between the parties? Having identified the significant issues in a case, how could the program then go about reasoning from detailed descriptions of the precedents to produce arguments on both sides of these questions?

REFERENCES

Buchanan, Bruce G. 1982. "New Research on Expert Systems." In J. E. Hayes, Donald Michie, and Y-H Pao, eds., Machine Intelligence 10 (New York: Halsted Press, John Wiley & Sons).

Davis, Randall. 1982. "Expert Systems: Where Are We? And Where Do We Go from Here?" AI Magazine 3(2), 3-22.

Dworkin, Ronald. 1977. Taking Rights Seriously. Cambridge: Harvard University Press.

Fuller, Lon L. 1958. "Positivism and Fidelity to Law: A Reply to Professor Hart." Harvard Law Review 71, 630-672.

Genesereth, Michael R.; Greiner, Russell; and Smith, David E. 1980. "MRS Manual." Memo HPP-80-24, Stanford Heuristic Programming Project, Stanford University.

Hart, H. L. A. 1958. "Positivism and the Separation of Law and Morals." Harvard Law Review 71, 593-629.

Hart, H. L. A. 1961. The Concept of Law. Oxford: Clarendon Press.

Levi, Edward H. 1949. An Introduction to Legal Reasoning. Chicago: University of Chicago Press.

Llewellyn, Karl N. 1960. The Common Law Tradition: Deciding Appeals. Boston: Little, Brown.

McCarty, L. Thorne, and Sridharan, N. S. 1982. "A Computational Theory of Legal Argument." LRP-TR-13, Laboratory for Computer Science Research, Rutgers University.

McCarty, L. Thorne; Sridharan, N. S.; and Sangster, Barbara C. 1979. "The Implementation of TAXMAN II: An Experiment in Artificial Intelligence and Legal Reasoning." LRP-TR-2, Laboratory for Computer Science Research, Rutgers University.

Meldman, Jeffrey A. 1975. "A Preliminary Study in Computer-Aided Legal Analysis." MAC-TR-157, M.I.T.

Moore, Michael S. 1981. "The Semantics of Judging." Southern California Law Review 54, 151-294.

Moore, Robert C. 1981. "Problems in Logical Form." In Proceedings, 19th Annual Meeting, Association for Computational Linguistics. Pp. 117-124.

Putnam, Hilary. 1975. "The Meaning of 'Meaning.'" In Keith Gunderson, ed., Language, Mind, and Knowledge, Minnesota Studies in the Philosophy of Science, vol. 7. Minneapolis: University of Minnesota Press. Pp. 131-193. (Reprinted in H. Putnam, Philosophical Papers, vol. 2, Mind, Language and Reality. Cambridge: Cambridge University Press, 1975. Pp. 215-271.)

Restatement of the Law, Second: Contracts 2d. 1981. 3 vols. St. Paul: American Law Institute Publishers.

Sacerdoti, Earl D. 1974. "Planning in a Hierarchy of Abstraction Spaces." Artificial Intelligence 5, 115-135.

Sacerdoti, Earl D. 1980-81. "Problem Solving Tactics." AI Magazine 2(1), 7-15.

Schank, Roger C., and Abelson, Robert P. 1977. Scripts, Plans, Goals and Understanding. Hillsdale, N.J.: Lawrence Erlbaum.

Schwartz, Stephen P., ed. 1977. Naming, Necessity, and Natural Kinds. Ithaca: Cornell University Press.

Stefik, Mark, et al. 1982. "The Organization of Expert Systems: A Tutorial." Artificial Intelligence 18(2), 135-173.

Waismann, Friedrich. 1945.
"Verifiability." Proceedings of the Aristotelian Society, Suppl. 19, 119-150. Reprinted in Antony Flew, ed., Logic and Language: First and Second Series. Garden City: Anchor Books, 1965. Pp. 122-151.

Waterman, D. A., and Peterson, Mark A. 1981. "Models of Legal Decisionmaking." Report R-2717-ICJ, Rand Corporation, Institute for Civil Justice.

Winograd, Terry. 1980. "What Does It Mean to Understand Language?" Cognitive Science 4, 209-241.
DEFAULT REASONING USING MONOTONIC LOGIC: A Modest Proposal

Jane Terry Nutter
Department of Computer Science, Tulane University

This paper presents a simple extension of first order predicate logic to include a default operator. Rules of inference governing the operator are specified, and a model theory for interpreting sentences involving default operators is developed, based on standard Tarskian semantics. The resulting system is trivially sound. It is argued that (a) this logic provides an adequate basis for default reasoning in A.I. systems, and (b) unlike most logics proposed for this purpose, it retains the virtues of standard first order logic, including both monotonicity and simplicity.

Reasoning from incomplete information and from default generalizations follows patterns which standard first order predicate logic does not capture. The most striking deviation involves making inferences whose conclusions are counterindicated by further information which does not explicitly contradict anything previously given. In standard logics, if a set of premises entails a conclusion, so does any set containing all of those premises. Logics with this property are called monotonic. The above departure from standard logic's reasoning patterns has led researchers to adopt and develop non-monotonic logics for use in A.I. systems (see e.g. Reiter, 1980; McDermott and Doyle, 1980; McDermott, 1982; Aronson et al., 1980; Duda et al., 1978; and others).

Critics of non-monotonic logics have pointed out technical weaknesses (see e.g. Davis, 1980), and Israel (1980) argues persuasively that its supporters confuse logic with complex judgments of kinds that logic cannot deal with. But unless some alternative appears for dealing with default reasoning, non-monotonic logic must continue to appeal.

I have argued elsewhere that (1) default reasoning only appears non-monotonic if we fail to distinguish warranted assertions from warranted assumptions (Nutter, 1982), and (2) an adequate logic for default reasoning can be monotonic (Nutter, 1983). If this is accepted, a conservative approach promises more than do radical ones, not because it is "safer", but because it simultaneously preserves what is of use in standard logic -- its simplicity, clarity, and adequacy in domains of complete information -- and allows distinguishing warranted assumptions from warranted assertions (default generalizations from universals, presumptive consequences from genuine inferences, etc.). This paper presents an almost trivial extension of first order predicate logic which distinguishes default generalizations from universals and permits reasoning from their presumptions. The resulting proposition is obviously related to its component in an interesting way, but its truth value is not a function of the component's.

This paper describes how to extend standard first order predicate logic to a logic for default reasoning. This involves four tasks. (1) We must characterize the grammar of the extended language. That is, we must give rules for determining whether a particular string of symbols represents a proposition in the extended language. (2) We must explain how to extend the deductive system of the original logic so that all desirable inferences will be legal. (3) We must describe the semantics for the language: say what the interpretations of the language are, how the values under the interpretations are determined, and how logical entailment is defined. (4) Finally, we should investigate, at least briefly, the metatheory of the extended system.
That is, we should satisfy ourselves that the new extended system is sound: it never permits false conclusions to be derived from true premises. We now take up these tasks.

III THE GRAMMAR

The first component of a formal logic is its grammar, which determines the language of the logic, which may be defined as either the set of formulas or the set of propositions which are considered well formed. Formulas are distinguished from propositions by containing open variable occurrences, that is, placeholders which could either be replaced by specific individual constants -- names -- or be bound by an existential or a universal quantifier -- turned into an equivalent of either "something" or "everything" -- but which at present occur unbound. In this section, we present the rules of grammar for the language of our logic.

The technical discussions in the sections below will presuppose familiarity with a complete description of a first order predicate logic. The informal descriptions should convey an overview for readers who lack this familiarity, and should motivate some of the less obvious decisions embodied in the technical discussions.

A. Informal Description

The grammar holds relatively few surprises. In simple terms, wherever in English one might reasonably prefix a clause with "There is reason to believe that" or some similar construction, the logic will allow its equivalent of the clause to be governed by our operator p (for "presumably"). The only real decision involves whether to let the operator govern formulas or only propositions. While its use governing formulas is eliminable, it seems to me both innocuous and probably well motivated. While "Birds fly" becomes ∀α(Bird(α) ⊃ Flies(α)) in standard logic, "Birds in general fly" probably comes to the same thing as "If anything is a bird, presumably it flies," that is, ∀α(Bird(α) ⊃ pFlies(α)). Anything said by having p bind a formula can be expressed equivalently, if less naturally, in a reformulation where p binds only sentences. Hence p may directly govern either a formula or a sentence.

B. Development

Let L be a standard first order predicate calculus without equality and without function symbols, with a Jaskowski-style natural deduction system (Jaskowski, 1934), containing introduction and elimination rules for the standard connectives and quantifiers and a small set of "bookkeeping" rules. (The precise formal system is unimportant.) The language of L is L; the logic with defaults which we will build from L is LD, and its language is LD. The default operator is written p. Throughout the discussion below, Roman capitals range over formulas and propositions of L, the Greek letters φ and ψ range over formulas and propositions of LD, and α ranges over variables of LD.

Def. The set of formulas of LD is the smallest set satisfying the following clauses:

(a) All atomic formulas of L are formulas of LD;
(b) For any formula φ of LD, pφ is a formula of LD;
(c) For any formulas φ, ψ of LD, the following are all formulas of LD: ¬φ; φ ∨ ψ; φ ∧ ψ; φ ⊃ ψ; φ ≡ ψ;
(d) For any formula φ of LD in which α occurs open, the following are also formulas of LD: ∀α φ; ∃α φ.

For any formula φ of LD, φ is a proposition of LD if and only if φ contains no open occurrences of variables.
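As a quick check on the definition, the clauses translate directly into a recursive datatype with a propositionhood test. The representation below (variables as lowercase strings, constants otherwise) is an assumption made purely for illustration.

from dataclasses import dataclass

@dataclass
class Atom:
    pred: str
    args: tuple          # clause (a): atomic formulas of L

@dataclass
class P:
    body: object         # clause (b): p phi

@dataclass
class Not:
    body: object         # clause (c); And/Or/Implies/Iff are analogous

@dataclass
class And:
    left: object
    right: object

@dataclass
class Forall:
    var: str
    body: object         # clause (d); Exists is analogous

def free_vars(f):
    if isinstance(f, Atom):
        return {a for a in f.args if a.islower()}   # variables lowercase (assumed)
    if isinstance(f, (P, Not)):
        return free_vars(f.body)
    if isinstance(f, And):
        return free_vars(f.left) | free_vars(f.right)
    if isinstance(f, Forall):
        return free_vars(f.body) - {f.var}
    raise TypeError(f)

def is_proposition(f):   # a proposition has no open occurrences of variables
    return not free_vars(f)

# p governing a formula: Forall("x", P(Atom("Flies", ("x",)))) is well
# formed, and is a proposition because x is bound.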
IV THE DEDUCTIVE SYSTEM

As commented above, the deductive system is a standard natural deduction system, with the usual five connectives ("not", written ¬; "and", written ∧; "or", written ∨; "if...then...", written ⊃; and "if and only if", written ≡) and both quantifiers ("for all", written ∀, and "there is", written ∃). This will be the least altered portion of the logic.

A. Informal Description

The guarded status of default generalizations is inherited through inferences: from a guarded proposition, it must be possible to infer a guarded version of any conclusion which could be inferred from the unguarded version of the premise. That is, if you could infer from "Roger flies" that "If my dog sees Roger, he will find him fascinating," then from "Presumably Roger flies" you may infer "Presumably, if my dog sees Roger, he will find him fascinating," and so on. But an inference generally takes more than one premise to warrant it. How many of the premises may be guarded generalizations? There are several ways to go on this. We could allow only one guarded premise per inference. But suppose we have "In general, if A then B," and we have "There is reason to believe that A." We would normally feel free to infer "There is reason to believe that B." But this inference has two guarded premises, and without further information, there is no way to avoid using both guarded premises in a single inference and still get the conclusion.

We might instead prefix the conclusion with one p for each guarded premise contributing to it, to give not only a guard but also a sort of certainty measure. Unfortunately, it's a very poor measure of anything interesting. Here's why. First, as just noted, several guarded premises may be needed at once to warrant a single conclusion. Second, a single guarded proposition might be used five or six times in the course of a long derivation. Is the result any less certain for having that proposition enter more than once? By the time a chain of inferences has taken place, the number of ps on the front may greatly exceed the number of distinct guarded premises involved in reaching it. Third, there is no reason to suppose that all the original generalizations are equally reliable. Five extremely strong generalizations probably provide better support for their conclusion than one weak one. If we were concerned with degrees of reliability -- i.e. dealing with probabilities -- we would be dealing with statistical generalizations and not with defaults anyhow. Furthermore, from "Generally, if A then B" and "There is reason to believe that A," we don't conclude "There is reason to believe that there is reason to believe that B." We conclude that there is reason to believe B. Stringing out ps does not reflect any practice present in English usage. I have therefore decided to let the conclusion "inherit" the p if any or all premises involved it, without regard to how many. This is the force of the first rule of inference below. Even with the rule in that form, it would be possible to generate long strings of consecutive ps. The second rule says that if there is more than one consecutive qualification, they may be collapsed into a single one.

B. Development

We retain all the standard rules of inference in L (generalizing them to allow for sentences of LD and not just of L), and add the following two rules.

Rp: Suppose that Ψ = {ψ1,...,ψn} ⊆ LD and Ψ' = {ψ1',...,ψn'} ⊆ LD, where φ ∈ LD and for all i, 1 ≤ i ≤ n, either ψi' = ψi or ψi' = pψi. Suppose further that some rule of inference warrants inferring φ from Ψ. Then from Ψ' you may infer pφ.

RpE: From ppφ you may infer pφ.
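A toy rendering of the two rules follows, with propositions as strings and a leading "p " marking the guard; this representation and the function names are assumptions made only for illustration.

def strip_guard(q):
    return q[2:] if q.startswith("p ") else q

def infer(rule, premises):
    """Rule Rp: if `rule` licenses a conclusion from the unguarded premises,
    the same premises with any subset guarded license the guarded
    conclusion -- one p, no matter how many premises were guarded."""
    conclusion = rule([strip_guard(q) for q in premises])
    if any(q.startswith("p ") for q in premises):
        conclusion = "p " + conclusion
    return collapse(conclusion)

def collapse(q):
    """Rule RpE: consecutive qualifications collapse into one."""
    while q.startswith("p p "):
        q = q[2:]
    return q

# E.g. modus ponens applied to {"p (A -> B)", "p A"} yields "p B",
# not "p p B".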
Given Φ ⊆ LD and ψ ∈ LD, we say that ψ is provable from Φ (or Φ proves ψ, written Φ ⊢ ψ) provided that ψ can be derived from premises in Φ following the rules of inference in the usual and obvious way.

V THE SEMANTICS

This is the portion of the logic which deals with issues of "meaning" and truth. The sense of meaning involved is extremely primitive, as it reflects only the function of referring to objects. For the non-logical terms, we completely beg the question of what they refer to, letting them refer to whatever they refer to in ordinary English. Formal semantics are formal: they deal with what can be said about inheritance of truth conditions on the basis of logical form alone, which throws out the overwhelming majority of what we would ordinarily mean when we talk about meaning.

The semantics given below represents the most complete departure from standard logic in LD. The standard core of the system retains the familiar semantic properties, but across the portion of LD which is not in L, the logic is three-valued. This can be represented in many ways; I have chosen to represent the truth values as non-empty subsets (instead of elements) of the set {t,f} of familiar truth values. Intuitively, {t} means "only true" and similarly {f} means "only false"; these occur where in ordinary logic one would expect t and f. The newcomer is {t,f}, which can best be thought of as "unknown, or both true and false." LD provides an intriguing face for lovers of radical departures: it remains sound while violating the traditional law of non-contradiction.

A. Evaluating the Truth of p-Statements

What does it mean for a default generalization to be true? It does not mean that the statement inside the p is true. It does not even mean that there is no reason to believe that it is false: pφ means something like "There is reason to believe that φ is true, but there is also reason to believe that it is false." That sentence is consistent if unhelpful. Part of what we want our default operator to mean cannot be modeled by formal semantics: neither semantic category nor typicality is a formal property. But we can state certain constraints on how our p operator works. A true proposition is evidence for itself. (If we knew that φ is true, then we would certainly be willing to suppose it; and we cannot distinguish the truth value of pφ depending on whether we know φ or not.) Hence if φ is true, then pφ should be at least true. If φ and ψ are both only true, then clearly their conjunction should be only true, and if either is only false, their conjunction should be only false; similar comments apply to the other connectives.

In particular, having defined the evaluation function e over atomic propositions of LD in the usual way (except for the substitution of {t} and {f} for t and f), we use the following induction clauses:

1. For all φ ∈ LD, if φ has the form pA for A ∈ L, then e(A) ⊆ e(φ).

2. For all φ ∈ LD, if φ has the form ppψ for some ψ ∈ LD, then e(pψ) = e(φ).

3. For all φ, ψ ∈ LD, we have

e(¬φ) = {f} if e(φ) = {t}; {t} if e(φ) = {f}; {t,f} if e(φ) = {t,f}.

e(φ ∧ ψ) = {f} if t ∉ e(φ) or t ∉ e(ψ); {t} if f ∉ e(φ) and f ∉ e(ψ); {t,f} otherwise.

e(φ ∨ ψ) = {f} if t ∉ e(φ) and t ∉ e(ψ); {t} if f ∉ e(φ) or f ∉ e(ψ); {t,f} otherwise.

e(φ ⊃ ψ) = {f} if f ∉ e(φ) and t ∉ e(ψ); {t} if t ∉ e(φ) or f ∉ e(ψ); {t,f} otherwise.

e(φ ≡ ψ) = {f} if e(φ) ∩ e(ψ) = ∅; {t} if e(φ) = e(ψ) = {t} or e(φ) = e(ψ) = {f}; {t,f} otherwise.

e(∀α φ) = {f} if there is a u ∈ U such that e(φ:u/α) = {f}; {t} if e(φ:u/α) = {t} for all u ∈ U; {t,f} otherwise.

e(∃α φ) = {f} if e(φ:u/α) = {f} for all u ∈ U; {t} if there is a u ∈ U such that e(φ:u/α) = {t}; {t,f} otherwise.

4. If there is a φ ∈ LD such that e(φ) = {f} but t ∈ e(pφ), then for all ψ ∈ LD, f ∈ e(pψ).

(Notational remark: e(φ:u/α) means the result of evaluating φ, treating α as if it were a constant with e(α) = u.)
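The connective clauses of (3) can be checked mechanically; the following is a minimal sketch with the three truth values as frozensets. The quantifier clauses and the constraints (1), (2), and (4) on p, which restrict whole interpretations rather than single connectives, are omitted.

T, F, TF = frozenset("t"), frozenset("f"), frozenset("tf")

def neg(a):
    return {T: F, F: T, TF: TF}[a]

def conj(a, b):
    if "t" not in a or "t" not in b:    # some conjunct is only false
        return F
    if "f" not in a and "f" not in b:   # both conjuncts are only true
        return T
    return TF

def disj(a, b):
    if "t" not in a and "t" not in b:
        return F
    if "f" not in a or "f" not in b:
        return T
    return TF

def implies(a, b):
    if "f" not in a and "t" not in b:   # only-true antecedent, only-false consequent
        return F
    if "t" not in a or "f" not in b:
        return T
    return TF

# conj(TF, T) == TF while conj(TF, F) == F, matching clause 3.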
The standard equivalences follow trivially from these definitions. Now we define satisfaction as follows: for all interpretations i = (U,e) of LD, and for all φ ∈ LD, i satisfies φ (written i ⊨ φ) if and only if t ∈ e(φ).

As we indicated from the outset, the characteristic we are most concerned with for logics intended for A.I. implementations is soundness. The following follows trivially from the soundness of standard first order predicate logic:

Theorem. The logic LD is sound, i.e. for all Ψ ⊆ LD, φ ∈ LD, if Ψ ⊢ φ then Ψ ⊨ φ.

VII DISCUSSION

A. Relevance Logic

As noted above, the apparently paradoxical features result from embedding a truth-functional view of implication in a deductive system. A system based on relevance logic will permit deductive inferences which are a subset of those allowed under the traditional view presented here. Martins (1983) has developed a variant of relevance logic with belief revision, which has been implemented in SNePS (Shapiro, 1979). An inference system including default operators as described in this paper is being implemented on this belief revision system. Limitations of space prohibit a detailed discussion here; for details, see Nutter (1983b).

B. Other Issues

It is important not to overestimate what a logic for default reasoning can do once we have one. As Israel (1980) points out, forming useful generalizations and resolving conflicts among conclusions of default reasoning both exceed the scope of logic. To those who wonder what help the system will be once it has deduced "pφ ∧ p¬φ", the answer is, none at all. If the non-logical portion of the system contains heuristics to the effect that conflicting evidence indicates a need for further investigation, and if your system further contains some subsystem for investigating, then the ability to deduce statements of the kind above can be used to trigger investigation; but this goes beyond the logic alone. It does not follow that such a logic is useless: it will not constitute a reasoning system; instead it is a tool for the system to use. The logic presented here makes no pretense to philosophical depth. As a logic, it is trivial; as a philosophical view of generalization, it is perhaps the shallowest possible. But it can be used to form systems modeling deep, non-trivial views. From one point of view, that is what logic is for.

VIII ACKNOWLEDGEMENTS

Many thanks to Stuart Shapiro and to the SNePS Research Group for their many helpful comments and suggestions. This research was carried out in the Department of Computer Science, State University of New York at Buffalo, Amherst, New York.

REFERENCES

[1] Aronson, A. R., Jacobs, B. E., and Minker, J., "A note on fuzzy deduction." JACM 27:4 (1980).

[2] Davis, M., "The mathematics of non-monotonic reasoning." Artif. Intell. 13:1-2 (1980) 73-80.

[3] Duda, R. O., Hart, P. E., Nilsson, N. J., and Sutherland, G. L., "Semantic network representations in rule-based inference systems." In Pattern-Directed Inference Systems, D. A. Waterman and F. Hayes-Roth, editors, Academic Press (New York, 1978) pp. 203-223.

[4] Israel, D. J., "What's wrong with non-monotonic logic?" Proc. AAAI-80, Palo Alto, California, 1980, pp. 99-101.
[5] Jaskowski, S., "On the rules of supposition in formal logic." Studia Logica 1 (1934).

[6] Martins, J. P., "Reasoning in Multiple Belief Spaces," Technical Report 203, State University of New York at Buffalo, Amherst, New York, June 1983.

[7] McDermott, D. V. and Doyle, J., "Non-monotonic logic I." Artif. Intell. 13:1-2 (1980) 41-72.

[8] McDermott, D., "Non-monotonic logic II." JACM 29:1 (1982) 33-57.

[9] Nutter, J. T., "Defaults revisited, or 'Tell me if you're guessing.'" Proc. Cog. Sci. 4, Ann Arbor, Michigan, August 1982, pp. 67-69.

[10] Nutter, J. T., "What else is wrong with non-monotonic logics? Representational and informational shortcomings." Proc. Cog. Sci. 5, Rochester, New York, May 1983.

[11] Nutter, J. T., "Default reasoning in A.I. systems" (1983b), draft.

[12] Reiter, R., "A logic for default reasoning." Artif. Intell. 13:1-2 (1980) 81-132.

[13] Shapiro, S., "The SNePS semantic network processing system." In Associative Networks, N. V. Findler, ed., Academic Press (New York, 1979) pp. 179-203.
DATA DEPENDENCIES ON INEQUALITIES

Drew McDermott
Department of Computer Science, Yale University

Abstract: Numerical inequalities present new challenges to data-base systems that keep track of "dependencies," or reasons for beliefs. Care must be taken in interpreting an inequality as an assertion, since occasionally a "strong" interpretation is needed, that the inequality is the best known bound on a quantity. Such inequalities often have many proofs, so that the proper response to their erasure is often to look for an alternative proof. Fortunately, abstraction techniques developed by data-dependency theorists are robust enough that they can be extended fairly easily to handle these problems. The key abstractions involved are the "ddnode," an abstract assertion as seen by the data-dependency system, and its associated "signal function," which performs indexing, re-deduction, and garbage-collection functions. Such signal functions must have priorities, so that they don't clobber each other when they run.

1. The Problem

Programs in the field of artificial intelligence often deal with sophisticated (if small) data bases. These data bases can contain predicate-calculus formulas of greater-than-usual complexity, and often perform deductions of new formulas. Such a data base is used to keep track of an evolving problem solution, during which tentative assumptions are made and withdrawn. Since the data base is responsible for routine deductions, it ought to be responsible for undoing them when assumptions change. A useful mechanism for aiding this process is the data-dependency note, a record of the reasons for belief in a formula. In the simplest case, the dependency note records that a formula follows from a set of other formulas. The note is usually diagrammed as a circular node, with arcs from the justifying beliefs to the node, and an arrow from the node to the supported belief, or justificand. See Figure 1-1. When the justificand is inferred, it is the responsibility of the "inference engine" to record this dependency. When a justifier is erased, the justificand must be erased as well, unless there is an independent justification for it. Detecting this is done by a reason maintenance system (RMS) [Doyle 80]. Another subtlety is that a belief may depend on the absence of another belief. More on this below.

Figure 1-1: A Data-Dependency Note

Special problems are raised when you try to extend these methods to numerical inequalities. Such inequalities arise in the domains of spatial and temporal reasoning. Often a series of facts about objects and events can be captured as a set of fuzzy maps, in which the coordinates (and other parameters) of objects are known to within an interval. For instance, if an event E2 occurs between 5 and 6 hours after event E1, we can record that the T coordinate of E2 in the frame of E1 is [5,6]. (Assuming the scale of E1 is "1 hr per time unit.") I will write such "coordinate terms" as (T E2 E1). The coordinate values interface to other facts in a natural way. For instance, (T E2 E1) = [5,6] may have been inferred from these facts: (1) E1 is the beginning of a painting episode EP. (2) E2 is the end of EP. (3) Paint takes between 5 and 6 hours to dry. And, in turn, from (T E2 E1) = [5,6], we may infer other facts, like this: If E1 occurs at 4 o'clock, then E2 will occur after sunset. I call a statement like (T E2 E1) = [5,6] a term-value statement.

*This work was supported by the National Science Foundation under contract MCS-8203030.
(I will focus on terms of the form (parameter object frame) in this paper, but many of the same considerations apply to terms computed from these, like (DISTANCE A B FRAME).)

The standard RMS algorithms are inadequate to update data bases involving term-value statements. The problem is that RMS algorithms simply decide whether to keep or delete a belief. In the case of an inequality, it is often worthwhile to invest some effort in recomputing the quantities involved, and see if the inequality is still true. If it is, everything it supports can stay around.

Another problem is the logical status of term-value statements. We can distinguish "weak" and "strong" interpretations of such a statement. The weak interpretation of P=[l,h] is that the true value of P lies in the interval [l,h]. (In most applications, it is irrelevant whether the endpoints are included in this interval, except for point intervals.) The strong interpretation is that no narrower interval can be inferred. Weak interpretations are more common, but there are cases that require the strong interpretation.

Strong term-value statements are closely intertwined with non-monotonic inference. A strong statement depends upon the absence of any more informative weak value statement about the same term. As I mentioned, such dependency nets have already been investigated. They may be diagrammed by adding "signs" to the arcs from justifiers, indicating the sense of the dependency. For instance, Figure 1-2 indicates that C is believed ("IN") if A is IN and B is absent from the data base ("OUT").

Figure 1-2: A Non-Monotonic Data-Dependency Note

But the relationship between weak and strong values cannot be handled like this, because there are potentially an infinite number of weak values that could impact on a strong value.

The final problem to describe is the origin of term-value statements. We can distinguish two sources: computational and non-computational. The former label applies to term values computed from other term values; the latter, to all other inferences. A computational inference involves a fact of the form

t0 = f(t1, ..., tn)

where t0, ..., tn are numerical-valued terms, and f is a computable function. I will use the word derivation for such an equality. Note that a typical equality gives rise to several derivations. E.g., A=B+C gives rise to three:

A=B+C
B=A-C
C=A-B

I will not worry about generating derivations from other equalities, but just assume derivations are generated in this form when required. Derivations are like Steele and Sussman's "constraints" [Sussman 80] with one important difference: they are supposed to be applied using interval arithmetic. That is, if B=[3,4] and C=[-2,-1], then from the derivation A=B+C, we can infer A=[1,3]. In general, nothing prevents the creation of circular derivations, which may be difficult to make use of. Such a derivation cannot be used by substituting values of right-hand side quantities, but may require solving simultaneous equations. In this paper, I will assume that such derivations do not arise; that is, that the only circular derivation that can be obtained by substitution is of the form A=A. This is a reasonable assumption in our spatial-reasoning domain, where derivations come from coordinate transformations, which do have this property. If you translate the X coordinate of A from one frame to another and back, you wind up with the term you started with.
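Applying a derivation with interval arithmetic is mechanical; a minimal sketch for sum and difference derivations follows, reproducing the A=B+C example. Representing intervals as pairs is an assumption made for illustration.

def add(x, y):      # interval sum: [xl+yl, xh+yh]
    return (x[0] + y[0], x[1] + y[1])

def sub(x, y):      # interval difference: [xl-yh, xh-yl]
    return (x[0] - y[1], x[1] - y[0])

B, C = (3, 4), (-2, -1)
A = add(B, C)                  # derivation A = B + C gives A = (1, 3)
assert A == (1, 3)
assert sub(A, C) == (2, 5)     # B = A - C yields a (wider) bound on B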
The spatial-reasoning domain is simpler in this respect, but more complex in others. One complexity is that there are in the abstract an infinite number of derivations for a term; any term can be derived from the value of that term in another frame of reference, plus the values of the relative coordinates of the two frames. Even in one dimension, we have

(X A F1) = (X A F2) * (SCALE F2 F1) + (X F2 F1)

for all objects A and reference frames F1 and F2. But at any time, most such derivations will give useless answers (very wide intervals). We must provide heuristic means of finding good derivations of a quantity.

In summary, therefore, our problems are these:

1. Avoiding erasing inequalities after minor recomputation
2. Managing the non-monotonic relationship between weak and strong term-value statements
3. Keeping track of useful derivations of quantities

I should point out that none of these problems is found only in numerical inference. The methods described here may be adaptable to other applications. But the numerical situation makes the problems obvious and urgent.
If the wval is erased, the signal function for A >4 might ask to recompute A, see if A>4 is still true, and if so, resupport it with the new wval. Whenever the RMS adjusts the IN/OUT labels, signal functions will go off all over the data base, like popcorn (or like “demons,” [Charniak 721). For instance, suppose we have three ddnodes like those in Figure Nl (INCOME FRED) E [60000, lOOOOO] i N2 (INCOME FRED) > 50000 I N3 (RICH FRED) Figure 2-P: Nodes Being Updated The signal function Fl for Nl might say: If I go OUT, free my storage. The function F2 for N2 might say, If I go OUT, recompute (INCOME FRED) and resupport me if necessary. The function F3 for N3 might say, If I go OUT, remove me from the data base. Suppose Nl goes OUT. Then all three signal functions get called. The resulting chaos could cause inefficiency or even error. Suppose F3 goes first, and unindexes N3. Then F2 might decide N2 should be IN after all, which would require 1~3 to be re-indexed. The solution is for most signal functions to do nothing immediately, but queue up functions to do the real work on a system of queues (or ‘{agendas”) with different priorities. In the example, F2 should get higher priority than F3. That way, by the time F3 runs, F2 may have already patched things up so that F3 need do nothing. One way to think about this (due to David McAllester [McRllester 821) is that each queue maintains an “invariant,” an assertion that is true when the queue is empty, as it normally is. Lower-priority queues can thus assume the invariants associated with higher-priority queues. F3 can assume that “all OUT inequalities are really unsupported,” because signal functions like F2 have ensured this. Intuitively, the job of a queued function is to bring its queue’s invariant “closer” to being true. When the queue is empty, the invariant is true. A large part of applying a data-dependency system is choosing queue priorities. In particular, for the application at hand, we are going to need a queue level at which signal functions can call for memterms to be recomputed. Recomputation can mean two things: re-applying existing derivations, or finding new ones. I will say more about this shortly. with a Figure 2-2 shows the ddnodes associated memterm. ~ed.etb 3 A G c 1) 33 -b-l c ‘3 Figure 2-2: Derivation and Wvals I use a triangular node for an active derivation, that is, one that has been generated and stored in a memterm. The inputs are connected to the base of the triangle, the output to its apex. A derivation is an assertion (i.e., it looks like a ddnode to the RI\/IS), which connects memterms. Memtcrms have wvals, which are also assertions. A wval will be supported by a dd-note which includes the derivation used to compute it. In addition to the ddnodes shown in the figure, there are some other useful propositional entities associated with memtcrms. First, we need a ddnode, the derivedness-node, which asserts that every useful derivation is IN. We need a second ddnode, the completeness-stode for the memterm, which asserts that every IN derivation is actually contributing a wval. In addition, the completeness node depends on the the derivedness node. Maintaining the derivedness ddnode is the responsibility of the modules using the memterm system. The first time the value of a memterm is asked for, all its derivations are found (usually heuristically), and the derivedness node is brought IN. It usually stays in from then on, but outside modules can force it OUT if they become aware of new derivations. 
Maintaining the completeness ddnode is the responsibility of the memterm system itself. Whenever a new wval is added to a memterm, all of the memterms that are derived from it are marked "incomplete." That is, their completeness nodes are forced OUT. Whenever the value of a memterm is sought, if it is incomplete, then its derivations are used to re-derive new wvals, and it is marked complete again.

Note that this ddnode is being used essentially as a Boolean flag. Why not just use a Boolean, and save some storage? Because a ddnode can participate in data dependencies; that is, it can support other propositional entities. In the present context, the main propositional entity is the strong value ("sval"). Most memterms will not have a strong value most of the time, but if one is needed, we create a ddnode for it, and support that ddnode with the extant wvals, plus the completeness node for the memterm.

Normally, not much happens when a wval or sval goes OUT. But it is simple in this system to attach a signal function to a ddnode, such as an inequality, supported by such a value. Such a signal function can ask that an attempt to recompute the memterm be made. The result will be that all wvals will be up to date, the completeness node will be brought IN again, and, if necessary, a new sval will be created.

The user of a numerical inference system usually does not want to perceive it as numerical as such. He would rather treat an inequality as just another assertion. In the classic treatments of data dependencies [Doyle 79, deKleer 77], it is assumed that all likely proofs for an assertion are represented explicitly. If any is unsatisfied, then the assertion should be deleted. This is, of course, not true for beliefs about quantities, which often have an infinite number of equally reasonable potential proofs. X > 4 may be believed because X = 5, or X ∈ [6,7], or .... We cannot generate all possible proofs, but would like to make it look to many inference programs as if that's exactly what happens. This suggests organizing the queue hierarchy as follows:

High priority: recomputation of terms supporting important inequalities
Intermediate priority: user's queue levels
Low priority: storage allocation; in particular, reclaiming storage of OUT ddnodes

The reason for giving term recomputation a high priority is so that a term will be recomputed if possible before anyone examines it. This allows us to simulate keeping an infinite number of potential proofs around, because the recomputer will look for another IN proof on demand. The user's code can then always assume the invariant "If an inequality is OUT then there is no reason to believe it." The reason to put storage reclamation at such a low priority level is to allow "experimenting" with the data base. The user may wish to try various hypotheses before adopting a final one. During this experimentation, ddnodes may temporarily become "dead," that is, OUT and not supporting anything. By "locking" the low-priority queue, we can postpone garbage-collecting these nodes until they are worthless for certain.
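One possible rendering of such an agenda, with lockable priority levels, is sketched below. The priority names and the locking convention are assumptions; the point is that a signal function enqueues its real work rather than doing it on the spot.

import heapq
from itertools import count

RECOMPUTE, USER, RECLAIM = 0, 1, 2      # high, intermediate, low priority

class Agenda:
    def __init__(self):
        self.heap, self.tie = [], count()
        self.locked = set()              # e.g. lock RECLAIM while experimenting

    def enqueue(self, priority, thunk):
        heapq.heappush(self.heap, (priority, next(self.tie), thunk))

    def run(self):
        """Drain work in priority order, so lower levels can assume the
        invariants maintained by higher ones; locked levels are postponed."""
        deferred = []
        while self.heap:
            prio, _, thunk = heapq.heappop(self.heap)
            if prio in self.locked:
                deferred.append((prio, thunk))
            else:
                thunk()                  # may enqueue further work
        for prio, thunk in deferred:
            self.enqueue(prio, thunk)

# A signal function would do nothing immediately except, say:
#   agenda.enqueue(RECOMPUTE, lambda: recompute(memterm))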
3. Results and Conclusions

The code for this system has been written, and is in the process of being debugged. It is written in NISP, a portable Lisp macro package that runs in ZetaLisp, Interlisp, Franz, and T. For now, the most likely host dialect is T, a Scheme-like Lisp that runs on Apollos and Vaxes.

Data-dependency notes have a mixed reputation. Most people appreciate the need for a flexible method of updating a data base. However, they have trouble seeing exactly how dependency notes meet this need. Some of the literature (e.g., [Doyle 79, deKleer 77]) seems to imply that data dependencies can, by making and retracting assumptions, take over most of the control structure of a practical program. This is unlikely for all but the simplest applications. In practice, data dependencies are simply one tool among many, which perform two tasks quite well: erasing beliefs that are no longer justified, and queueing "signal functions" to rethink their justifications. The conclusion of the present paper is that this ability extends naturally to beliefs involving numbers, specifically, beliefs about the intervals quantities lie in.

ACKNOWLEDGEMENTS: Thanks to Rod McGuire for a lot of ideas.

References

[Charniak 72] Charniak, E. Towards a Model of Children's Story Comprehension. Technical Report 266, MIT Artificial Intelligence Lab, 1972.

[deKleer 77] de Kleer, Johan; Doyle, Jon; Steele, Guy L.; and Sussman, Gerald J. Explicit control of reasoning. Memo 427, MIT AI Laboratory, 1977. Also in Proc. Conf. on AI and Programming Languages, Rochester, which appeared as SIGART Newsletter no. 64, pp. 116-125.

[Doyle 79] Doyle, J. A truth maintenance system. Artificial Intelligence 12:231-272, 1979.

[Doyle 80] Doyle, Jon. A model for deliberation, action, and introspection. TR 581, MIT AI Laboratory, 1980.

[McAllester 82] McAllester, David. Reasoning Utility Package User's Manual, Version One. Memo 667, MIT AI Laboratory, 1982.

[Sussman 80] Sussman, G. J., and Steele, G. L. CONSTRAINTS -- A Language for Expressing Almost-Hierarchical Descriptions. Artificial Intelligence 14(1):1-39, August 1980.
by William J. Long*
Clinical Decision Making Group, Laboratory for Computer Science
Massachusetts Institute of Technology, Cambridge, Massachusetts

Abstract

The reasoning needed for diagnosis and patient management in a medical domain requires the ability to determine both the aspects of the patient state that are definitely known and those that are possible given what is known about the patient. This paper discusses a mechanism for including the time constraints of causal relationships in the representation and the increased discriminatory power of the reasoning mechanisms when the time relationships are used appropriately. It is further argued that in such a domain where time bounds are weak there is often more information in the relationships between times than in the time values themselves. Thus, it is often necessary to reason from the relationships rather than by comparing time values. A program utilizing the mechanisms outlined is currently under development.

Introduction

The issues of time, causation, and change are central to reasoning in many domains. Since processes take place over time, accounting for the relationships of the changes is essential for the understanding of the processes. The problems of causation and change have been encountered in such expert-systems domains as geology, molecular genetics, naive physics, and medicine [Simmons, 1982; Stefik, 1981; Forbus, 1982; Patil et al., 1981]. While considerable research has been focused on causation, the subject has not been exhausted. Other research has been directed specifically toward representing and reasoning about time relationships (see for example [Allen, 1981; Kahn and Gorry, 1977; McDermott, 1981]). These efforts have been concerned primarily with answering questions about a history of events, including uncertain time bounds [Kahn and Gorry, 1977] and future events [McDermott, 1981]. None of these efforts have looked at the problem of using knowledge about classes of events to answer questions about what events might or must have taken place, especially when there are interactions among processes. The medical domain offers a particular challenge because the patient state has to be inferred from the data, requiring knowledge of the processes involved. Older programs such as Internist [Pople, 1977] and PIP [Pauker et al., 1976] have avoided this issue by treating the possible presentations probabilistically. However, such a methodology neglects time information that may make it possible to discriminate between competing states. This is especially true in programs that reason about patients during the treatment process when the state of the patient is changing.

An example from cardiology will illustrate the timing relationships that can arise.** If a patient has heart failure caused by a weakened cardiac muscle, the low cardiac output causes water retention which increases the blood volume. The high blood volume can cause edema to develop (abnormal fluid accumulation in the lungs, legs, and possibly other areas). If a diuretic is given to clear the excess fluid, it acts by decreasing the blood volume. As the blood volume decreases, the edematous fluid returns to the circulation. However, it may take time to mobilize the fluid, especially in the legs, and if the diuretic reduces the blood volume more than is appropriate, the low blood volume may cause a further reduction of the cardiac output.

Not all of these processes are instantaneous. In fact there is wide variation in the amount of time taken. If the patient has heart failure or low blood volume, the immediate consequence is low cardiac output. If there is water retention, it will still be a matter of days before the patient has taken in enough fluid for the blood volume to be high. If the blood volume is high - say it happened rapidly from excessive fluid therapy - it would still take at least a few hours for edema to develop. Conversely, once edema has developed it may take days for the fluid to return to the blood stream even though the blood volume is no longer high. The diuretic also takes hours to remove the excess fluid from the blood volume.

Because of these time relationships, there are certain combinations of states that simply could not occur. For example, edema would not exist unless heart failure had existed for days or the patient had been given fluid therapy at least hours earlier. However, it is still possible for edema to be present in the patient at the same time blood volume is low if diuretics had been administered hours earlier.

*This investigation was supported in part by National Library of Medicine Grant No. 5 P01 LM03374-04, in part by the Whitaker Health Sciences Fund, and in part by BRSG S07 RR 07047-17, awarded by the Biomedical Research Support Grant, Division of Research Resources, National Institutes of Health.

**The examples used should not be taken to represent medical reality since there are many factors not included and the relationships of the factors included are greatly simplified, but the classes of relationships represented are characteristic of the larger problem and are sufficient to present the issues.
However, it may take time to mobilize the fluid, especially in the legs, and if the diuretic reduces the blood volume more than is appropriate, the low blood volume may cause a further reduction of the cardiac output. Not all of these processes are instantaneous. In fact there is wide variation in the amount of time taken. If the patient has heat-t failure or low blood volume, the immediate consequence is low cardiac output. If there is water retention, it will still be a matter of days before the patient has taken in enough fluid for the blood volume to be high. If the blood volume is high - say it happened rapidly from excessive fluid therapy - it would still take at least a few hours for edema to develop. Conversely, once edema has developed it may take days for the fluid to return to the blood stream even though the blood volume is no longer high. The diuretic also takes hours to remove the excess fluid from the blood volume. Because of these time relationships, there are certain combinations of states that simply could not occur. For example, edema would not exist unless heart failure had existed for days or the pattent had been given fluid therapy at least hours earlier. However, it is still possible for edema to be present in the patient at the same time blood volume is low if diuretics had been administered hours earlier. ** The examples used should not be taken to represent medical reality since there are many factors not Included and the relationships of the factors included are greatly simpllflcd, but the classes of relabonshlps represented are characteristic of the larger problem and are suffloent to present the issues. 251 From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. Representation of State, Time, and Causation The appropriate representation for a problem depends on the kinds of reasoning to be done and the characteristics of the domain. The reasoning we are concerned with is found in both diagnosis and patient management - we need to determine what is known about the patient state from the available information, including what is definite and what is uncertain but possible. It is also useful to assign likelihood measures to possible conclusions, but that is beyond the scope of this paper - the determination of possible and definite in this paper can be viewed as a controlled framework in which a likelihood mechanism could operate. In medicine the meanings of measurements of the patient state tend to vary from patient to patient. While “normal” ranges are identified for many parameters, they are only confidence intervals for the population. The important determination is the relationship between parameters. That is, a parameter is abnormal because it has an effect on something else. Physicians tend to express this by using qualitative measures for parameter values (e.g., low cardiac output) even when quantitative measures are available. Thus, we have chosen to represent parameter states qualitatively with the values defined in terms of other parameters. For example, blood volume may be low, normal, or high. A low blood volume is a blood volume that will result in low cardiac output, whatever the value might be for an individual patient. Defining the qualitative values in this way means that it is usually not possible to associate exact numbers with the values - there are no universally valid “boiling points” as in physical systems [Forbus, 19821. 
This places more burden on the interpretation of patient measurements, but simplifies the reasoning problem.

Similarly, time relations in medicine tend to be inexact. For example, there is no exact time when low cardiac output started. Although one can say it has been present for days but not weeks, more precision is impossible. Expressing the templates for causality also requires that the range of possible time delays between cause and effect be represented. The bounds of these ranges are likewise difficult to express in exact terms; it takes from hours to days to mobilize edema. Rather than require artificially exact bounds to be specified, we use qualitative times related by a partial ordering. Thus the time bounds of a relation can be viewed as a qualitatively specified confidence interval.

Causation in the medical domain has other characteristics. 1) The factors influencing a particular parameter are limited and each can be suitably represented. Thus, it is appropriate to reason in terms of a closed domain of possible influences. 2) Some of the cause-effect relations are only probabilistic at the present state of medical knowledge. These will be represented as possible, without dealing with the likelihood. 3) The domain is also dominated by stable feedback systems. As a result, it is computationally more reasonable to accept the tendency of the systems to return to stable states as given and only represent the influences on the abnormal states. Thus, the representation of potential influences on an abnormal parameter value includes the possible causes for the value with the time relationships between cause and effect, the possible corrections for the value with those time relations, and the time requirements for the parameter to return to normal in the absence of causes or corrections. The causes and corrections can further be divided into those that make the state possible and those that will definitely result in the state. This yields the following template for representing the causes of a state:

P+ : causal conditions that make the state possible
D+ : causal conditions that make the state definite
P- : corrective conditions that possibly stop the state
D- : corrective conditions that definitely stop the state
relax : time range for the state to end after the cause ends

The causes for high blood volume within the simplified domain are represented as follows:

P+ : retain water (t1), fluid therapy (0)
D+ : retain water (t2)
P- : no fluid therapy and lose water (t3)
D- : no fluid therapy and lose water (t4)
relax : (t5 t6) ; minimum and maximum relaxation times

Retaining water for at least time t1 could cause high blood volume, as could fluid therapy for any amount of time. Retaining water for time t2 would definitely cause high blood volume. If the patient stops retaining water and does not receive fluid therapy, the blood volume could return to normal possibly by time t5 and certainly will by t6. Actively losing water will hasten this process, restoring normal blood volume possibly by time t3 and certainly by t4. This representation of cause extends the notion of continuous causation and thresholds to include the temporal relationships between cause and effect [Rieger and Grinberg, 1977]. The resulting representation is sufficient to represent the properties needed for answering questions about the patient state; one relational rendering of this template is sketched below.
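The template translates naturally into a relational encoding. The following Prolog sketch is illustrative only: the predicate names (p_plus/3, d_plus/3, and so on) and the particular ordering facts are assumptions of this rendering, not the paper's actual notation.

% Causes and corrections for high blood volume, with qualitative times.
p_plus(high_blood_volume,  retain_water,                   t1).  % possible cause
p_plus(high_blood_volume,  fluid_therapy,                  t0).  % no delay
d_plus(high_blood_volume,  retain_water,                   t2).  % definite cause
p_minus(high_blood_volume, [no_fluid_therapy, lose_water], t3).  % possible correction
d_minus(high_blood_volume, [no_fluid_therapy, lose_water], t4).  % definite correction
relax(high_blood_volume, t5, t6).      % minimum and maximum relaxation times

% The information lies in the partial ordering of the qualitative times,
% not in numeric values (these ordering facts are assumed for illustration).
precedes(t0, t1).
precedes(t1, t2).
precedes(t3, t4).
precedes(t5, t6).

% A time Ta is known to be no later than Tb if a chain of orderings links them.
no_later(T, T).
no_later(Ta, Tb) :- precedes(Ta, Tc), no_later(Tc, Tb).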
Many rules about causation are general. For example, all abnormal states have causes; the effect depends on the sum of the influences; and causes must start prior to or simultaneously with the effect. In this domain there are additional properties useful as rules for reasoning: 1) The cause and the effect must overlap, i.e., the cause cannot end before the effect starts. 2) The state changes must be from the adjacent states, e.g., for retaining water to cause high blood volume the blood volume must be normal already. Since the causes usually also correct the opposing abnormal state, this is useful for keeping the time bounds more precise where possible. 3) Causation, once started, continues until there is some change either in the cause or in the corrective influences. Thus, if high blood volume were causing edema, it would continue to cause edema until the cause were changed, even though high blood volume does not necessarily cause edema (in this model). (In the actual domain it is also necessary to represent precipitating factors: factors that can mean the difference between causing and not causing the effect, but are incapable of causing the effect themselves. The extension follows without difficulty.) A sketch of the first two rules follows.
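As a concrete rendering of the first two rules, the sketch below uses interval(Start, End) terms with numeric stand-ins for the qualitative bounds; as the paper stresses, the real mechanism reasons from ordering relationships rather than from values, so this is only an illustration with hypothetical predicate names.

% Rule 1: a cause must start no later than its effect and must still be
% in force when the effect starts (cause and effect overlap).
admissible_cause(interval(Cs, Ce), interval(Es, _Ee)) :-
    Cs =< Es,     % causes start prior to or simultaneously with the effect
    Ce >= Es.     % the cause cannot end before the effect starts

% Rule 2: state changes come from the adjacent state; e.g., blood volume
% must already be normal for water retention to make it high.
adjacent(low, normal).
adjacent(normal, high).
can_change(From, To) :- adjacent(From, To).
can_change(From, To) :- adjacent(To, From).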
An Example

The causal relationships for the example are sketched in Figure 1 (without the corrections or time relations). [Figure 1. Example Causal Relations: arrows among weak heart, heart failure, low blood volume, low cardiac output, water retention, high blood volume, and edema, marked as possible cause, definite cause, or possible correction (not all shown).] Given the corresponding relationships represented by the formalism, the program is able to determine what is definitely known about the patient by propagating the minimum conclusions from the causal relationships. If the patient presently has edema and has not had fluid therapy, the conclusions (somewhat overconstrained by the diagram) are illustrated in Figure 2 and deduced as follows:

edema ⇒ high blood volume
High blood volume is the only available cause. It must exist or have existed to account for the edema present. The time from 0 to 6 (t0-6) in the figure is the maximum relaxation time for the edema. That is, the high blood volume had to be true at least that recently or the edema would definitely have disappeared by now. The difference between times 8 and 9 (t8-9) is the minimum time for which high blood volume must persist to cause edema. It is not specified when the edema started, but the high blood volume must have started at least that long before the start of the edema.

[Figure 2. Edema without Fluid Therapy: the deduced states laid out on a qualitative time line running from time 10 in the past to now.]

high blood volume ∧ fluid therapy: not given ⇒ water retention
The water retention must not have ended more than t6-7 prior to the end of the high blood volume and must have started at least t9-10 prior to the beginning of the high blood volume. Since to cause high blood volume the blood volume must be normal at least for a time, normal blood volume must exist at least t9-10 prior to the high blood volume. (Normal blood volume and retaining water do not have to start at the same time. They just have the same minimal time relationship to high blood volume.)

water retention ⇒ low cardiac output
The other causes have no delay and are therefore simultaneous with the water retention.

low cardiac output ⇒ heart failure ∨ low blood volume
However, low blood volume is not consistent with the normal and high blood volume that must overlap the required time period.

heart failure ⇒ weak heart
The relaxation time for a weak heart is infinite. Thus, while the weak heart is deduced true from t7 to t10 to account for the heart failure, once it exists it must remain. If we assume that nothing is known about therapy in the patient other than the lack of fluid therapy, then nothing more can be concluded. The result of this deduction is a representation of the set of facts that must hold given the edema.

The example has been extended in Figure 3 to show that it is possible for the patient to have low blood volume even though edema is present. [Figure 3. Edema and Low Blood Volume: the same time line extended with water loss and low blood volume.] If we assume the patient has not received digitalis (which could correct the heart failure), the heart failure and low cardiac output must have continued until time now, because the cause has not changed.

low blood volume ⇒ water loss
The patient must have been losing water at least until time t1 ago, having started losing water and having had a normal blood volume at least t3-4 prior.

water loss ⇒ diuretic effect
diuretic effect ⇒ diuretic
The program can determine a specific time interval during which diuretic therapy would have to have been given for there to be low blood volume now. The more common deduction of the possible effects from known times of therapy could also be made.

For the low blood volume to be consistent with the edema, the end of the high blood volume must be before the start of normal blood volume (and likewise for water retention and loss). Therefore, since the maximum time for the edema to be removed is longer than the minimum time that water loss takes to cause low blood volume, it is possible for both to exist simultaneously.

Discussion

There are three observations to be made about the deductions facilitated by the mechanisms presented. First, the mechanisms are capable of discrimination that would be impossible without the use of the time relationships. The ability to filter out the impossible from among the possible contingencies allows a program to have more focused reasoning about the legitimate possibilities. Second, the mechanisms permit reasoning both about the definite and about the possible. Thus, the same mechanism is useful for determining what is known about the patient state and for deducing the possible effects of therapy. Finally, let us consider a little more deeply the reasoning that eliminated low blood volume as a cause of the low cardiac output. All that is known about the time of the high blood volume is that it must have begun at least t8-9 (say, hours) ago but could have begun and ended t0-6 (say, days) ago. Similarly, the cause of the low cardiac output began at least t8-9 + t9-10 (hours) ago but could have begun and ended t0-6 + t6-7 (days) ago. It is impossible to conclude that the low blood volume is incompatible with the high blood volume by comparing these ranges. The important information is the temporal relationship between cause and effect. The cause of the low cardiac output must continue until the low cardiac output ends, which is when the retaining of water ends, which is necessarily after the blood volume is high. Thus, any representation of the times of these states must preserve both the constraints on the times and the relationships between the states to make all of the reasoning possible.

Acknowledgements

The work reported here is part of the development of a program to assist the physician in the diagnosis and management of heart failure [Long et al., 1982]. The ideas are presented as separate from that work, but the results will become a part of it as it progresses.
For an understanding of some of the problems and issues confronting the physician, I thank our collaborators on that project, Drs. Criscitiello, Naimi, and Pauker of New England Medical Center Hospital.

References

[1] Allen, J. F., "Maintaining Knowledge about Temporal Intervals," University of Rochester Department of Computer Science TR 86, January 1981.
[2] Forbus, K. D., "Qualitative Process Theory," MIT AI-Memo 664, February 1982.
[3] Kahn, K. and Gorry, G. A., "Mechanizing Temporal Knowledge," Artificial Intelligence 9 (1977), 87-108.
[4] Long, W., Naimi, S., and Criscitiello, M. G., "A Knowledge Representation for Reasoning about the Management of Heart Failure," Computers in Cardiology Conference, October 1982.
[5] McDermott, D., "A Temporal Logic for Reasoning About Processes and Plans," Yale University Department of Computer Science Research Report 196, March 1981.
[6] Patil, R. S., Szolovits, P., and Schwartz, W. B., "Causal Understanding of Patient Illness in Medical Diagnosis," Proceedings of IJCAI-7, August 1981, pp. 893-899.
[7] Pauker, S. G., Gorry, G. A., Kassirer, J. P., and Schwartz, W. B., "Toward the Simulation of Clinical Cognition: Taking a Present Illness by Computer," The American Journal of Medicine 60 (1976), 981-995.
[8] Pople, H. E., Jr., "The Formation of Composite Hypotheses in Diagnostic Problem Solving: an Exercise in Synthetic Reasoning," Proceedings of IJCAI-5, August 1977, pp. 1030-1037.
[9] Rieger, C. and Grinberg, M., "The Declarative Representation and Procedural Simulation of Causality in Physical Mechanisms," Proceedings of IJCAI-5, August 1977, pp. 250-256.
[10] Simmons, R. G., "Spatial and Temporal Reasoning in Geologic Map Interpretation," Proceedings of AAAI-82, August 1982, pp. 152-154.
[11] Stefik, M., "Planning with Constraints (MOLGEN: Part 1)," Artificial Intelligence 16 (1981), 111-140.
1983
42
236
THE DENOTATIONAL SEMANTICS OF HORN CLAUSES AS A PRODUCTION SYSTEM*

J-L. Lassez and M. Maher
Dept. of Computer Science, University of Melbourne
Parkville, Victoria, 3052, Australia

ABSTRACT

We show how one of Nilsson's tenets on rule-based production systems, when applied to Horn clause programs, leads to a denotational semantics. This formalism, in turn, provides a striking illustration of a second Nilsson tenet.

I PRELIMINARIES

The three properties of a denotational semantics (McGettrick, 1980; Tennent, 1981) that we consider here are the following:

1. It is a functional semantics; that is, the meaning of a segment of program S is a function, denoted [[S]], over a set of states.
2. The definitions of these semantic functions are structured in such a way that the meaning of any composite phrase is expressed in terms of the meanings of its immediate constituents. For example, in many conventional programming languages [[S1;S2]] = [[S2]] ∘ [[S1]].
3. The function assigned to a recursive definition is defined as the least fixedpoint of a suitable operator.

We will consider here two of Nilsson's tenets from his "Principles of Artificial Intelligence" (Nilsson, 1982). The first tenet is fundamental for our purpose, allowing us to define a Horn clause program as consisting only of a set of rules, the facts becoming the input:

"Wffs representing assertional knowledge about the problem are separated into two categories: rules and facts. The rules consist of those assertions given in implicational form. Typically they express general knowledge about a particular subject area and are used as production rules. The facts are the assertions that are not expressed as implications. Typically they represent specific knowledge relevant to a particular case."

The second tenet links commutativity and decomposition:

"Under certain conditions the order in which a set of applicable rules is applied to a database is unimportant. When these conditions are satisfied, a production system improves its efficiency by avoiding needless expansion of redundant solution paths."

Such conditions may be commutativity or decomposability. Furthermore, Nilsson shows that in some cases there are relationships between commutative production systems and decomposable ones. The concept of decomposition that will be used here is the splitting of the set of rules (instead of Nilsson's splitting of the initial database), in such a way that the two resulting programs can be run separately.

II DENOTATIONAL SEMANTICS OF HORN CLAUSES

Horn clause programs, which form the theoretical basis for PROLOG programs (Kowalski, 1979), can be defined as a finite set of definite clauses. A definite clause is of the form

A ← B1, ..., Bn

where n ≥ 0 and A, B1, ..., Bn are atomic formulae.

Example:
path(A,B) ←
path(B,C) ←
path(D,E) ←
path(x,y) ← path(y,x)
path(x,z) ← path(x,y), path(y,z)

The usual semantics (van Emden and Kowalski, 1976) of this program is given as a set: the set of all paths in the graph which is defined by the first three clauses. However, it is clear that the nature of these three clauses differs from that of the remaining two: the latter express general properties (symmetry and transitivity), whereas the former specify one particular graph.

* This research supported by the Australian Computer Research Board.
Hence it is natural to separate the two types of clauses: the rules (those clauses in implicational form) and the others, which represent the initial database of knowledge. In the remainder of this paper we consider a program to be the conjunction of rules in a set R, which takes as input an initial database.

To a set R of rules we associate a function [[R]] which maps a set of facts into the set of facts which can be immediately deduced from the first set by a single application of a rule in R. Thus A ∈ [[R]](I) iff there exists a clause B0 ← B1, ..., Bn in R and a substitution θ such that A = B0θ and {B1θ, ..., Bnθ} ⊆ I.

Informally, the semantics of the program P is the function [[P]] which maps an initial database I into the set of all facts which can be deduced from this database by repeated application of the rules in R. So we have

[[P]](I) = ⋃_{k=0}^{∞} ([[R]] + Id)^k (I)    (*)

where Id is the identity function and (f+g)(X) = f(X) ∪ g(X). For simplicity of expression we will write this as

[[P]](I) = ([[R]] + Id)^ω (I).

Equivalently, [[P]] can be defined as the least fixedpoint of the functional τ by the following proposition (Lassez and Maher, 1983).

Proposition: [[P]] is the least fixedpoint of the operator τ, where τ(f) = [[R]] ∘ f + Id.

It is clear that properties 1 and 3 of denotational semantics are satisfied by this definition. The second property, which gives the semantics of the whole program in terms of the semantics of its components (in this case rules), is given by the following theorem. This theorem, which is a variant of a theorem of (Tarski, 1955), is proved in (Lassez and Maher, 1983). Let R1, ..., Rn be the individual rules of R, with the corresponding programs P1, ..., Pn.

Theorem 1:
[[P]] = [[R1 ∧ ... ∧ Rn]]
      = ([[P1]] ∘ ... ∘ [[Pn]])^ω
      = (([[R1]]+Id)^ω ∘ ... ∘ ([[Rn]]+Id)^ω)^ω

It can be shown (Lassez and Maher, 1983) that these definitions and results are compatible with the semantics of (van Emden and Kowalski, 1976). If the Ri's represent sets of rules, rather than individual rules, then this theorem is still valid, provided every rule in R is contained in some Ri.

In (*) and in (van Emden and Kowalski, 1976) the rules are applied in parallel to the database. This parallelism is not required by the informal semantics. Theorem 1 shows that we can generate the same set of facts by applying the rules in a completely different way. In fact, Theorem 1 can be generalized to show that the order of application of the rules is irrelevant, provided a condition of fairness is met, which corresponds more closely to the informal semantics (Lassez and Maher, 1983).

III COMMUTATIVITY AND DECOMPOSITION

The following theorem establishes that if the order of application of the two (not necessarily disjoint) sets of rules R1 and R2 does not matter (they can even be applied in parallel), then the program P can be divided into two subprograms P1 and P2 such that [[P]] = [[P1]] + [[P2]]. In this case SLD resolution (Apt and van Emden, 1982) can be used in parallel for P1 and P2, the two search spaces involved being in general far smaller than the whole search space for P, the construction of which necessitates combinations of rules from both programs.

Theorem 2: If ([[R1]]+Id) ∘ ([[R2]]+Id) = ([[R2]]+Id) ∘ ([[R1]]+Id) = [[R]] + Id, then [[P]] = [[P1]] + [[P2]]. (An executable sketch of [[R]] and its closure follows.)
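The operator [[R]] and its closure are directly executable over a finite Herbrand base. The following sketch uses assumed predicate names, encodes rules as rule(Head, BodyList) terms, and follows standard Prolog conventions (so constants are written a, b, c); it illustrates the semantics and is not the authors' system.

% step(+Rules, +Facts, -Derived): one application of [[R]].
step(Rules, I, J) :-
    findall(H,
            ( member(rule(H, B), Rules),
              satisfied(B, I) ),
            J0),
    sort(J0, J).

satisfied([], _).
satisfied([A|As], I) :- member(A, I), satisfied(As, I).

% closure(+Rules, +Facts0, -Facts): iterate [[R]]+Id to its fixedpoint,
% i.e. compute [[P]](I) = ([[R]]+Id)^omega(I) over a finite Herbrand base.
closure(Rules, I0, I) :-
    sort(I0, I1),
    step(Rules, I1, J),
    append(I1, J, K0), sort(K0, K),
    ( K == I1 -> I = I1 ; closure(Rules, K, I) ).

% The path program of Section II, with the three ground clauses as input:
% ?- closure([ rule(path(X,Y), [path(Y,X)]),
%              rule(path(X,Z), [path(X,Y), path(Y,Z)]) ],
%            [path(a,b), path(b,c), path(d,e)], S).
% S includes, for example, path(c,a) and path(e,d).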
If we consider closures of rules (i.e., [[P1]] and [[P2]] instead of [[R1]] and [[R2]]), then we have a condition which is both necessary and sufficient for P to be decomposable.

Theorem 3: [[P]] = [[P1]] + [[P2]] iff [[P1]] ∘ [[P2]] = [[P2]] ∘ [[P1]] = [[P1]] + [[P2]].

These theorems are related to classical results of Ore on closure operators. The proofs can be found in (Lassez and Maher, 1983).

Examples: Consider R = { N(x) ← N(S(x)), N(S(x)) ← N(x) }, where the Herbrand base HB is { N(S^n(0)) : n = 0, 1, ... }. We call S the successor function. One verifies easily that ([[R1]]+Id) ∘ ([[R2]]+Id) = ([[R2]]+Id) ∘ ([[R1]]+Id) = [[R]]+Id. Therefore one can break the connection between the two rules and perform SLD resolution on each of them separately. That is, for a given A ∈ HB we look simultaneously in parallel for successors only and for predecessors only. Without this split, SLD resolution would search alternately for predecessors and successors, creating a search space "exponentially" larger than the two preceding ones.

The hypotheses of the theorems do not require that R1 and R2 be disjoint sets of rules, as we show in the next example. A database on military personnel is processed by the following program P. The data is of the type: C(X,Y) (that is, X is in the same company as Y), Att(X,B052) (X sleeps in barrack B052), Att(X,C.SMITH) (X's commanding officer is Captain Smith), ... Each attribute is characteristic of a company and associated to all its members.

C(x,y) ← C(y,x)
C(x,z) ← C(x,y), C(y,z)
C(x,y) ← Att(x,c), Att(y,c)
Att(x,c) ← C(x,y), Att(y,c)

This can be divided into two separate programs which, this time, have overlapping sets of rules. P1 is

C(x,y) ← C(y,x)
C(x,z) ← C(x,y), C(y,z)
Att(x,c) ← C(x,y), Att(y,c)

and P2 is

C(x,y) ← C(y,x)
C(x,y) ← Att(x,c), Att(y,c)
Att(x,c) ← C(x,y), Att(y,c)

It can be verified that [[P]] = [[P1]] + [[P2]]. We now have two simpler programs which can be executed in parallel. That portion of the search space caused by the unnecessary interactions between the second and third rules of P (via the remainder of the program) has been discarded.

REFERENCES

[1] Apt, K. R. and M. H. van Emden, "Contributions to the theory of logic programming," J.ACM 29:3 (1982), 841-862.
[2] van Emden, M. H. and R. A. Kowalski, "The semantics of predicate logic as a programming language," J.ACM 23:4 (1976), 733-742.
[3] Kowalski, R. A., Logic for Problem Solving, North-Holland, 1979.
[4] Lassez, J-L. and M. Maher, "Closures and fairness in the semantics of programming logic," Theor. Comput. Sci. (to appear). Also Technical Report 83/3, Dept. of Computer Science, University of Melbourne, 1983.
[5] McGettrick, A. D., The Definition of Programming Languages, Cambridge University Press, 1980.
[6] Nilsson, N. J., Principles of Artificial Intelligence, Springer Verlag, 1982.
[7] Tarski, A., "A lattice-theoretical fixpoint theorem and its applications," Pacific J. Math 5 (1955), 285-309.
[8] Tennent, R. D., Principles of Programming Languages, Prentice-Hall, 1981.
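As a postscript to these examples: using the closure/3 sketch given after Theorem 2, the decomposability condition of Theorem 3 can at least be tested on a particular input database. (The theorem is a statement about the functions, so a single input gives only a necessary check.) The predicate name below is hypothetical.

% decomposes_on(+R1, +R2, +I): on input I, composing the two closures in
% either order adds nothing beyond their union, i.e.
% [[P1]]o[[P2]](I) = [[P2]]o[[P1]](I) = ([[P1]]+[[P2]])(I).
decomposes_on(R1, R2, I) :-
    closure(R1, I, P1),
    closure(R2, I, P2),
    append(P1, P2, U0), sort(U0, Union),
    closure(R2, P1, C12),
    closure(R1, P2, C21),
    C12 == Union,
    C21 == Union.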
1983
43
237
AN AUTOl’ARTIC ALGORlTklM DES IGNER: AN Ir”dlTIAL I~fL~~~?E~~3TAT10N1 Elaine Kant and Allien IdeweiI Department of Computer Science Carneyle~Mcllon University Pittsburgh, Pennsylvania 15213 ABSTRACT This paper outlines a specification for an algorithm-design system (based on previous work involving protocol analysis) and describes an implementation of the specification that is a combination frame and production system. In the implementation, design occurs in two problem spaces -- one about algorithms and one about the task-domain. The partially worked out algorithms are represented as configurations of data- flow components. A small number of general-purpose operators construct and modify the representations. These operators are adapted to different situations by instantiation and means-ends ana,lysis rules. The data-flow space also includes symbolic and test-case execution rules that drive the component-refinement orocess by exposing both problems and opportunities. A domain space about geometric images supports test,case execution, domain-specific problem solving, recognition and discovery. E. APPRGACHES TO DESIGN We are interested in systems that automatically design algorithms and create programs for them. The process of algoriihm design requires a significant degree of both knowledge and intelligence, and may even require additional structure to make creative discoveries. In this paper. we briefly present our specifications for a design system and describe an initial imp!ementation. The main goal of the research is to produce a successful automatic design system, but a secondary goal is to show that careful examination of human behavior can suggest novel and worthwhile system organizations. Related research on program synthesis and transformation, formal derivation, and automated discovery suggests some possible approaches to the automation problem. However, none provides a satisfactory paradigm for the whole task of automatic algorithm design. Program synthesis systems [l , 5, 131 often successively reRne programs by transformations. Most of these systems lack robustness because they require numerous transformation rules to specify all the details about programming and because they have no problem-solving abilities other than simple pattern matching on the ru!es. This may indicate the desirability of having a few general operators with more sophisticated techniques for adapting rules to situations. ‘This research is supported by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory Under Coiitr3ct F3361581-K-1539 The views and conclusions con?ained I:) this document aie those of the authors and should not be interpreted as lepresentmg the offlclal policies, enher exptersed or implied, of the Defense AJvanced Research Projects Agency or the U.S. Government. Firrmal derivation systems typically do have a smail number of general operators. However, they requtre complete, formal speclficatior?s which may not be availzbl? initIalI\ [14]. arld they often apply ra::tier rigid methods to choose among applicable transformation rules. People still must specify the interesting lemmas or auxiliary definitions and decide among alternative axiom sets. Orle system [2] does address the choice problem by guessing recursive solutions tr> logical equations and verifying the guesses with a theorem prover and a small model. 
Discovery systems are quite rare, and those that exist are either open-ended exploration systems rather than problem-solving systems that focus on a specific task [3, 10], or work in very narrow domains such as algebraic models [9]. We propose an approach that includes relatively few basic operators, specific knowledge about instantiation rather than many transformation rules, and several problem-solving methods that work with domain knowledge. We also attempt a structure that permits discoveries.

II. A FUNCTIONAL SPECIFICATION FOR AN ALGORITHM-DESIGN SYSTEM

The specifications for an algorithm-design system described here are based closely on the results of protocol analysis [6, 7].(2) We take algorithms about computational geometry (for example, producing the convex hull of a set of points in the plane) as our first task domain because it forces us to consider the interplay between external geometric models, mental imagery, and computationally efficient serial algorithms.

A. Algorithm-design methods to be supported

Design begins with the hypothesis of a kernel schema or solution plan and proceeds by successive refinement. Two closely related methods of program execution guide the gradual refinement of the initial schema. Symbolic execution uncovers information by running a partial algorithm description with symbols as data. During execution, the algorithm and symbols are elaborated (new assertions and new components are added). If symbolic execution is not sufficient to permit progress, a specific example is constructed that permits detailed execution (test-case execution). In addition to driving the successive refinement, the two execution methods generate consequences and expose problems and complications.(3) The test-case execution method allows the problem solver to generate examples that contain the right level of detail to make progress (the relevant knowledge must be evoked by concrete retrieval cues) while avoiding complications (such as boundary conditions) that are not important until the basic outline of the algorithm has been developed. However, test-case execution is more time consuming than symbolic execution because loops are executed for individual items, whereas symbolic execution is linear in the size of the algorithm structure. Thus, if the system understands a step well enough, it will execute it symbolically. Simple inferences may also be made during the execution process.

Design includes problem solving and reasoning about the task domain. Examples are generating test cases, attempting to find counterexamples, and "proving" conjectures by demonstration (first on "typical" and later on boundary-condition examples). An extreme, but important, example of reasoning is discovery: the sudden viewing of a situation from a new perspective that presents an opportunity to solve an outstanding problem or make an improvement. What is perceived is not a solution to the main goal at the time the insight occurs; however, there is usually some preparation for the discovery, some previously considered but unsolved problem.

(2) We analyzed protocols of human problem-solving and design strategies in algorithm-discovery tasks because a system organization that parallels human reasoning lets us draw on human experts for techniques, promises to be flexible and robust, and has potential for learning.
The new perspective often arises out of a novel and accidental alignment of objects or facts.(4) A general recognition-based control structure for all the methods seems to be the appropriate device to support the possibility of such juxtapositions.

B. A problem space for representing algorithms

Much of the design process occurs within a single problem space of schematic structures that represent algorithms. A small number of basic types of components are instantiated and configured in multiple ways to represent algorithms. Expert design concepts (for example, schemas for divide and conquer and for dynamic programming) are built up from and coexist with the simpler vocabulary. Components may be augmented with assertions that give additional information, such as an ordering for a generator or a predicate on a test. An assertion is not limited to concepts in the algorithm space. For example, it can describe a relationship in the task space that must be satisfied. The assertions drive the decisions about how to refine a component into a new configuration of components.

(3) Our analysis of human protocols showed that half of the design time was spent on symbolic and test-case execution, and design time was proportional to the number of components in the algorithm description. Even when subjects had difficulties in designing the algorithms, their unsuccessful behavior had a similar character of hypothesizing new structures and repeatedly attempting to execute the partial algorithm on sample data. Symbolic and test-case execution also had major roles in verifying, describing, and analyzing algorithms.

(4) Our human subjects sometimes made discoveries while feeling lost and staring blankly at a figure during test-case execution.
and polygons; specific geometric operators, such as create a point or construct a line segment; and general operators (perceive, find, partition) that are typically specialized in geometric ways (partition a point set). The operators help create test cases (such as sets of points) and check assertions (such as whether a point lies above the X-axis). Working within a geometric task domain imposes important (but largely unknown) constraints, as indicated by the facility with which humans prccess visual scenes and reason spatially. Our design for a geometric problem space is modeled upon the imagery simulation of Kosslyn and Shwartz [8]. It consists of a short-term image buffer (a planar array), with the image centered around a focal point of attention. External (perceived) and internal (imagined) images are coalesced into a single image. Several operators (rotate, pan, zoom and scan) change the focus by regenerating the buffer around the focus point, rather than shifting the center in a fixed space. A variable amollnt of detail is present. depending on point of view created by the movement operators. The image is viewed by a recognizer centered on the focus point, with the image memory permitting recognition of combined scenes. However, there are no operators that construct the image directly (write new patches of image into the buffer). Rather, all parts of the image are supported by the underlying semantic representation, and modifications are made to this underlying structure which is then pictured in the image. Discovery seems to occur through a recognition of some interesting configuration in the task space. Certain configurations of data items lead to recognition or automatic inferences (for example, completing polygons that are missing one edge, seeing polygons in regular patterns of points, being reminded of related algorithms). The image structure just described seems to be appropriate to facilitate this recognition. 178 III. AN IMPLEMENTATION OF AN ALGQRITHM-UESIGN SYSTEM We are implementing an algorithrn-design system that is both a protocol analysis/simulation system ar,d a fully automated design system. The basic structure is a combination frame and production system. It is implemented in L.isp and can run with or without the CPS5 production system [4] as the automated control structure. In the simulation mode, the user (rather than the production rules) makes the decisions about which design commands to invoke and how to instantiate them. Design commands include component, assertion. and item construction, symbolic and test- case execution, and goal creation and modification. The implementation also includes objects such as protocol phrases, episode groupings, and comments (all represented by frames), and it has operators that create and edit the objects and insert them into a history of design commands. The total history describes an actual protocol and its interpretation. The user interface inciudes facilities for graphic display of component configurations, for saving, restoring, and undoing history sequences, and for command completion (with prompts for arguments and access to help files). The automatic mode is an augmentation of the simulation mode. Normally, the production rules make instantiation and search- control decisions and a history of design steps only (no protocols) is maintained. However, the modes can be mixed and the user can take over or relinquish control in midstream. A. 
Representing algorithms with data-flow configurations Partially specified algorithms are represented as states in a data-f!ow problem space, DFS Each state is a data-flow configuration.” The algorithm steps are represented by process components. There are a small number of generic process components (a me.mory, and the flow-control components generate, tesi, and select) and a general apply component that can be specialized to domain operations such as make-line- segment. The inputs and outputs of the process components are represented by ports connected by links. Process components can be further specified by assertions. The components and assertions together modify and control the flow of items that represent data objects such as points and line segments. Items can fill several roles -- generic descriptions, symbols (for symbolic execution), and specific domain-space data. Assertions can be attached to any object, including an item, a process component, a whoI* conliguration of components, and a problem space. Figure 1 shows an example of a simple DFS configuration that, given 2 set of points, finds the subset that lies above (on the left side of) the X-axis. l-he input. {ij. is a memory containing a set of points. (A B C D), which are enumerated by Generate,. Those points that satisfy the predicate associated with Test, are added to the output set memory (0). In the figure, points A, B and C have been enumerated. A was discarded, B has been added to the output, and C has not yet been tested. 5 Data-flow representations have been studied in computer architecture research and have been used to describe artificial intelllgcnce programs [ll, 121 and to express algorithm transformations [15]. ii> contains point set {A B C D} 101 contains point set {B} {i}--------->Generaie, --_CCl--_>~est~C”e__--_~~~~~o~ Figure 1: A subset test. Components, links, ports, assertions, and items are all implemented by frame data structures chained in a tangled hierarchy (a directed acyclic graph). A ~cfinement of a component is represented by a slot containing the names of t!-le components in the configura?ion that implements it. For example, the TOP- LEVEL component frame has a slot called Components-list whose value is (MEMORY-l GENERATOR-l TEST-1 MEMORY-Z). TVJO frames that define the Test, component of Figure 1 ace shown in Figure i. Backpointers between related frames (such as the Component-of slot in TEST-l) are automatically maintained. In addition to the value and default facets shown in the figure, it-needed servant Functions and If-added/if-removed demons are provided via facets. Furthermore, all objects have a life time (creation and deletion dates) so that ditferent DFS states can be represented concisely, and the history of events is tree-structured to enable alternate design paths to be tried. Different inheritance mechanisms allow objtlcts to be viewed from different historical perspectives -- such as the current configuratron of “visible” objects, o: a “remembered” version of past states. The current knowledge base of thz system (before an algorithm is developed) includes the basic components described above, three types of ports (input, output, and signal-interrupt) and several named port instances for each component type. This algorithm-component knowledge base is fairly stable, although it is missing specialized expert schemata, such as divide and conquer. We do not expect it to grow to more than several times the current size. 
TEST t-filler t-filler value __----____ HISTORY-O COMPONENT TEST- IN TRUE-EXIT TEST-l .-_-- ___-- - I value _____---____ HISTORY-3 TEST TOP-LEVEL TEST-IN-l TRUE-EXIT-l ASSERTION-Z ____________ Figure 2: Frames for the test components. B. implementing component operators DFS operators that construct and refine the structures in an algorithm description accomplish simple but general taslts: adding, deleting, or changing the slot values of components and links. Other operators move items across links and execute components. A reasonably complete set of approximately fifty Lisp functions implement these operators. They are quite straightforward, and not many rnore appear to be needed. 179 The operators to construct the subset-test configuration of Figure 1 are given in Figure 3. The Ned-component operator adds a new instance of the specified component to the current configuration, but does not link it up in any way. The add-comaonent operator adds the new component specified in its first argument and links it to an existing output port, described by the second argument, using the input port on the new component described by the third argument. If an argument is missing (or is nil), defaults are used. In this case, the new components are linked to the most recently added component, using Default-filler ports for each component type. The process of instantiating these operators is discussed in section D. (new-component ‘memory) (add-componefv ‘generator) (add-component ‘test) (add-component ‘memory nil ‘add-elem) Figure 3: Constructing th.e subset test. C. Representing and testing assertions Assertions, which are also represented as frame objects (see Figure 4), can be attached to any type of object. Assertions can take values from the component inputs as parameters (for example. the TEST-IN port of the test above is a parameter in the assertion show below). There are currently about twenty-five assertions in several categories (for example: Booleans connectives, test-predicates, assertions about generator orderings, and references to geometry space operations), but assertions can denote anything (for example, that a configuration has not yet been refined to handle initialization). This set will grow as new algorithms are developed. DFS operators have been written to add, remove, search for, test the truth of, and find the consequences of assertions. For example, to make the subset algorithm in Figure 1 be a test for the points above the x-axis, we apply the following operator which results in a new assertion frame. (add-assertion ‘test-l) ‘(test-predicate (on-side test-in x-axis left)) slot __ -_ Crca Body AK0 Abou -___ Figure 4: An assertion frame. Consider the evaluation of this assertion for point C during the test-case execution of Figure 1. The system looks for the Truth slot of the assertion. Nothing is recorded there, but the assertion IS an insiance of a TEST-PREDICATE, which is in turn an instance of a BOOLEAN-ASSERTION, and there is an if-needed function that computes the truth value. During this computation, the value of the ON-SIDE subassertion is needed. If it has not been previously stored, it is calculated by calling a geometry space funciion that uses the coordinates of the point C. The result is then stored for future reference. D. Controlling search with ;4 production system Search is controlled prirnarily by produ:;tion I ul~s, v<ith mctjor goals explicitly r2presenttid in fralrlc? 
s!ruc?L:rcs and wilh default- fillers for slots to represent typical pert or asse:tlon types (z.g., to indicate that tests have test-predicate assertions). The goal structure is simple .- goais have types (e.g.. to refine a component or to execute a component) and can be in a number of states _- active. suspended (waiting for subgohls), sidelined (for lack of immedlLtte relevance or a way to prccecd;, sa:c:eedcd, failed, or cancelled. A hierarchy of subgoal reldlionships is maintained, but all goals are in working memory so the prirductions can notice whenever any goal is achieved, even if il is not active in the current state. Operators are instantiated by production rules. For frequently used generic components, specific instantiation rules doscribe which process coinponents to add to the algoritl-rm descrip!ion under different circumstances, defaults for how to link co,mponents together, and so on. Default-fillers are used by inst.?nti&tion rules in absence of more specific r&s. The current implementation has about 25 instantiation rules, and we anticipate that several orders of magnitude more will be needed. In the absence of specific knowledge, means-ends analysis rules find differences and select operators ;o reduce the differences. Production rules a!so describe how to execute components. The cur:ent implementation has approximately 20 rules that control means-ends analysis, voting on alternatives, and subgoaliny, and another IO that provide the control for symboiic and test-case execution and assertion checking. Specific Lisp functions associated with objects determine how to execute individual components and test assertions. As new specializations of components and assertions are added, corresponding functions and instantiation rules may need to be added, but the conZrol should not change. The design of the simple algorithm in Figure 1 is sketched in Figure 5. The first two production-rule applications decide to add a new component because the difference between the existing configuration and Ihe desired one is that nothing is present. The operator is instantiated to add a memory, because the givens of Ihe problem include a set of points. A new difference is selected (applicat:on 3), which is that the algorithm contains no active process component. The add-compone/G operator can solve this (4), and it is instantiated to a generator n!ter a vo!l!lg process (applications 9-l 7). Both generators and selectors can follow memories, but a generator is more likely because it creates a set rather than a singie item as output, and the desired output of the algorithm is a subset of the inpu!. The components are linked according to the default rules. The system tries to refine the new component, but other than deciding to use the default ordering assertion, no refinement is necessary at Ihis level (applications 41-59). Since an active process component is present, symbolic execution is applied (applications 62-89). Thus proceeds successfully until there is nothing to do with the output of the generator. A test is added (90-102), and attempts are made to refine it (103-146). The production rules find a test predicate by looking for assertions about the tyi>cs of items that are inputs to the test. Symbolic execution then continues (147-172) until the output of some component (the test) makcnes the description of elements in the algorithm output (both are poil;ts, both have matching assertions about position relative to the X-axis), and the results are collected in a final memory. 
The full derivation of the subset test includes far more production-rule applications than are shown. The summary trace here omits most of the applications related to housekeeping tasks, voting procedures, false paths, and means-ends analysis steps. Operator applications are shown in square brackets, and comments are in braces.

{the top-level goal is to refine algorithm Subset}
1. notice-difference::empty-configuration
2. reduce-difference::empty-configuration
   [new-component memory] {a memory component is added}
3. notice-difference::no-active-process-component
4. reduce-difference::no-active-process-component
   {it has been decided to apply the operator add-component}
5. instantiate::add-component:from-port
   {the from-port is instantiated according to default rules}
9. instantiate::add-component:generator-since-need-set
10. instantiate::add-component:generator-follows-memory
11. instantiate::add-component:select-follows-memory
13. vote {decide on component to instantiate add-component}
14. instantiate::add-component:to-port
18. instantiation-complete {a generator component is added}
   [add-component generator mem-out-1 gen-in]
41. reduce-difference::no-assertions {the current goal is to refine the generator}
42. instantiate::add-assertion:defaults
   {assertion about default generator ordering is added}
59. goal::sideline-goal-with-no-reduce-difference
   {cannot further refine generator, so sideline goal}
60. goal::resume-if-suspended-with-sidelined-subgoals
   {resume goal of refining whole algorithm}
62. reduce-difference::symbolic-execution
74. symbolic-execution::instantiate-inputs-succeeds
78. symbolic-execution::set-up-for-refinements {find most refined components}
87. symbolic-execution::executable {symbolically execute memory component}
88. symbolic-execution::executable {symbolically execute generator component}
90. symbolic-execution::unconnected-link
   {notice generator output not connected to a component}
   {decide to apply add-component operator}
95. instantiate::add-component:compare-follows-component
96. instantiate::remove-if-not-enough-inputs
97. instantiate::add-component:test-follows-active-component
98. instantiate::add-component:apply-follows-active-component
99. instantiate::remove-if-not-enough-inputs
   {test is the only component with the proper inputs}
102. instantiate::add-component:to-port
   [add-component test gen-out-1 test-in] {test added}
127. reduce-difference::no-assertions {new goal is to refine test}
128. instantiate::add-assertion:test-relevants {a relevant test predicate is found}
   [add-assertion (on-side test-in x-axis left) test]
146. goal::sideline-goal-with-no-reduce-difference {no way to refine test further}
147. goal::resume-if-suspended-with-sidelined-subgoals
   {resume goal of refining whole algorithm}
166. symbolic-execution::set-up-for-refinements {haven't executed test yet}
170. symbolic-execution::executable {so execute test}
172. symbolic-execution::unconnected-link-alg-complete
   {notice that output on link from test satisfies assertion on algorithm
   output, so add final memory component}
   [add-component memory true-exit add-elem] {algorithm is now complete}

Figure 5: Sketch of the system's design of the algorithm in Figure 1.

E. Implementing the domain space

We have implemented a simple version of the geometry space, with frame objects representing geometric objects (including points, line segments, and polygons).
Lisp functions implement operators (to construct and modify the objects) and determine the relationships between objects (such as above, between, inside, and convex) that are used in assertions. General operators, such as perceive or find an item with a given property, are not yet implemented. Currently the system uses analytic geometry computations, and the imagery model is not yet implemented.

IV. DISCUSSION

Our initial implementation is currently operational. In addition to constructing small examples such as the subset test described here, we are using the convex hull algorithm (whose discovery has been worked out by hand in our protocol-analysis research [6, 7]) as a large test case. This example contains instances of a variety of critical issues for an algorithm-discovery system. There are both generate-and-test and divide-and-conquer algorithms. The former was found initially and was used in part in obtaining the latter, so interesting issues of knowledge transfer arise. Some genuine discoveries occur, providing good tests of whether our system can recognize and capitalize on adventitious situations. The coupling between the algorithm space and the geometry space is continuous and intimate, and this will test whether our proposed imagery scheme will have the desired properties. On the other hand, we have still to address the task of passing from the DFS representation of an algorithm to a program in some standard language.

ACKNOWLEDGEMENTS

We thank David Steier for helpful comments on this paper. Steier, Brigham Bell, and Edward Pervin are helping implement the system.

REFERENCES

[1] Balzer, R. "Transformational implementation: an example." IEEE Transactions on Software Engineering SE-7, 1 (January 1981).
[2] Bibel, W. and Horning, K. M. LOPS - A System Based on a Strategical Approach to Program Synthesis. Proceedings of the International Workshop on Program Construction, France, September 1980.
[3] Davis, R. and Lenat, D. B. Knowledge-Based Systems in Artificial Intelligence. McGraw-Hill, 1981.
[4] Forgy, C. L. OPS5 User's Manual. Tech. Rept. CMU-CS-81-135, Carnegie-Mellon University, July 1981.
[5] Kant, E. and Barstow, D. R. "The Refinement Paradigm: The Interaction of Coding and Efficiency Knowledge in Program Synthesis." IEEE Transactions on Software Engineering SE-7, 5 (September 1981), 458-471.
[6] Kant, E. and Newell, A. Naive algorithm design techniques: a case study. Proceedings of the European Conference on Artificial Intelligence, Orsay, France, July 1982.
[7] Kant, E. and Newell, A. Problem Solving for the Design of Algorithms. Tech. Rept. CMU-CS-82-145, Carnegie-Mellon University, November 1982.
[8] Kosslyn, S. M. Image and Mind. Harvard University Press, Cambridge, Massachusetts, 1980.
[9] Langley, P. A., Bradshaw, G. L., and Simon, H. A. Bacon.5: the Discovery of Conservation Laws. Proceedings of IJCAI-81, 1981, pp. 121-126.
[10] Lenat, D. B. "The Nature of Heuristics." Artificial Intelligence 19, 2 (1982), 189-249.
[11] Moore, J. A. The Design and Evaluation of a Knowledge Net for MERLIN. Ph.D. Th., Carnegie-Mellon University, 1971.
[12] Newell, A. Heuristic programming: ill-structured problems. In Progress in Operations Research, Aronofsky, J., Ed., Wiley, 1969, pp. 360-414.
[13] Rich, C. Inspection Methods in Programming. Ph.D. Th., Massachusetts Institute of Technology, June 1980.
[14] Swartout, W., and Balzer, R. "On the Inevitable Intertwining of Specification and Implementation." CACM 25, 7 (July 1982), 438-440.
[15] Tappel, S.
Some Algorithm Design Methods. Proceedings of the First Annual National Conference on Artificial Intelligence, August 18-21, 1980, pp. 64-67.
1983
44
238
Proving the Correctness of Digital Hardware Designs

Harry G. Barrow
Fairchild Laboratory for Artificial Intelligence Research
4961 Miranda Ave., Palo Alto, CA 94304

Abstract

VERIFY is a PROLOG program that attempts to prove the correctness of a digital design. It does so by showing that the behavior inferred from the interconnection of its parts and their behaviors is equivalent to the specified behavior. It has successfully verified large designs involving many thousands of transistors.

§1 Introduction

When a hardware or software system is designed, there is always the problem of deciding whether the design meets its specification. Research in proving correctness of programs has continued for many years, with some notable recent successes [Schwartz 82]. The correctness of hardware design, in contrast, has received comparatively little attention until recently, perhaps indicating that the boundaries between hardware and software are now becoming increasingly blurred.

In this paper is described an approach to proving the correctness of digital hardware designs. It is based rather squarely upon Gordon's methods, which concentrate on the denotational semantics of designs and their behavior [Gordon 81]. A key principle is that, given the behaviors of components of a system and their interconnections, it is possible to derive a description of the behavior of the system [Foster 81]. The derived behavior can then be compared with specifications. In our current work, behavior of systems or modules is specified in detail, rather than via general assertions (as in [Floyd 67]). The most closely related work, apart from Gordon's, is that of Shostak [Shostak 83]. However, it would appear that the examples described in this paper represent the most complex so far verified automatically.

VERIFY, a PROLOG program, has been implemented and embodies an initial, simplified version of Gordon's methodology (specifically, omitting the notion of behaviors as manipulatable objects). VERIFY has successfully proved correctness of a straightforward but very detailed design, involving thousands of transistors.

VERIFY gains most of its power from exploiting the structural hierarchy (part-whole), which effectively breaks the design into manageable pieces with simply representable behavior (at some level of meaning). This decomposition results in proof complexity that is linear in the number of types of part, rather than the total number of parts. A second, related, source of power is the signal abstraction hierarchy. Signals in a digital system are viewed differently according to the context and conceptual level in which they are considered. At the lowest level, signals are considered as voltages and currents; at the next, as logic levels with various strengths; at the higher levels, as bits, integers, addresses, or even computational objects. The semantics of higher structural and signal levels allow greater leaps of inference, and much suppression of unnecessary detail. Finally, representing and manipulating signals and functions symbolically leads to considerable improvements over the established methods of verifying designs by detailed simulation, involving vast numbers of specific signal values (zeros and ones).

§2 Specifying a Module

Modules are modeled as finite state machines. A module has a set of input ports and a set of output ports. Each port has an associated signal type, which specifies the domain for signals passing through it.
A module also has a set of state variables, each with its own signal type. The behavior of a module is specified by two sets of equations: one set gives output signals as functions of inputs and current internal state, and the other gives new internal states as functions of inputs and current state.

As an example, a simple type of module, called 'inc', which has no state variables, and whose output is simply its input plus one, may be declared as follows:

    % Definition of module type Incrementer
    module(inc).
    port(inc, in(AnInc), input, integer).
    port(inc, out(AnInc), output, integer).
    outputEqn(inc, out(AnInc) := 1 + in(AnInc)).

In this definition, 'AnInc' is a PROLOG variable that stands for any instance of an incrementer, and 'in' and 'out' are functions from an instance to the current values of signals at its ports. ':=' is an infix operator used to define the behavior of an output or state variable (on its left) as an expression (on its right).

Similarly, a type of simple multiplexer, whose output is one of two inputs, according to the value of a control input, may be declared as:

    % Definition of module type Multiplexer
    module(mux).
    port(mux, in0(AMux), input, integer).
    port(mux, in1(AMux), input, integer).
    port(mux, switch(AMux), input, boole).
    port(mux, out(AMux), output, integer).
    outputEqn(mux, out(AMux) := if(switch(AMux), in1(AMux), in0(AMux))).

Here, 'AMux' stands for any instance of type mux, and 'if(<condition>,<true.expr>,<false.expr>)' is a conditional expression with the obvious semantics.

Finally, a module type that involves an internal state variable:

    % Definition of module type Register
    module(reg).
    port(reg, in(AReg), input, integer).
    port(reg, out(AReg), output, integer).
    state(reg, contents(AReg), integer).
    outputEqn(reg, out(AReg) := contents(AReg)).
    stateEqn(reg, contents(AReg) := in(AReg)).

[Figure 1. A loadable counter. The figure shows a register, obeying contents:=in and out:=contents, embedded in a counter obeying count:=if(ctrl,in,count+1) and out:=count.]

We can, for example, specify a loadable counter, shown in Figure 1, as follows:

    % Definition of module type Counter
    module(counter).
    port(counter, in(Acounter), input, integer).
    port(counter, ctrl(Acounter), input, boole).
    port(counter, out(Acounter), output, integer).
    part(counter, muxA(Acounter), mux).
    part(counter, regA(Acounter), reg).
    part(counter, incA(Acounter), inc).
    connected(counter, ctrl(Acounter), switch(muxA(Acounter))).
    connected(counter, in(Acounter), in1(muxA(Acounter))).
    connected(counter, out(muxA(Acounter)), in(regA(Acounter))).
    connected(counter, out(regA(Acounter)), in(incA(Acounter))).
    connected(counter, out(incA(Acounter)), in0(muxA(Acounter))).
    connected(counter, out(regA(Acounter)), out(Acounter)).
    % Behavior specification:
    state(counter, count(Acounter), integer).
    stateMap(counter, count(Acounter), contents(regA(Acounter))).
    outputEqn(counter, out(Acounter) := count(Acounter)).
    stateEqn(counter, count(Acounter) :=
        if(ctrl(Acounter), in(Acounter), count(Acounter)+1)).

The state variable 'count(Acounter)', declared in the behavior specification above, is for the purpose of describing the counter as a black box. It actually corresponds to the state variable of the register, 'contents(regA(Acounter))', and the renaming is accomplished by the 'stateMap' declaration.
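As a small usage illustration (ours, not the paper's; the instance name i1 and the priority chosen for the operator declaration are assumptions), declarations in this style can be consulted and queried directly by a Prolog interpreter once ':=' has been declared as an infix operator:

    % Assumed operator declaration for the ':=' notation used above.
    :- op(700, xfx, :=).

    % Hypothetical query: fetch the output equation of an incrementer
    % instance i1. Unification instantiates the schematic variable AnInc.
    %   ?- outputEqn(inc, out(i1) := E).
    %   E = 1+in(i1).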
The description language supports several additional useful constructs: constants, parameters, part and port arrays, and bit-wise connections.

Constants can be declared in a manner similar to state variables, specifying their name and type, and also their value. Connections can then be specified from the constants to input ports of parts. This approach has the advantage that constants are prominently visible in a description, and their values are not embedded in expressions, which makes editing much more reliable.

Parameters may be provided to module declarations, so that a single description may cover several distinct types of module. For example, one parameter might indicate the width of inputs and outputs in bits, while another might be the value of a hard-wired constant.

Arrays of parts, ports, state variables, and constants are easily represented by adding a subscript to the arguments of selector functions. For example, 'in(AModule,I)' refers to the Ith 'in' port of module 'AModule'. Bit-wise connections are specified by use of a function 'bit', as in 'bit(I,x)', which represents the selection of the Ith bit of a signal 'x'.

Parameters, constants, arrays, and bit-wise connections can be best understood by considering an example. In the following, an 'adder' is constructed from a collection of 'fullAdder's, each of which has inputs 'inx', 'iny', and 'cin' (the carry in), and outputs 'sum' and 'carry'.

    % Definition of an (N+1)-bit adder in terms of fullAdders
    module(adder(N)).
    port(adder(N), inx(Adder), input, integer(N)).
    port(adder(N), iny(Adder), input, integer(N)).
    port(adder(N), out(Adder), output, integer(N1)) :- N1 is N+1.
    constant(adder(N), carryin(Adder), 0, boole).
    part(adder(N), fa(Adder,I), fullAdder)    % an array of parts
        :- range(0,I,N).
    connected(adder(N), bit(I,inx(Adder)), inx(fa(Adder,I)))
        :- range(0,I,N).
    connected(adder(N), bit(I,iny(Adder)), iny(fa(Adder,I)))
        :- range(0,I,N).
    connected(adder(N), carryin(Adder), cin(fa(Adder,0))).
    connected(adder(N), carry(fa(Adder,I)), cin(fa(Adder,I1)))
        :- range(1,I1,N), I is I1-1.
    connected(adder(N), sum(fa(Adder,I)), bit(I,out(Adder)))
        :- range(0,I,N).
    connected(adder(N), carry(fa(Adder,N)), bit(N1,out(Adder)))
        :- N1 is N+1.
    % Behavior specification:
    outputEqn(adder(N), out(Adder) := inx(Adder)+iny(Adder)).

In this example, 'N' is a parameter specifying the number of bits in the inputs to the adder (i.e. the most significant bit represents 2^N). The adder is composed of N+1 'fullAdder's, specified in the 'part' declaration, and the construct 'range(0,I,N)' means simply that 'I' may take any value from '0' to 'N'. Individual bits of the inputs are connected to the 'fullAdder's, and their outputs are bit-wise connected to the adder output. The equation describing the behavior of the adder is the specification; it is concerned only with input/output behavior, and does not reflect any of the internal structure.

To facilitate subsequent computations, module descriptions are converted from the "human-convenient", parametrized notation above into a more explicit form. For example, the adder description might be converted into descriptions of 3- and 5-bit adders, if these were required in the course of verification. The conversion is only done once, when needed, and the results are cached.
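The paper uses 'range(0,I,N)' without defining it; the following is a plausible reconstruction (our assumption, not VERIFY's actual code) that makes the declarations above executable:

    % range(Lo, I, Hi): on backtracking, binds I to each integer
    % from Lo to Hi inclusive. Fails if Lo > Hi.
    range(Lo, Lo, Hi) :- Lo =< Hi.
    range(Lo, I, Hi) :- Lo < Hi, Lo1 is Lo + 1, range(Lo1, I, Hi).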
§3 Inferring Behavior from Structure

Given the behavior of its parts, and their interconnections, it is possible to infer the behavior of a composite module. For each of its output ports, we can trace back a connection to an output port of some part. That port has an equation describing its output as a function of the inputs (and state) of the part. The inputs can themselves be traced back to outputs of other parts, or to an input of the module, and so on. The state variables of parts are replaced by the equivalent state variables of the module. Thus we can construct an expression that describes the module output as a function of the module inputs and state variables (similar to the specification).

This back-tracing process runs into trouble if there are loops in the interconnection of parts, but only if the loops are not broken by a state variable, as in the counter described above. For such cases, we would need to introduce new state variables, but this is not done in the current implementation. (See [Shostak 83] for another approach to designs with loops.) Loops do not in practice seem to cause many problems: the Mead-Conway design methodology advocates breaking all loops with clocked dynamic storage elements, thereby conveniently providing the required state variables [Mead and Conway 80].

In constructing the functional expressions for outputs, a certain amount of checking is performed. It is possible to check that no line is driven by two outputs simultaneously (unless they are tri-state outputs), that every line has an output and an input, and that every output and state variable is accounted for. The signal types of connected outputs and inputs are checked for compatibility in the signal abstraction hierarchy, and an appropriate type conversion is made where appropriate, or the user warned where it is not. A special form of 'type conversion' occurs when bit-wise connections are made: integer outputs can be embedded in bit-selection functions, and bit inputs can be collected into integers as terms in a power series.

In principle, the given behavior specification for a module should also be checked for errors. A simple type-check on the arguments of subexpressions could easily be performed, but in practice this has not yet been implemented.

The behavior descriptions (whether given in the specification, or constructed from the structure) are useful for many purposes. Values can be supplied for input signals and state variables, and output values can be determined by the evaluation and simplification mechanism described later. It is not necessary to supply literal values for all variables: values can be left as variables. Thus we can simplify output expressions for such purposes as optimization or specialization. We can also perform symbolic simulation, propagating symbols for variables which have definite, but unknown, values. For example, by symbolic simulation we can show that in the multiplexer primitive defined above, when 'switch' is true, the output is 'in1', regardless of the actual value of 'in1'. This approach has the potential for much faster checking of a design by simulation than instantiating values for all variables, as is currently done. In this work, however, the major purpose of constructing behavior descriptions is so that they may be compared with the specified behavior and the design verified.
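The connection-following part of this back-tracing can be sketched in a few lines of Prolog (our illustration; the predicate trace_back/3 is hypothetical, and the sketch omits the substitution of part equations and the state-variable renaming that VERIFY also performs):

    % trace_back(Type, Port, Origin): follow connected/3 facts backwards
    % from Port until a port with no driver is reached, i.e. a module
    % input or a part output (whose equation would then be substituted).
    trace_back(Type, Port, Port) :-
        \+ connected(Type, _, Port).
    trace_back(Type, Port, Origin) :-
        connected(Type, Source, Port),
        trace_back(Type, Source, Origin).

    % For the counter above, ?- trace_back(counter, out(c1), O) yields
    % O = out(regA(c1)): the module output is driven by the register.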
§4 Verifying the Design of a Module

When presented with a module to be verified, the system recursively verifies each of its parts, constructing a behavior description from its structure and comparing it with the specified behavior. The behaviors are compared by considering the outputs, and state variables, in turn. Each output has two expressions that purport to describe its functional behavior. These are equated and an attempt is made to prove that the equation is an identity (i.e. that it holds for all possible values of inputs and states). The module is correct if this can be done for all outputs and state variables.

In work on verification, Shostak has developed a decision procedure for quantifier-free Presburger arithmetic (no multiplication, except by constants) extended to include uninterpreted functions and predicates [Shostak 79]. Such a decision procedure seems to be applicable in many cases of program verification, and could be a useful component of our design verification system, but has not been implemented.

The current strategy for determining whether an equation is an identity involves a sequence of steps. First, the equation is checked to see whether it is a trivial identity, of the form 'X=X' or 'true', or whether it is a trivial non-identity, such as 'a=b', where 'a' and 'b' are constants.

If the equation is not trivial, the size of the domain space of the equation is then determined by finding the cardinality of the domains of the input and state variables occurring in it. If the space size is sufficiently small (less than about 40 combinations of variable values), the equation is tested by enumeration: each combination of variable values is generated and substituted into the equation, which is then simplified to 'true' or 'false'. This "brute-force" approach is surprisingly useful. Complex designs are usually composed of many simple pieces that are amenable to proof by enumeration, but which would require much additional mathematical knowledge and algebraic manipulation to prove by other means.

When an equation is too complex for enumeration, an attempt is made to prove identity by symbolic manipulation. This strategy incorporates several operations: evaluation, simplification, expansion, canonicalization, and case analysis.

Evaluation involves recursively considering subexpressions, such as '1+2', to see whether they can be replaced by constants, such as '3'. Simplification is an extension of evaluation that can replace a subexpression by a simpler one. For example, 'and(true,x)' is simplified to 'x'. Evaluation and simplification are implemented by sets of transformation rules that are recursively applied to expressions.

Expansion occurs when a function is observed on one side only of the equation. The definition of the function is then substituted for its call by another rewriting rule. Bit-wise connection of signals is a special case: the observation of 'bit(i,x)' on one side of the equation can lead to the expansion of x as a power series of its bits on the other side. Bit-expansion is used fairly frequently in the examples tried so far.

Canonicalization attempts to deal with certain combinatorial problems of representation. In particular, it handles associativity and commutativity of functions (such as '+', 'x', 'and', and 'or') and their units and zeros. Nested expressions involving an associative function are flattened, units of the function are discarded, and any occurrence of a zero of the function causes the expression to be replaced by the zero. The terms in the flattened expression are then lexically ordered, and the nested expression rebuilt. Other normalizations, such as putting logical expressions into a normal form, or pulling conditional expressions to the outside, are accomplished by the rewrite rules during simplification.
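To make the enumeration and rewrite-rule steps concrete, here is a minimal sketch (our own, with hypothetical predicate names; VERIFY's actual rule set is not shown in the paper). rule/2 holds a few transformations, simplify/2 applies them bottom-up to a ground expression, and identity_by_enumeration/2 tests an equation over all boolean assignments to its variables:

    % A few representative transformation rules.
    rule(and(true, X), X).
    rule(and(X, true), X).
    rule(and(false, _), false).
    rule(if(true, T, _), T).
    rule(if(false, _, F), F).
    rule(X + Y, Z) :- number(X), number(Y), Z is X + Y.

    % Simplify subexpressions first, then apply rules at the top
    % until none fires. Assumes a ground (fully instantiated) term.
    simplify(E, E2) :-
        compound(E), !,
        E =.. [F | Args],
        maplist(simplify, Args, Args1),
        E1 =.. [F | Args1],
        ( rule(E1, E3) -> simplify(E3, E2) ; E2 = E1 ).
    simplify(E, E).

    % Proof by enumeration over boolean variables: every assignment
    % must reduce both sides to the same value.
    bool(true).
    bool(false).
    identity_by_enumeration(Lhs = Rhs, Vars) :-
        forall(maplist(bool, Vars),
               (simplify(Lhs, V), simplify(Rhs, V))).

    % e.g. ?- identity_by_enumeration(and(X,true) = and(true,X), [X]).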
Case analysis is performed when the equation involves conditional expressions, which by this time are outermost in the expression. The analysis descends recursively through the conditionals, substituting truth for the condition in the true-expression, and falsity for it in the false-expression. For example, if the condition is 'x=1' then '1' can be substituted for 'x' in the true-expression, and 'false' for 'x=1' in the false-expression. In its present form, VERIFY does not deal with all the ramifications of the conditional in both branches.

Evaluation, simplification, expansion, canonicalization, and case analysis can be applied repeatedly until they no longer have any effect. (In practice some care has been taken in the implementation to apply them in ways that avoid large amounts of wasted effort.) If the resulting expression is trivial, identity will have been proved or disproved.

If automatic symbolic manipulation does not manage to produce an answer, an interactive mode is entered. It was originally intended that the interactive mode would be the primary means of proof, with the user specifying the strategy (collect terms, substitute, simplify, etc.) and the system executing the tactics and doing the necessary bookkeeping. The interactive facility has not yet been implemented, partly because surprisingly much progress has been made using the automatic strategy described above! Interaction is currently limited to responding to the system's final, desperate question "Is this an identity?" with "yes", "no", or "unknown", although development of an interactive prover is intended in the near future.

§5 Results

VERIFY is composed of about 2400 lines of PROLOG source code, resulting in 31000 words of compiled code. It has been tested on a small number of example cases.

VERIFY can readily handle the example of the counter given above. During execution, it prints a large quantity of monitoring information on the terminal, so that the user can watch its recursive descent through a design.

The most ambitious example that has been attempted is the module known as D74, decomposed in Figure 2. The example was taken from Genesereth [Genesereth 82], which addresses the problem of fault diagnosis. However, our version is enormously more detailed. D74 contains, at the top level, three multipliers and two adders. The multipliers are composed of slices, each of which contains a multiplexer, a shifter, and an adder. The adders are built from full-adders, which are built from selectors, which are built from logic gates. The logic gates are themselves described at two levels: an abstract boolean function level, and a level closer to an underlying NMOS transistor model, involving tri-state signals and stored charge. Enhancement and depletion mode transistors, along with joins and ground, are the primitives upon which the entire design rests. The design specification occupies about 430 lines of PROLOG data.

D74 is parametrized in the number of bits in its input data. An instance that has three 8-bit inputs involves 50 different types of module, from 8-bit multipliers, 16-bit adders, 15-bit adders, and so on, down to transistors, with 9 levels of structural hierarchy. There are 23,448 primitive parts, including 14,432 transistors. Verification of this design took 8 minutes and 15 seconds of cpu time on a DEC 2080 running compiled PROLOG. Of this time, about 2 minutes is spent in pretty-printing expressions for monitoring (which results in about 6000 lines of output).
Slightly larger designs have been verified (over 18000 transistors), but the current implementation begins to run out of space at about this level.

[Figure 2. Structural decomposition of D74. Six of the modules have variants differing in the values of their parameters.]

§6 Conclusion

VERIFY is only a first attempt at implementing a design verification system that is founded on a clear mathematical model, and that can deal with designs of an interesting degree of complexity. The present version is tuned in a few places to handle some special cases, and much must be done before it can be made into a real design tool. However, hints of the potential of this approach, and particularly the virtues of structured designs, are already evident. Work for the immediate future includes trying the system on real production designs, implementing the interactive proof checker, and extending the representations and methodology to handle conveniently such modules as memories. More ambitious examples can then be tackled.

References

[Floyd 67] Floyd, R. W., "Assigning Meanings to Programs," Proc. Amer. Math. Soc. Symp. in Applied Math., 19 (1967), 19-31.

[Foster 81] Foster, M. J., "Syntax-Directed Verification of Circuit Function," in VLSI Systems and Computations, H. Kung, B. Sproull, and G. Steele (eds.), Computer Science Press, Carnegie-Mellon University, Pittsburgh, 1981, 203-212.

[Genesereth 82] Genesereth, M. R., "Diagnosis Using Hierarchical Design Models," in Proc. National Conference on Artificial Intelligence, AAAI-82, Pittsburgh, August, 1982.

[Gordon 81] Gordon, M., "Two Papers on Modelling and Verifying Hardware," in Proc. VLSI International Conference, Edinburgh, Scotland, August, 1981.

[Mead and Conway 80] Mead, C., and L. Conway, Introduction to VLSI Systems, Addison-Wesley, 1980.

[Schwartz 82] Schwartz, R. L., and Melliar-Smith, P. M., "Formal Specification and Mechanical Verification of SIFT: A Fault-Tolerant Flight Control System," SRI International, Menlo Park, California, TR CSL-133, January, 1982.

[Shostak 79] Shostak, R. E., "A Practical Decision Procedure for Arithmetic with Function Symbols," Journal of the ACM 26, 2 (April, 1979), 351-360.

[Shostak 82] Shostak, R. E., Schwartz, R. L., and Melliar-Smith, P. M., "STP: A Mechanized Logic for Specification and Verification," in Proc. Sixth Conf. on Automated Deduction, Courant Institute, New York, June, 1982.

[Shostak 83] Shostak, R. E., "Verification of VLSI Designs," in Proc. Third Caltech Conf. on VLSI, Computer Science Press, March 1983.
1983
45
239
COMMUNICATION AND INTERACTION IN MULTI-AGENT PLANNING*

Michael Georgeff
Artificial Intelligence Center, SRI International,
333 Ravenswood Ave., Menlo Park, CA 94025.

*This research was supported in part by ONR contract N00014-80-C-0296 and in part by AFOSR contract F49620-X-C-0188.

individual robots have already been constructed. The method also extends to single-agent planning, where one tries to achieve subgoals separately and defers decisions as to how these subplans will finally be interleaved (e.g., as in NOAH [Sacerdoti 77]). Note that we are not concerned with some of the more problematic issues in multi-agent planning, such as questions of belief or persuasion (e.g., [Konolige 82]). Similarly, the type of communication act that is involved is particularly simple, and provides no information other than synchronizing advice (cf. [Appelt 82]).

Most approaches to planning view actions (or events) as a mapping from an old situation into a new situation (e.g., [McCarthy 68, Sacerdoti 77]). However, in cases where multiple agents can interact with one another, this approach fails to adequately represent some important features of actions and events (e.g., see [Allen 81, McDermott 82]). For example, consider the action of tightening a nut with a spanner. To represent this action by the changes that it brings about misses the fact that a particular tool was utilized during the performance of the action. And, of course, this sort of information is critical in allocating resource usage and preventing conflicts. Wilkins [Wilkins 82] recognized this problem with the STRIPS formulation, and extended the representation of actions to include a description of the resources used during the action. However, these resources are limited to being objects, and one cannot specify such properties of an action (or action instance) as "I will always remain to the north of the building", which might help other agents in planning to avoid a potentially harmful sequence of interactions.

In this paper we show how representing actions as sequences of states allows us to take account of both co-operative and harmful interactions between multiple agents. We assume that the duration of an action is not fixed, and that we can only know that an action has been completed by asking the agent that performed the action (or some observer of the action). This does not mean to say that we cannot take into account the expected duration times of actions, but rather that we are concerned with problems where this information is not sufficient for forming an adequate plan. For example, if some part in a machine fails, then knowing that delivery of a new part takes about 24 hours can help in planning the repair, but a good plan will probably want a local supervisor or agent to be notified on delivery of the part.

§2 Formalizing the Problem

We consider an action to be a sequence S1, S2, ...Sn of sets of states, intuitively those states over which the action takes place.* The domain of the action is the initial set of states S1, and the range is the final set of states Sn. The intermediate sets of states S2, ...Sn-1 are called the moments of the action.

A planning problem P consists of a set of states, S; a designated set of initial states, I, in S; a set of primitive actions, A, which can be performed by the various agents operating in the domain; and a set of goal states, G, in S.
For any given planning problem, a single-agent [unconditional] plan P is a description of a sequence of actions a1, a2, ...an from A such that

i. a1 is applicable to all initial states I (i.e., the domain of a1 contains I)
ii. for all i, 1 < i <= n, the action ai is applicable to all states in the range of ai-1
iii. an achieves the goal G (i.e., the range of an is contained in G).

A multi-agent plan for a problem P is a collection of plans for subproblems of P which are synchronized to be applicable to all initial states I and to achieve the goal G.

We will describe the problem domain using a predicate-calculus-like representation and assume that all actions satisfy the so-called "STRIPS assumption" [Nilsson 80]. Under the STRIPS assumption, all conditions that cannot be proved [under some suitable restriction] to have changed by the performance of an action are assumed to remain unchanged. Further, we are only concerned with problems in which the components of a world state involving distinct agents are sufficiently decoupled to permit us to assume that the effects of actions of one agent are largely independent of any other. Although violation of this restriction would not affect the validity of any solutions obtained, we would then be less certain of finding solutions, even if they existed.

The representation of actions that we will use is a generalization of the standard STRIPS representation. Each action description contains a pre-condition and a post-condition, denoting the domain and range of the action. In addition, we need to represent what happens during the action. This is achieved by specifying an unordered set of conditions to denote the moments (intermediate state sets) of the action. We will call these conditions the during conditions of the action. Again, under the STRIPS assumption, all other conditions are assumed to remain unchanged during the performance of the action, unless it can be proved otherwise.

For example, here is a possible description for the blocks world action that places one block on another:

    puton(x,y)
      pre:    holding(x) and clear(y)
      during: { holding(x) and clear(y) }
      post:   clear(x) and handempty and on(x,y)

*More generally, an action may be a set of such sequences. While this generalization can easily be accommodated within the formalism, it needlessly complicates our exposition.

In the above problem domain, we could also assume that there was a static domain constraint [Rosenschein 82] saying that holding(x) always implies clear(x).

§3 The Method

Let us assume that, given a planning problem, we have decomposed the original goal into appropriate subgoals. Without loss of generality, we will only consider decomposition into two subgoals. Also assume that we have separately generated plans for solving each of these subgoals (using some simple search technique, for example). Our problem now is to combine the two plans into a multi-agent plan that avoids conflicts and allows as many actions to proceed in parallel as possible.

The first thing we have to work out is the manner in which individual actions may interact with one another. Then we need to determine which of the feasible situations are "unsafe" (i.e., could lead us into deadlock), and finally we need to insert synchronization primitives into the two subplans (single-agent plans) so that these unsafe situations can be avoided.
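For concreteness, an action description in this form could be encoded as Prolog data roughly as follows (our sketch; the paper prescribes no particular encoding, and the operator declaration for 'and' is our assumption):

    :- op(950, xfy, and).

    % action(Name, Pre, DuringConditions, Post)
    action(puton(X, Y),
           holding(X) and clear(Y),               % pre-condition
           [holding(X) and clear(Y)],             % during conditions (unordered set)
           clear(X) and handempty and on(X, Y)).  % post-condition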
3.1 Interaction Analysis

Our first task is to establish which situations occurring in the two single-agent plans are incompatible with one another. For example, if, in one single-agent plan, a situation occurs where block A is on top of block B, and, in the other single-agent plan, a situation occurs where block B is required to be clear, then these two situations are clearly incompatible. Similarly, if one agent expects a component of some assembly to be held by some other agent, and that other agent is not holding it, then the situations are again incompatible.

We will now make this notion a little more precise. Consider two (single-agent) plans P and Q, and let p and q be some state descriptions occurring at some point in the action sequences for P and Q, respectively. We will denote by <p,q> the situation (set of states) where both p and q hold. If p and q are contradictory (i.e., we can prove that p and q cannot both be true at the same time), then of course <p,q> will denote the empty set and we will say that <p,q> is unsatisfiable. Otherwise, we will say that <p,q> is satisfiable.

Now consider what happens when we try to execute actions in parallel. Let us begin by describing the sequence of state sets defining an action by a sequence of conditions. Then, given two actions a = p1,p2,...pm and b = q1,q2,...qn, what can we say about the way they can be executed?

Assume we are in some situation <pi,qj>. To establish feasibility and safety, we need to know what the possible successor situations are. Say that, at this given instant, action a continues next, while action b remains at its current point of execution. Then, clearly, in the next situation pi+1 will hold. But will qj also hold in this new situation? In the general case, we would need to use the properties of the problem domain to determine what in fact does happen next. However, under the STRIPS assumption, we are guaranteed that qj holds in this new situation, provided <pi+1,qj> is satisfiable. Similarly, if action b proceeds before action a, then pi will continue to hold in the new situation, provided again that this new situation is satisfiable. Thus the possible successors of the situation <pi,qj> are just <pi+1,qj> and <pi,qj+1>.

The STRIPS assumption is thus seen to be very important, because it allows us to determine the manner in which actions can be interleaved solely on the basis of satisfiability of the pairwise combination of the conditions defining the actions. If this were not the case, we would have to examine every possible interleaving of the actions, inferring as we went just what the successor situations were and whether or not they were satisfiable. Even without taking into account the cost of performing the necessary inferences, the complexity of this process is of order (n + m)!/(n! m!), compared with a complexity of order n x m if we make the STRIPS assumption (and thus need only examine all possible pairs of conditions). Furthermore, in the general case it would not be possible to specify the during conditions as an unordered set; we would have to specify the actual order in which these conditions occur during the performance of the action. This complicates the representation of actions and, in any case, may not be information that we can readily provide.
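To see the size of this gap, a worked comparison (our numbers, not the paper's), for two actions each defined by ten conditions:

\[
\frac{(n+m)!}{n!\,m!}\bigg|_{n=m=10} = \binom{20}{10} = 184756
\qquad\text{versus}\qquad
n \times m = 100 .
\]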
We are now in a position to determine how actions as a whole can be safely executed. Consider two plans P = a1,a2,...an and Q = b1,b2,...bm, and assume actions ai and bj are next to be executed.

One possibility is that actions ai and bj can be executed in parallel. Because we have no control over the rates of the actions, all interleavings of the actions must therefore be possible. Under the STRIPS assumption, this will be the case if, and only if, all situations <p,q> are satisfiable, where p and q are any conditions defining the actions ai and bj, respectively. Such actions will be said to commute.

Alternatively, action ai could be executed while bj is suspended (or vice versa). For this to be possible, we require that the preconditions of bj be satisfied on termination of some action that follows ai in plan P. We will in fact impose somewhat stronger restrictions than this, and require that the preconditions of bj be satisfied on termination of ai itself.* This amounts to assuming that the preconditions for one of the actions appearing in one of the plans are unlikely to be achieved by the other plan (or that, in worlds where interactions are rare, so is serendipity). It is clear that, for actions satisfying the STRIPS assumption, and under the restriction given above, action ai can be executed while bj is suspended if, and only if, (1) the situation consisting of the preconditions of both actions is satisfiable and (2) the situation consisting of the postcondition of ai and the precondition of bj is satisfiable. If actions ai and bj have this property, we will say that ai has precedence over bj.

Note that it is possible for both actions to have precedence over each other, meaning that either can be executed while the other is suspended. Also, neither action may have precedence over the other, in which case neither can be executed. In the latter case, we will say that the actions conflict.

*This is simply a restriction on the solutions we allow, and simplifies the analysis. The fact that one of the plans might fortuitously achieve the preconditions for one or more actions in the other plan does not invalidate any solution we might obtain; it just means that the solution we obtain will not make constructive use of that fact.

In problem domains that are best described by predicate calculus or some parameterized form of action description, the above conditions need to be determined for the instances of the actions that occur in the particular plans under consideration. However, in many cases these conditions can be established for the primitive actions, irrespective of the particular instance. For example, in the blocks world, handempty conflicts with holding(x), irrespective of the value of x. Furthermore, one can often establish relatively simple isolation conditions under which classes of actions will or will not commute, irrespective of the particular instance. Thus, although the deductions necessary for determining satisfaction of situations may be time consuming, much of the analysis can be done once only for any given problem domain.
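These pairwise tests translate almost directly into Prolog (our hedged sketch, using the 'and' operator declared earlier; satisfiable/1, which would call the domain theory, and cond/2, pre/2, post/2, which enumerate an action's defining conditions, are all hypothetical):

    % a and b commute iff every pairwise combination of their
    % defining conditions is satisfiable.
    commute(A, B) :-
        forall((cond(A, P), cond(B, Q)),
               satisfiable(P and Q)).

    % a has precedence over b iff (1) both preconditions are jointly
    % satisfiable and (2) a's postcondition is satisfiable together
    % with b's precondition.
    precedence(A, B) :-
        pre(A, PA), pre(B, PB), satisfiable(PA and PB),
        post(A, QA), satisfiable(QA and PB).

    % a and b conflict iff neither has precedence over the other.
    conflict(A, B) :-
        \+ precedence(A, B),
        \+ precedence(B, A).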
3.2 Safety Analysis

We can now use these properties to set up the safety conditions for individual actions. Consider two plans P = a1,a2,...an and Q = b1,b2,...bm. Let begin(a) denote the beginning of an action a and end(a) the termination of the action. Let the initial conditions of the plans P and Q be denoted by end(a0) and end(b0), respectively. For each pair of actions ai and bj occurring in P and Q we then have the following:

i. If ai and bj do not commute, then <begin(ai), begin(bj)> is unsafe.
ii. If ai does not have precedence over bj, then <begin(ai), end(bj-1)> is unsafe.

The set of all such unsafe situations is called the interaction set.

However, we still need to determine whether these unsafe situations give rise to other unsafe situations; that is, we must determine which of all the possible situations occurring in the execution of the plans P and Q could result in deadlock. The rules that govern the safety of a given situation s are as follows:

i. If s = <begin(ai), begin(bj)>, then s is unsafe if either successor situation is unsafe.
ii. If s = <begin(ai), end(bj)>, then s is unsafe if <end(ai), end(bj)> is unsafe.
iii. If s = <end(ai), begin(bj)>, then s is unsafe if <end(ai), end(bj)> is unsafe.
iv. If s = <end(ai), end(bj)>, then s is unsafe if both successor situations are unsafe.

Together with those situations occurring in the interaction set, these are all the unsafe situations.

Unfortunately, to use these rules to determine which of all feasible situations are unsafe requires the examination of all possible interleavings of the actions comprising the plans, and the complexity of this process increases exponentially with the number of actions involved. However, in the kinds of problem domain that we are considering, actions rarely interact with each other, and as a result long subsequences of actions often commute. The following theorem, which is not difficult to prove, allows us to make use of this fact.

Commutativity Theorem. Let a1, a2,...am be a [consecutive] subsequence of actions in a plan P and b1,b2,...bn be a subsequence of actions in a plan Q. If all the actions ai, 1 <= i <= m, commute with the actions bj, 1 <= j <= n, then all possible situations occurring in all possible interleavings of these sequences will be unsafe if, and only if, the situations <end(am), begin(b1)> and <begin(a1), end(bn)> are unsafe. Further, all situations occurring in all interleavings of these sequences will be safe if, and only if, <end(am), end(bn)> is safe.

This theorem means that, if any two subsequences of actions commute with each other, then we need only consider those situations that occur on the "boundaries" of the sequences. Exactly which states within those boundaries are safe and unsafe depends only on the safety or otherwise of the boundary states, and this can be determined in a straightforward manner. As commutativity is common when interactions are rare, this result allows us to avoid the exploration of a very large number of interleavings and to substantially reduce the complexity of the problem. In particular, actions that commute with all actions in the other plan can simply be removed from consideration.

We will now use these results as a basis for our method of safety analysis. Assume we have constructed two single-agent plans and have performed the interaction analysis. All references to actions that commute with the other plan in its entirety (i.e., which do not appear in the interaction set) are removed from the plans, and the beginning and termination points of the remaining actions are explicitly represented. We will say that the resulting plans are simplified. Then, beginning with the initial situation, the conditions of safety given above are applied recursively to determine all situations that are feasible yet unsafe. However, whenever we reach a situation where following subsequences of actions commute, we use the commutativity theorem to avoid the explicit exploration of all possible interleavings of these subsequences.*

*In fact, the analysis of safety can be further simplified. These details need not concern us here, our intention being primarily to establish the importance of the STRIPS assumption and the commutativity theorem in avoiding a combinatorial explosion.

3.3 Interaction Resolution

The set of unsafe situations is next analyzed to identify contiguous sequences of unsafe situations. These represent critical regions in the single-agent plans. Once these critical regions have been determined, standard operating-system methods can be used to enforce synchronization of the actions in the plans so that conflicting critical regions will not both be entered at the same time.

We will use CSP primitives [Hoare 1978] for handling this synchronization. A program in that formalism is a collection of sequential processes, each of which can include interprocess communication operations. Syntactically, an interprocess communication operation names the source or destination process and gives the information to be transmitted. In Hoare's notation, the operation "send s to process P" is written P!s and the operation "receive s from process P" is P?s. Semantically, when a process reaches a communication operation, it waits for the corresponding process to reach the matching communication operation. At that point the operation is performed and both processes resume their execution.

The synchronization is achieved as follows. At the beginning and end of each critical region R we set up a communication command to a supervisor S, respectively S!begin-R and S!end-R. The supervisor then ensures that no conflicting critical regions are allowed to progress at the same time. Placing the communication commands in the original single-agent plans is clearly straightforward. So all we now have to do is construct the scheduler, which is a standard operating-systems problem.

3.4 Example

We will consider an example where two robots are required to place some metal stock in a lathe, one making a bolt and the other a nut. Only one robot can use the lathe at a time. We will not formally provide the details of the actions and the problem domain, but only sufficient to give the idea behind the analysis and the solution. The fact that the lathe can only be used by one robot at a time is represented as a static constraint on the problem domain. The actions are informally as follows:

    a1m: agent 1 moves to the lathe
    a2m: agent 2 moves to the lathe
    a1p: agent 1 places metal stock in lathe
    a2p: agent 2 places metal stock in lathe
    a1b: agent 1 makes a bolt
    a2n: agent 2 makes a nut
    a1f: agent 1 moves to end
    a2f: agent 2 moves to end

The preconditions and during conditions for actions a1b and a2n include the constraint that the lathe must be in the possession of the appropriate agent, as do the postconditions and during conditions for actions a1p and a2p.

Assume that a simple planner produces the following single-agent plans:

    a1m - a1p - a1b - a1f
and
    a2m - a2p - a2n - a2f

The following precedence and commutativity properties can then be established:

i. actions a1b and a2n conflict with one another.
ii. actions a1p and a2p each have precedence over the other, but do not commute.
iii. action a1b has precedence over a2p, but not vice versa.
iv. action a2n has precedence over a1p, but not vice versa.
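The interaction-set construction itself (rules i and ii of §3.2) can be sketched the same way; this is our own speculative encoding, with plan_action/2 and prev_in_plan/3 hypothetical, the commute/2 and precedence/2 predicates taken from the earlier sketch, and the symmetric rules for the second plan omitted:

    % Rule (i): a pair of non-commuting actions must not run together.
    interaction_unsafe(sit(begin(A), begin(B))) :-
        plan_action(1, A), plan_action(2, B),
        \+ commute(A, B).
    % Rule (ii): A may not start just after B's predecessor ends
    % unless A has precedence over B.
    interaction_unsafe(sit(begin(A), end(BPrev))) :-
        plan_action(1, A), plan_action(2, B),
        prev_in_plan(2, B, BPrev),
        \+ precedence(A, B).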
We now proceed to determine the unsafe situations. First, the interaction set is determined to be:

    <begin(a1b), begin(a2n)>    <begin(a1b), end(a2p)>
    <end(a1p), begin(a2n)>      <begin(a1p), begin(a2p)>
    <begin(a1b), begin(a2p)>    <end(a1p), begin(a2p)>
    <begin(a1p), begin(a2n)>    <begin(a1p), end(a2p)>

We next form the simplified solutions:

    begin(a1p) - end(a1p) - begin(a1b) - end(a1b)
    begin(a2p) - end(a2p) - begin(a2n) - end(a2n)

Then we perform the safety analysis, which, in this case, returns the set of unsafe situations unchanged from the interaction set. On concatenating consecutive elements, we get only two critical regions: begin(a1p) - end(a1b) conflicts with begin(a2p) - end(a2n).

Finally, we insert CSP commands into the original plans:

Solution for agent 1 (P):
    a1m - S!begin(a1p) - a1p - a1b - S!end(a1b) - a1f

Solution for agent 2 (Q):
    a2m - S!begin(a2p) - a2p - a2n - S!end(a2n) - a2f

Solution for the synchronizer (S):*
    [ not N ; P?begin(a1p) -> M := true
    [] not M ; Q?begin(a2p) -> N := true
    [] true ; P?end(a1b) -> M := false
    [] true ; Q?end(a2n) -> N := false ]

Both M and N are initially set to "false".

The solution obtained is, of course, the obvious one. Both agents must advise the supervisor that they wish to put stock in the lathe, and can only proceed to do so when given permission. Both agents must also advise the supervisor when they have finished with the lathe. On his part, the supervisor makes sure that only one agent at a time is putting stock into the lathe and using it. Notice that the synchronizer allows any interleaving or parallel execution of the single-agent plans that does not lead to deadlock. Further, the synchronizer allows the plans to be continually executed, which is useful for production-line planning.

Although the problem described above involved the avoidance of harmful interactions (mutual exclusion), the method can equally well be applied to problems that require co-operation between agents. The reason is that, unless the actions are synchronized to provide the required co-operation, situations will arise which are unsatisfiable. For example, if two agents are required to co-operate to paint a block of wood, one holding the piece and the other painting it, then any situation where one agent was painting the wood while the other was not holding it would be unsatisfiable.

The multi-agent plan synthesizer described in this paper has been used to solve a number of tasks involving both co-operation and interaction avoidance. These problems include two arms working co-operatively to bolt subassemblies together, some typical blocks world problems requiring "non-linear" solutions, and various "readers and writers" problems.

*The form "[] <guard> -> <command>" is a guarded command (see [Hoare 78]), and the command following the symbol "->" can only be executed if the execution of the guard (i.e. the boolean expression and the input command preceding "->") does not fail.
§4 Conclusions

We have presented a simple and efficient technique for forming flexible multi-agent plans from simpler single-agent plans. The critical features of the approach are that

i. actions are represented as sequences of states, thus allowing the expression of more complex kinds of interaction than would be possible if simple state-change operators were used, and
ii. the STRIPS assumption and commutativity conditions are used to avoid the explicit generation of all possible interleavings of the actions comprising the plans, thus avoiding a combinatorial explosion.

While the approach does not guarantee solutions to some classes of problem involving complex interactions between single-agent plans, it has wide applicability in many real-world settings, such as in automated factories and co-operative robot assembly tasks. Future work will extend the formalism to include conditional plans and hierarchical planning techniques.

Acknowledgments

The author wishes to thank Peter Cheeseman for his critical reading of this paper.

References

[1] Allen, J.F., "A General Model of Action and Time", University of Rochester, Comp. Sci. Report TR 97, 1981.
[2] Appelt, D., "Planning Natural Language Utterances", in Research on Distributed Artificial Intelligence, Interim Report, AI Center, SRI International, Menlo Park, Ca., 1982.
[3] Hoare, C.A.R., "Communicating Sequential Processes", Comm. ACM, Vol. 21, pp. 666-677, 1978.
[4] Konolige, K., "A First Order Formalization of Knowledge and Action for a Multiagent Planning System", in Research on Distributed Artificial Intelligence, 1982.
[5] McCarthy, J., in Minsky (ed.), Programs with Common Sense, MIT Press, Cambridge, Mass., 1968.
[6] McDermott, D., "A Temporal Logic for Reasoning about Processes and Plans", Yale University Comp. Sci. Research Report 190, 1981.
[7] Nilsson, N.J., Principles of Artificial Intelligence, Tioga Press, Palo Alto, Ca., 1980.
[8] Rosenschein, S., "Plan Synthesis: A Logical Perspective", Proc. IJCAI-81, Vancouver, Canada, pp. 331-337, 1981.
[9] Sacerdoti, E.D., A Structure for Plans and Behaviour, Elsevier, North Holland, New York, 1977.
[10] Wilkins, D.E., "Parallelism in Planning and Problem Solving: Reasoning about Resources", Tech Note 258, AI Center, SRI International, Menlo Park, Ca., 1982.
1983
46
240
An Overview of Meta-Level Architecture

Michael R. Genesereth
Stanford University
Computer Science Department
Stanford, California 94305

Abstract: One of the biggest problems in AI programming is the difficulty of specifying control. Meta-level architecture is a knowledge engineering approach to coping with this difficulty. The key feature of the architecture is a declarative control language that allows one to write partial specifications of program behavior. This flexibility facilitates incremental system development and the integration of disparate architectures like demons, object-oriented programming, and controlled deduction. This paper presents the language, describes an appropriate interpreter, and discusses the issues of compiling. It illustrates the architecture with a variety of examples and reports some experience in using the architecture in building expert systems.

1. Introduction

The actions of most Artificial Intelligence programs can be divided into distinct "base-level" and "meta-level" categories. Base-level actions are those which, when strung together, achieve the program's goals. Meta-level actions are those involved in deciding which base-level actions to perform. For example, in the Blocks World, all physical movements of a robot arm would be base-level actions, and all planning would be meta-level. In a data base management system, data base accesses and modifications would be base-level actions, and all query optimization would be meta-level. In an automated deduction system, inference steps would be base-level, and the activity of deciding the order in which to perform inference steps would be meta-level.

A good way of keeping this distinction clear is to view the base-level and meta-level components as separate agents, as suggested in figure 1. The base-level agent perceives and modifies a given environment. The meta-level agent operates in an enlarged environment that includes the base-level agent. Its goal is to observe the base-level agent, decide the ideal actions for it to perform in order for it to achieve its goals efficiently, and finally to modify the base-level agent so that it performs those actions. The role of the meta-level agent is similar to that of an "instruction-fetch unit" in many computer systems. The primary difference is that for AI programs the instruction-fetch can be quite complex. Meta-level architecture is a knowledge engineering approach to coping with this complexity.

The central idea in meta-level architecture is the use of a declarative language for describing behavior. Within this language one can write sentences about the state of the base-level agent, its actions, and its goals; and the meta-level agent can reason with these sentences in deciding the ideal action for the base-level to perform.

[Figure 1 - Relationship of base-level and meta-level. The figure shows the meta-level observing and modifying the base-level agent, which in turn perceives and modifies its environment.]

A key feature of the control language is that it allows partial specifications of behavior. One advantage of this is incremental system development. For example, we could start from a (possibly search-intensive) logic program and incrementally add control information until one obtains a (possibly search-free) traditional program. Another advantage is the integration of different AI architectures, e.g. demons, object-oriented programming, and controlled deduction.
In the absence of a language in which partial specifications of behavior can be expressed, many arbitrary decisions must be made in implementing an architecture, thus inhibiting integration. A partial specification language allows an architectural idea to be formalized without this arbitrary detail.

The idea of meta-level architecture is not a new one. McCarthy described a similar idea in 1959 [McCarthy]; and, over the years, a number of other researchers [Brown, Kowalski] [Davis] [Doyle 1980] [Gallaire, Lasserre] [Hayes 1973] [Sandewall] [K. Smith] [Warren] [Weyhrauch] have followed suit. However, to date explicit introspective reasoning has met with little serious application. This paper is concerned with the problems of putting the idea of meta-level architecture to practical use. Section 2 describes the control language and shows how it can be used to encode various other architectures. Section 3 presents an interpreter for the language, and section 4 discusses the issues of compiling. The conclusion describes some experience using meta-level architecture, mentions some directions for future work, and summarizes the key points of the paper. This paper is an abridged version of a previous paper [Genesereth, Smith].

2. Control Language

In this paper the syntax of predicate calculus is used for writing meta-level axioms, though any other language with equivalent expressive power would do as well. Several syntactic conventions are used to simplify the examples. Upper case letters are used exclusively for constants, functions, and relations. Lower case letters are used for variables. All free variables are universally quantified.

When the base-level agent is a data base program, care must be taken to avoid confusion between base-level and meta-level expressions. In the examples below, expressions in the base-level language itself are designated by enclosing quotation marks, e.g. "FATHER(A,B)". The examples are sufficiently simple that variables occurring within quotation marks can be assumed to be meta-variables, i.e. they range over expressions in the base-level language.

The vocabulary of the language consists of the logical symbols of predicate calculus, a small application-independent vocabulary of actions, and all the application-dependent symbols necessary to describe the base-level agent and its environment. The vocabulary of actions is listed below. In the descriptions of each symbol, the word "task" is used to refer to an individual occurrence of an action, e.g. the printing of a file or the computation of a sum. If an action is performed repeatedly, each occurrence is considered a separate task, even if the arguments are exactly the same.

OPR(<k>) designates the operation of which the task <k> is an instance.
IN(<i>,<k>) designates the <i>th input to the task <k>.
OUT(<i>,<k>) designates the <i>th output of the task <k>.
BEG(<k>) designates the start time of the task <k>.
END(<k>) designates the stop time of the task <k>.
TIME(<t>) states that <t> is the current time.
EXECUTED(<k>) states that the task <k> has taken place or definitely will take place.
RECOMMENDED(<k>) states that task <k> is the recommended action for a program to take.
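As a small illustration (the task name T1 and its bindings are ours, not the paper's), a single printing task might be described in this vocabulary as:

    OPR(T1)=PRINT, IN(1,T1)=FILE37, BEG(T1)=8, END(T1)=12, EXECUTED(T1)

i.e., T1 is an instance of the PRINT operation, applied to FILE37, that ran from time 8 to time 12.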
During inference, it is typical for more than one task to be applicable, and user-supplied ordering information can be used to determine which of these is recommended. If more than one task is recommended, it can be assumed that they are all equally good.

While this small vocabulary allows one to formally specify a wide variety of control structures, it isn't particularly perspicuous. Expressive power and expressive facility aren't always compatible. On the other hand, the language does provide a foundation for more perspicuous languages, and control specifications expressed within these languages can be automatically re-expressed in terms of the vocabulary above and thereby intermingled.

3. Interpreting Control Specifications

An interpreter for this language can be written in LISP as shown in figure 2. Each time around the loop, the program updates the time and computes a subroutine for the base level to perform. The subroutine is then executed, the execution is noted, and the cycle repeats.

    (DEFINE SCHEDULER ()
      (REPEAT (SETQ TIME (FIND 't '(TIME t)))
              (UNASSERT `(TIME ,TIME))
              (ASSERT `(TIME ,(+ TIME 1)))
              (SETQ TASK (FIND 'z '(RECOMMENDED z)))
              (APPLY (GETOPR TASK) (GETARGS TASK))
              (ASSERT `(EXECUTED ,TASK))))

    Figure 2 - An interpreter for meta-level architecture

In deciding on a base-level action, SCHEDULER uses the inference procedure FIND to obtain a binding for the variable z that makes the proposition (RECOMMENDED z) true. This is the only way in which the meta-level is called; and so, to be useful, all control axioms must contribute either directly or indirectly to this deduction. A good choice of inference procedure for the axioms in this paper would be some version of forward or backward chaining, with a nonmonotonic treatment of NOT. Efficiency could be enhanced through the use of caching [Lenat] to avoid recomputation and truth maintenance [Doyle 1979] to handle changes in state.

In order for FIND to draw proper conclusions from the control axioms, all information about the state of the base-level agent and its environment must be correct. There are several practical ways of keeping this information up-to-date. One approach is for the meta-level to have an explicit model of the base-level program and its environment. To do this, it must have a description (axiomatization) of the initial state of the world and a description of all the effects and non-effects of the base-level actions. After each action the meta-level can use these axioms to update its model of the world. A second approach is to equip the meta-level component with sensory capabilities and allow it to examine the base-level when necessary. For example, if the meta-level needs to know whether a proposition is in the base-level's data base, it can be given a sensory action which examines the base-level data base. The appropriate sensory action for each proposition can be determined at system-building time and can be recorded by an appropriate meta-meta-level procedural attachment (a "semantic attachment" as described in [Weyhrauch]). A third approach is to modify the actions of the base-level so that they automatically update the data base of the meta-level. The advantage of this approach is that the addition of facts to the meta-level data base can trigger whatever demons depend on them. The choice of approach to use in a given application depends on the domain and the type of control axioms one intends to write. In many situations a combination of approaches is best.
4. Examples

The examples in this section illustrate how a variety of different control structures can be implemented within meta-level architecture. The examples are not meant to be complete, only suggestive. For more detail, the reader should see [Genesereth, Smith].

4.1 Demon Systems

The characteristic control feature of the blackboard architecture [Erman, Lesser] is the use of demons, or "knowledge sources", to specify program activity. A demon consists of a set of trigger conditions and an action. Whenever the data base, or "blackboard", satisfies the trigger conditions for a demon, its action becomes applicable. The system then selects an applicable action and executes it.

The encoding of demons in a meta-level architecture is quite simple. The action associated with the demon is given a name, and its conditions are expressed on the left hand side of a corresponding applicability axiom. The axioms below are the control axioms for a simple consultation system. Axiom A1 states that, if a proposition is proved and there is a base-level axiom that mentions it on the left hand side, the system should forward chain on that rule. The second axiom states that, if there is a proposition to be proved and there is a rule that mentions it on the right hand side, then the system should backchain on that rule. The third axiom states that, if there is a proposition to be proved, it is okay to ask the user so long as the proposition is "askable".

    A1: PROVED(p) & INDB("p=>q") => APPLICABLE(ADDINDB(q,p,"p=>q"))
    A2: WANTPROVED(q) & INDB("p=>q") => APPLICABLE(ADDGOAL(p,q,"p=>q"))
    A3: WANTPROVED(q) & ASKABLE(q) => APPLICABLE(ASK(q))

Axiom A4 below guarantees that every applicable demon is recommended. For applications where a demon should be run only once after being triggered, axiom A4 can be replaced by axiom A5.

    A4: APPLICABLE(k) => RECOMMENDED(k)
    A5: APPLICABLE(k) & NOT EXECUTED(k) => RECOMMENDED(k)

4.2 Search Control

In many AI programs it is common for more than one task to be applicable at each point in time. Axiom B1 shows how ordering axioms can be used in determining which applicable task is recommended. It states that an applicable task is recommended only if no other applicable task is "preferred" to it.

    B1: APPLICABLE(k) & NOT(Ex APPLICABLE(x) & PREFERRED(x,k))
        => RECOMMENDED(k)

The axioms below are some examples of how this capability might be used. Axiom B2 constrains a program to perform all backward chaining before asking its user any questions. Axiom B3 states that a program should use rules of greater certainty before rules of lesser certainty. Axiom B4 states that, whenever a program has a choice of backchaining tasks to perform, it should work on the one with fewer solutions.

    B2: OPR(k1)=ADDGOAL & OPR(k2)=ASK => PREFERRED(k1,k2)
    B3: OPR(k1)=ADDGOAL & OPR(k2)=ADDGOAL
        & CF(IN(3,k1))>CF(IN(3,k2)) => PREFERRED(k1,k2)
    B4: OPR(k1)=ADDGOAL & OPR(k2)=ADDGOAL
        & NUMOFSOLNS(IN(1,k1))<NUMOFSOLNS(IN(1,k2)) => PREFERRED(k1,k2)
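Read with negation-as-failure, axiom B1 is essentially one Prolog clause (a hedged rendering of ours; the paper states its axioms in predicate calculus, and the predicate names are lower-cased here only because Prolog requires it):

    % B1: an applicable task is recommended unless some other
    % applicable task is preferred to it.
    recommended(K) :-
        applicable(K),
        \+ (applicable(X), preferred(X, K)).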
Obviously these examples do not exhaust the space of possibilities. Breadth-tapering, quiescence, and a large variety of other search techniques can be specified in similar fashion.

4.3 Traditional Programming

In certain situations it is desirable to specify control in procedural terms. The vocabulary below, along with the vocabulary for tasks, allows the description of incompletely specified programs (having arbitrary concurrency).

FIRST(<k1>,<k>) states that the task <k1> is the first subtask of the composite task <k>.
CPATH(<k1>,<k2>) means that there is a direct control path from task <k1> to task <k2>. After <k1> is executed, <k2> becomes executable.
LAST(<kn>,<k>) states that task <kn> is the last subtask of task <k>. Due to control branching, a composite task may have more than one last subtask.
PRIMITIVE(<op>) states that the action <op> is a base-level subroutine.

As an example, consider a simple blocks world program to build a tower of three blocks. A LISP version is shown below.

(DEFUN TOWER (X Y Z)
  (PUTON Y Z)
  (PUTON X Y))

Using the vocabulary above, this program can be specified as shown below. The axiom states that an action k is an instance of TOWER if it has substeps k1 and k2 as shown. One should remember in looking at this description that its size is due to the fact that all control information is stated explicitly. This explicit statement is what enables the partial description of programs. Such partial descriptions are not possible in traditional programming languages because much of the control and data flow is implicit in the code.

OPR(k)=TOWER <=> E k1,k2
  OPR(k1)=PUTON & IN(1,k1)=B & IN(2,k1)=C &
  OPR(k2)=PUTON & IN(1,k2)=A & IN(2,k2)=B &
  CPATH(k1,k2) & FIRST(k1,k) & LAST(k2,k)

The axioms that enable the execution of this code are shown below. With these axioms the substeps of the TOWER program would be executed in the proper order. For example, if the task of building a tower of blocks A, B, and C became applicable, axiom C1 would make the call to PUTON with arguments B and C applicable. Then by axioms C2 and C5 it would be recommended as well. Once that is executed, the call to PUTON with arguments A and B would become applicable by axiom C3. Axiom C4 guarantees that when all of the subactions of a procedure are executed, the procedure is also considered executed.

C1: APPLICABLE(k) & FIRST(k1,k) => APPLICABLE(k1)
C2: APPLICABLE(k) & PRIMITIVE(k) & NOT EXECUTED(k) => RECOMMENDED(k)
C3: CPATH(k1,k2) & EXECUTED(k1) => APPLICABLE(k2)
C4: EXECUTED(kn) & LAST(kn,k) => EXECUTED(k)
C5: PRIMITIVE(PUTON)
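To make the behaviour of axioms C1-C5 concrete, here is a sketch of an executor driven by the FIRST and CPATH facts. It handles only linear chains of substeps, not branching control paths, and all of its names are illustrative rather than part of the paper's vocabulary.

(defun run-composite (task first-facts cpath-facts subtask-ops)
  "Run the substeps of TASK in the order induced by the FIRST and CPATH
facts.  FIRST-FACTS and CPATH-FACTS are lists of (TASK SUBTASK) and
(K1 K2) pairs; SUBTASK-OPS associates substep names with closures."
  (let ((step (second (assoc task first-facts))))      ; axiom C1
    (loop while step
          do (funcall (cdr (assoc step subtask-ops)))  ; axioms C2/C5
             (setq step (second (assoc step cpath-facts)))))) ; axiom C3

;; The TOWER plan of the text:
;; (run-composite 'tower1
;;                '((tower1 k1))
;;                '((k1 k2))
;;                (list (cons 'k1 (lambda () (print '(puton b c))))
;;                      (cons 'k2 (lambda () (print '(puton a b))))))

The loop terminates when a substep has no outgoing CPATH, which is the procedural image of axiom C4's treatment of LAST substeps.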
4.4 Object-Oriented Programming and Procedural Attachment

From the point of view of implementation, the primary difference between traditional programming and object-oriented programming lies in the way one's code is organized. In traditional programming the code to perform an operation on an object is associated with the operation and is usually conditional on the object or its type. In object-oriented programming the code is associated with the object, its class or some superclass, and it is usually conditional on the type of operation involved. This orientation allows one to interpret objects as active agents and operations as the results of passing messages to those agents. A typical application is in the world of graphics. One can define each desired shape as a distinct type of object, each with its own methods for redisplay, rotation, panning, etc. The specific subroutine for each object-operation pair can be recorded by writing meta-level axioms. For example, the sentences below specify the subroutines for manipulating squares.

D1: SQUARE(x) => TODO(DISPLAY,x,DRAWSQUARE)
D2: SQUARE(x) => POLYGON(x)
D3: POLYGON(x) => TODO(ROTATE,x,ROTATEPOLYGON)
D4: POLYGON(x) => PLANAR(x)
D5: PLANAR(x) => TODO(PAN,x,PANPLANE)

Note that this formulation is a slight generalization of object-oriented programming. First of all, subroutines are not associated primarily with objects or operations but rather with operation-object tuples. Secondly, the handling of multi-argument operations is simpler than in pure object-oriented programming. Axioms D6 and D7 operationalize the above axioms.

D6: APPLICABLE(g(x)) & TODO(g,x,f) => APPLICABLE(f(x))
D7: APPLICABLE(k) & NOT EXECUTED(k) => RECOMMENDED(k)
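The inheritance implicit in axioms D1-D5 can be sketched procedurally as a walk up a class chain. The data structures below are our own rendering of those axioms, not part of the paper's system.

(defvar *superclass* '((square . polygon) (polygon . planar)))

(defvar *methods* '(((display . square)  . drawsquare)
                    ((rotate  . polygon) . rotatepolygon)
                    ((pan     . planar)  . panplane)))

(defun todo (operation class)
  "The subroutine for OPERATION on an object of CLASS, inheriting
upward through *SUPERCLASS* just as D2 and D4 carry SQUARE facts
to POLYGON and PLANAR."
  (when class
    (or (cdr (assoc (cons operation class) *methods* :test #'equal))
        (todo operation (cdr (assoc class *superclass*))))))

;; (todo 'pan 'square) => PANPLANE, via SQUARE -> POLYGON -> PLANAR.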
4.5 Planning

The fundamental characteristic of meta-level architecture is that it allows a programmer to exert an arbitrary amount of control over a program's operation. The inference procedures defined above exhibit a moderate amount of programmer control. Programming clearly represents an extreme in which the programmer makes all of the decisions. Planning represents the other extreme in which the programmer specifies only the program's goal and the program must decide for itself what actions to take to achieve that goal. The axioms for a planning program consist of those for object-oriented programming together with those for traditional programming. The difference from object-oriented programming is that the computation of the TODO relation is more elaborate. Once a plan has been found, the traditional programming axioms are necessary to execute it.

5. Compiling Control Specifications

The primary advantage of meta-level architecture is that it allows one to give a program an arbitrary amount of procedural advice. In some cases this can lead to substantial (possibly exponential) savings in runtime. Unfortunately, there is a cost in using a meta-level architecture. Meta-level reasoning can be at least as expensive as reasoning about an application area; and even when this reasoning is minimal, there is some overhead. The problem is particularly aggravating in situations where meta-level control yields no computational savings.

One way to enhance program efficiency is to rewrite the control axioms in a form that lessens meta-level overhead. Given an appropriate inference procedure for the meta-level, the least expensive control structure is that of a traditional program. Therefore, the ideal is to reformulate an axiom set as a traditional program. This can frequently be done, if information about the application area is used. A more drastic approach to increasing the efficiency of a meta-level architecture is to eliminate meta-level processing whenever possible, e.g. in the execution of a fully specified deterministic program. Given an adequate set of base-level actions, any procedure described at the meta-level can be "compiled" into a base-level program. This sort of compilation is more complex than that done in logic programming systems like PROLOG because the compiler must take into account arbitrary control specifications.

The possibility of partial compilation is particularly interesting. In some cases it is possible to compile part of a program into base-level subroutines while retaining a few meta-level "hooks". A good example of this is an interpreter for an object-oriented programming language. The code for each subroutine can be written as shown below.

(DEFINE DRAW (X) (FINDMETHOD 'DRAW X))
(DEFINE ROTATE (X) (FINDMETHOD 'ROTATE X))
(DEFINE PAN (X) (FINDMETHOD 'PAN X))

(DEFINE FINDMETHOD (OP ARG)
  (CALL (FIND 'z `(TODO ,OP ,ARG z)) ARG))

Each subroutine "traps" to the meta-level to find out which specific subroutine to use. Answering this question may involve inheritance over a type hierarchy or a more general inference. However, it is more efficient than the standard meta-level architecture because there is a built-in assumption that only one subroutine is relevant. Furthermore, the trap can be omitted where the flexibility of meta-level deduction is unnecessary.
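One further way to cheapen the remaining traps, in the spirit of the caching mentioned earlier, is to memoize the meta-level answer per operation-class pair. The sketch below assumes the TODO lookup of the previous sketch and a hypothetical OBJECT-CLASS accessor; caching is sound only while the TODO facts do not change.

(defvar *method-cache* (make-hash-table :test #'equal))

(defun findmethod-cached (op arg)
  "Like FINDMETHOD, but the meta-level lookup runs at most once per
<operation, class> pair."
  (let ((key (cons op (object-class arg))))
    (multiple-value-bind (method found) (gethash key *method-cache*)
      (if found
          method
          (setf (gethash key *method-cache*)
                (todo op (object-class arg)))))))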
6. Conclusions

The feasibility of meta-level architecture for practical programming has been tested by its use in the construction of a number of different systems. These include an automated diagnostician for computer hardware faults (DART [Genesereth 9/82]), a calculus program (MINIMA [Brown]), a simulator for digital hardware (SHAM [Singh]), a mini-tax consultant [Barnes, Joyce, Tsuji], and an implementation of the NEOMYCIN infectious disease diagnosis/tutoring system [Bock, Clancey]. All of these programs were built with the help of MRS [Genesereth, Greiner, Smith] [Clayton] [Genesereth 11/82], a knowledge representation system aimed at facilitating the construction of meta-level programs.

The work on DART best illustrates the way in which one uses meta-level architecture. The first step was to build a data base of facts about digital electronics, e.g. the behavior of an "and" gate, and the circuit to be diagnosed, e.g. its part types and their interconnections. A general inference procedure (linear input resolution) was then applied to these propositions to generate tests. In order to enhance efficiency, a variety of control axioms were then added. Finally, many of the axioms were compiled (partly by hand) into the resolution procedure.

While the architecture described here answers many of the questions concerned with the practical use of meta-level control, there remain enormous opportunities for significant further research. The most important of these is the development of a compiler able to translate arbitrary control specifications into reasonable LISP code. Also important is the discovery of powerful domain-independent control rules, e.g. the ordering of conjuncts in problem solving, the choice of problem solving method, knowing when to cache, etc. Finally, the architecture needs to be extended to the handling of multiple agents.

In summary, the purpose of this paper is to present the details of a practical meta-level architecture for programs. A program written in this style includes a set of base-level actions and a set of meta-level axioms that constrains how these actions are to be used. The meta-level axioms are written in an expressive meta-level language and are used by a general inference procedure in computing the ideal action to perform at each point in time. The architecture is sufficiently general that a wide variety of traditional control structures can be written as sets of meta-level axioms. While this generality engenders a certain amount of inefficiency in any straightforward implementation, a variety of specialized techniques can be used to avoid inordinate overhead. Finally, experience with meta-level architectures indicates that it makes programs more aesthetic, simpler to build, and easier to modify.

Acknowledgements

Over the past two years we have benefitted by interactions with numerous people. In particular, Jon Doyle, Pat Hayes and Doug Lenat have had considerable impact on our thinking. Special thanks to Danny Bobrow, Bill Clancey, Lee Erman, and Rick Hayes-Roth for comments on earlier drafts of this paper. Support for this work was provided by ONR contract N00014-81-K-0004.

References

T. Barnes, R. Joyce, S. Tsuji: "A Simple Income Tax Consultant", Stanford University Heuristic Programming Project, 1982.
J. A. Barnett, L. Erman: "Making Control Decisions in an Expert System is a Problem-Solving Task", USC/ISI Technical Report, April 1982.
C. Bock, W. J. Clancey: "MRS/NEOMYCIN: Representing meta-control in predicate calculus", HPP-82-31, Stanford University Heuristic Programming Project, December 1982.
D. Brown: "MINIMA", Teknowledge Inc., 1982.
K. Bowen, R. Kowalski: "Amalgamating Language and Metalanguage in Logic Programming", Logic Programming, K. Clark, S. Tarnlund (eds), Academic Press, New York, 1982, pp 153-172.
J. Clayton: "Welcome to the MRS Tutor!!!", HPP-82-33, Stanford University Heuristic Programming Project, November 1982.
R. Davis: "Meta-Rules: Reasoning about Control", Artificial Intelligence, Vol. 15, 1980, pp 179-222.
J. Doyle: "A Truth Maintenance System", Artificial Intelligence, Vol. 12, 1979, pp 231-272.
J. Doyle: "A Model for Deliberation, Action, and Introspection", TR-581, M.I.T. Artificial Intelligence Laboratory, May 1980.
L. Erman, V. Lesser: "A Multi-Level Organization for Problem-Solving Using Many Diverse, Cooperating Sources of Knowledge", Proceedings of the Fourth International Joint Conference on Artificial Intelligence, 1975, pp 483-490.
H. Gallaire, C. Lasserre: "Metalevel Control for Logic Programs", Logic Programming, K. Clark, S. Tarnlund (eds), Academic Press, New York, 1982, pp 173-185.
M. R. Genesereth, R. Greiner, D. E. Smith: "The MRS Dictionary", HPP-80-24, Stanford University Heuristic Programming Project, November 1982.
M. R. Genesereth: "Diagnosis Using Hierarchical Design Models", Proceedings of the National Conference on Artificial Intelligence, August 1982.
M. R. Genesereth: "An Introduction to MRS for AI Experts", HPP-82-27, Stanford University Heuristic Programming Project, November 1982.
M. R. Genesereth, D. E. Smith: "Meta-level Architecture", HPP-81-6, Stanford University Heuristic Programming Project, December 1982.
P. Hayes: "A Logic of Actions", Machine Intelligence 6, American Elsevier, New York, 1970, pp 533-554.
P. Hayes: "Computation and Deduction", Proceedings of the Symposium on Mathematical Foundations of Computer Science, Czechoslovakian Academy of Sciences, 1973, pp 105-117.
K. Konolige: "A First Order Formalization of Knowledge and Action for a Multiagent Planner", Technical Note 232, SRI International Artificial Intelligence Center, December 1980.
D. B. Lenat, F. Hayes-Roth, P. Klahr: "Cognitive Economy", HPP-79-15, Stanford University Heuristic Programming Project, June 1979.
J. McCarthy: "Programs with Common Sense", Proceedings of the Teddington Conference on the Mechanization of Thought Processes, H. M. Stationery Office, London, 1960. Reprinted in Semantic Information Processing, M. Minsky (ed), MIT Press, Cambridge, 1968, pp 403-410.
J. McCarthy: "First Order Theories of Individual Concepts and Propositions", in J. Hayes, D. Michie, L. Mikulich (eds), Machine Intelligence 9, Ellis Horwood, Chichester, 1979, pp 120-147.
C. Rich: "Inspection Methods in Programming", AI-TR-604, M.I.T. Artificial Intelligence Laboratory, June 1981.
E. Sandewall: "Ideas about Management of LISP Data Bases", Working Paper 86, M.I.T. Artificial Intelligence Laboratory, January 1975.
N. Singh: "SHAM: A Hierarchical Simulator for Digital Circuits", Stanford University Heuristic Programming Project, 1982.
B. Smith: "Reflection and Semantics in a Procedural Language", TR-272, M.I.T. Artificial Intelligence Laboratory, January 1982.
D. E. Smith, M. R. Genesereth: "Ordering Conjuncts in Problem Solving: Serious Applications of Meta-Level Reasoning I", HPP-82-9, Stanford University Heuristic Programming Project, April 1982.
D. E. Smith, M. R. Genesereth: "Finding All the Solutions to a Problem: Serious Applications of Meta-Level Reasoning II", HPP-83-21, Stanford University Heuristic Programming Project, December 1982.
M. Stefik: "Planning and Meta-Planning", Artificial Intelligence, Vol. 16, 1981, pp 141-170.
W. VanMelle, A. Scott, J. Bennett, M. Peairs: "The EMYCIN Manual", HPP-81-16, Stanford University Heuristic Programming Project, October 1981.
D. Warren: "Efficient Processing of Interactive Relational Database Queries Expressed in Logic", Proceedings of the Seventh VLDB Conference, 1981.
R. W. Weyhrauch: "Prolegomena to a Theory of Mechanized Formal Reasoning", Artificial Intelligence, Vol. 13, 1980, pp 133-170.
R. Wilensky: "Meta-Planning: Representing and Using Knowledge about Planning in Problem Solving and Natural Language Understanding", Cognitive Science, Vol. 5, 1981, pp 197-234.
R. Kowalski: "Algorithm = Logic + Control", Communications of the ACM, Vol. 22, 1979, pp 424-436.
On Inheritance Hierarchies With Exceptions

David W. Etherington¹, University of British Columbia
Raymond Reiter², University of British Columbia and Rutgers University

Abstract

Using default logic, we formalize NETL-like inheritance hierarchies with exceptions. This provides a number of benefits: (1) A precise semantics for such hierarchies. (2) A provably correct (with respect to the proof theory of default logic) inference algorithm for acyclic networks. (3) A guarantee that acyclic networks have extensions. (4) A provably correct quasi-parallel inference algorithm for such networks.

1. Introduction

Semantic network formalisms have been widely adopted as a representational notation by researchers in AI. Schubert [1976] and Hayes [1977] have argued that such structures correspond quite naturally to certain theories of first-order logic. Such a correspondence can be viewed as providing the semantics which "semantic" networks had previously lacked [Woods 1975]. More recent work has considered the effects of allowing exceptions to inheritance within networks [Brachman 1982, Fahlman 1979, Fahlman et al 1981, Touretzky 1982, Winograd 1980]. Such exceptions represent either implicit or explicit cancellation of the normal property inheritance which IS-A hierarchies enjoy.

In this paper, we establish a correspondence between such hierarchies and suitable theories in Default Logic [Reiter 1980]. This correspondence provides a formal semantics for networks with exceptions in the same spirit as the work of Schubert and Hayes for networks without exceptions. Having established this correspondence, we identify the notion of correct inference in such hierarchies with that of derivability in the corresponding default theory, and give a provably correct algorithm for drawing these inferences. As a corollary of the correctness of this algorithm, the default theories which formalize inheritance hierarchies with exceptions can be seen to be coherent, in a sense which we will define.

We conclude, unfortunately, on a pessimistic note. Our results suggest the unfeasibility of completely general massively parallel architectures for dealing with inheritance structures with cancellation (c.f. NETL [Fahlman 1979]). We do observe, however, that limited parallelism may have some applications, but that these appear to be severely restricted in general.

¹This research was supported in part by I.W. Killam Predoctoral and NSERC Postgraduate Scholarships.
²This research was supported in part by the Natural Science and Engineering Research Council of Canada grant A7642, and in part by the National Science Foundation grant MCS-8293954.

2. Motivation

In the absence of exceptions, an inheritance hierarchy is a taxonomy organized by the usual IS-A relation, as in Figure 1.

[Figure 1 - Fragment of a Taxonomy: ANIMAL above REPTILE, MAMMAL, and INSECT; MAMMAL above DOG and CAT; DOG above POODLE and AFGHAN]

The semantics of such diagrams can be specified by a collection of first order formulae, such as:

(x). POODLE(x) ⊃ DOG(x)
(x). DOG(x) ⊃ MAMMAL(x)
(x). MAMMAL(x) ⊃ ANIMAL(x)
etc.

If, as is usually the case, the convention is that the immediate subclasses of a node are mutually disjoint, then this too can be specified by first order formulae:

(x). MAMMAL(x) ⊃ ¬REPTILE(x)
(x). MAMMAL(x) ⊃ ¬INSECT(x)
etc.

The significant features of such hierarchies are these:
(1) Inheritance is a logical property of the representation. Given that POODLE(Fido), MAMMAL(Fido) is provable from the given formulae. Inheritance is simply the repeated application of modus ponens.
(2) Formally, the node labels of such a hierarchy are unary predicates: e.g. DOG(·), ANIMAL(·).
(3) No exceptions to inheritance are possible. Given that Fido is a poodle, Fido must be an animal, regardless of what other properties he enjoys.
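Feature (1) has an obvious procedural reading. The Common Lisp sketch below (our own rendering, not part of the paper) computes every class a given class implies by repeated application of the IS-A implications, i.e. by modus ponens.

(defun superclasses (class is-a)
  "IS-A is a list of (SUB SUPER) pairs; return every class that CLASS
inherits by chaining through the implications."
  (let ((direct (mapcar #'second
                        (remove-if-not (lambda (link) (eq (first link) class))
                                       is-a))))
    (remove-duplicates
      (append direct
              (loop for c in direct append (superclasses c is-a))))))

;; (superclasses 'poodle '((poodle dog) (dog mammal) (mammal animal)))
;;   => (DOG MAMMAL ANIMAL)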
The logical properties of such hierarchies change dramatically when exceptions are permitted; non-monotonicity can arise. For example, consider the following facts about elephants:
(1) Elephants are gray, except for albino elephants.
(2) All albino elephants are elephants.

It is a feature of our common sense reasoning about prototypes like "elephant" that, when given an individual elephant, say Fred, not known to be an albino, we can infer that he is gray. If we subsequently discover - perhaps by observation - that Fred is an albino elephant, we must retract our conclusion about his grayness. Thus, common sense reasoning about exceptions is non-monotonic, in the sense that new information can invalidate previously derived facts. It is this feature which precludes first order representations, like those used for taxonomies, from formalizing exceptions.

In recent years, there have been several proposed formalisms for such non-monotonic reasoning (see e.g. [AI 1980]). For the purpose of formalizing inheritance hierarchies with exceptions, we shall focus on one such proposal - Default Logic [Reiter 1980]. A default theory consists of a set, W, of ordinary first order formulae, together with a set, D, of rules of inference called defaults. In general, defaults have the form:

α(x1,...,xn) : β(x1,...,xn) / γ(x1,...,xn)

where α, β, and γ are first order formulae whose free variables are among x1,...,xn. Informally, such a default can be understood to say: For any individuals x1,...,xn, if α(x1,...,xn) is inferrable and β(x1,...,xn) can be consistently assumed, then infer γ(x1,...,xn). For our elephant example, the first statement would be represented by a default:

ELEPHANT(x) : GRAY(x) & ¬ALBINO-ELEPHANT(x) / GRAY(x)

From the informal reading of this default, one can see that when given only ELEPHANT(Fred), GRAY(Fred) & ¬ALBINO-ELEPHANT(Fred) is consistent with this; hence GRAY(Fred) may be inferred. On the other hand, given ALBINO-ELEPHANT(Fred) one can conclude ELEPHANT(Fred) using the first order fact (x). ALBINO-ELEPHANT(x) ⊃ ELEPHANT(x), but ALBINO-ELEPHANT(Fred) "blocks" the default, thereby preventing the derivation of GRAY(Fred), as required.

The formal details of Default Logic are beyond the scope of this paper. Roughly speaking, however, for a default theory (D,W), we think of the defaults of D as extending the first order theory given by W. Such an extension contains W and is closed under the defaults of D as well as first order theoremhood. It is then natural to think of an extension as defining the "theorems" of a default theory; these are the conclusions sanctioned by the theory. However, these extensions need not be unique [Reiter 1980]. For a default theory with more than one extension, any one of its extensions is interpreted as an acceptable set of beliefs that one may entertain about the world represented by that theory.

In the next section, we show how inheritance hierarchies with exceptions can be formalized as default theories. Default Logic will then be seen to provide a formal semantics for such hierarchies, just as first order logic does for IS-A hierarchies. As was the case for IS-A hierarchies, inheritance will emerge as a logical feature of the representation. Those properties, P1,...,Pn, which an individual, b, inherits will be precisely those for which P1(b),...,Pn(b) all belong to a common extension of the corresponding default theory. Should the theory have multiple extensions - an undesirable feature, as we shall see - then b may inherit different sets of properties depending on which extension is chosen.
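The blocking behaviour of the elephant default can be illustrated with a toy sketch over a ground fact base. Consistency of the justification is approximated here by the absence of the contrary ground facts, a drastic simplification of full default logic; the code and its names are ours.

(defun apply-gray-default (facts individual)
  "Apply the gray-elephant default: if FACTS contain (ELEPHANT x) and
neither (NOT (GRAY x)) nor (ALBINO-ELEPHANT x), add (GRAY x)."
  (if (and (member `(elephant ,individual) facts :test #'equal)
           (not (member `(not (gray ,individual)) facts :test #'equal))
           (not (member `(albino-elephant ,individual) facts :test #'equal)))
      (cons `(gray ,individual) facts)
      facts))

;; (apply-gray-default '((elephant fred)) 'fred)
;;   => ((GRAY FRED) (ELEPHANT FRED))
;; (apply-gray-default '((elephant fred) (albino-elephant fred)) 'fred)
;;   => ((ELEPHANT FRED) (ALBINO-ELEPHANT FRED))   ; the default is blocked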
As was the case for IS-A hierarchies, inheritance will emerge as a logical feature of the representation. Those properties, PI, * . . , P,, which an individual, b, inherits will be precisely those for which P,(b), . . . . P,(b) all belong to a common exten- sion of the corresponding default theory. Should the theory We now show that Default Logic can provide a formal semantics for inheritance structures with exceptions. We adopt a network representation with five link types. Although other approaches to inheritance may omit one or more of these, our formalism subsumes these as special cases. The five link types,3 with their translations to default logic, are: (1) Strict IS-A: A.d .B: A’s are always B’s. Since this is universally true, we identify it with the first order formula: (x).A(x) 3 B(x). (2) Strict ISN’T-A: A.-. B: A’s are never B’s. Again, this is a universal statement, identified with: (x).A(x) 3 -B(x). (3) Default IS-A: A.- >.B: Normally A’s are B’s, but there may be exceptions. To provide for exceptions, we identify this with a default: A(x) : B(z) B(z) (4) Default ISN’T-A: A.* >.B: Normally A’s are not B’s, but exceptions are allowed. Identified with: A(z) : -B z -B(x) (5) Exception: A.------> The exception link has no independent semantics; rather, it serves only to make explicit the exceptions, if any, to the above default links. There must always be a default link at the head of an exception link; the exception then alters the semantics of that default link. There are two types of default links with exceptions; their graphical structures and translations are: have multiple extensions - an undesirable feature, as we shall see - then b may inherit different sets of properties depending on which extension is chosen. 3. A Sernantica for Inheritance Hierarchies With Excep- tions B. A(z) : B(z) d -C,(z) @...@I +&(z) B(z) -_ . \ \ . . . A Cl . . . c, We illustrate with an example from [Fahlman et al I98I]. Molluscs are normally shell-bearers. Cephalopods must be Molluscs but normally are not shell-bearers. Nautili must be Cephalopods and must be shell-bearers. ’ Note that strict and default links are distinguished by solid and open arrowheads, respectively. 105 Our network representation of these facts is given in Figure 2. Shell-bearer Mollusc Cephalopod \I I / Nautilus . -’ Figure 2 - Network representation of our knowledge about Molluscs. The corresponding default theory is: { M(z) : Sb(z) 63 -C(z) SW , (W(Z) 3 M(z), (z).N(z’ 3 C(z), ‘(” : +b(z’ ’ 1N(z’, (z).N(z) > Sb(x)}. +b(z) Given a particular Nautilus, this theory has a unique extension in which it is also a Cephalopod, a Mollusc, and a Shell-bearer. A Cephalopod not known to be a Nautilus will turn out to be a Mollusc with no shell. It is instructive to compare our network representations with those of NETL [Fahlman et al 19811. A basic difference is that in NETL there are no strict links; all IS-A and ISN’T-A links are potentially cancellable and hence are defaults. More- over, NETL allows exception (*UNCANCEL) links only for ISN’T-A (*CANCEL) links. If we restrict the graph of Figure 2 to NETL-like links, we get Figure 3, Shell-bearer Mollusc Cephalopod Figure 9 - NETL-like network representation of our knowledge about Molluscs. which is essentially the graph corresponds to the theory: given by Fahlm an. This network { M(x) : Sb(x) C(z) : M(z) N(x) : C(z) Sb(z) ’ M(z) ’ C(z) ’ CX : -Sb z B -N x ) N(z) : Sb(z) +b(z) , Sb(z) ” As before, a given Nautilus will also be a Cephalopod, a Mol- lust, and a Shell-bearer. 
It is instructive to compare our network representations with those of NETL [Fahlman et al 1981]. A basic difference is that in NETL there are no strict links; all IS-A and ISN'T-A links are potentially cancellable and hence are defaults. Moreover, NETL allows exception (*UNCANCEL) links only for ISN'T-A (*CANCEL) links. If we restrict the graph of Figure 2 to NETL-like links, we get Figure 3, which is essentially the graph given by Fahlman.

[Figure 3 - NETL-like network representation of our knowledge about Molluscs]

This network corresponds to the theory:

{ M(x) : Sb(x) / Sb(x),
  C(x) : M(x) / M(x),
  N(x) : C(x) / C(x),
  C(x) : ¬Sb(x) & ¬N(x) / ¬Sb(x),
  N(x) : Sb(x) / Sb(x) }.

As before, a given Nautilus will also be a Cephalopod, a Mollusc, and a Shell-bearer. A Cephalopod not known to be a Nautilus, however, gives rise to two extensions, corresponding to an ambivalence about whether or not it has a shell. While counter-intuitive, this merely indicates that an exception to shell-bearing, namely being a Cephalopod, has not been explicitly represented in the network. Default Logic resolves the ambiguity by making the exception explicit, as in Figure 2. NETL, on the other hand, cannot make this exception explicit in the graphical representation, since it does not permit exception links to point to IS-A links.

How then does NETL conclude that a Cephalopod is not a Shell-bearer, without also concluding that it is a Shell-bearer? NETL resolves such ambiguities by means of an inference procedure which prefers shortest paths. Interpreted in terms of default logic, this "shortest path heuristic" is intended to favour one extension of the default theory. Thus, in the example above, the path from Cephalopod to ¬Shell-bearer is shorter than that to Shell-bearer so that, for NETL, the former wins. Unfortunately, this heuristic is not sufficient to replace the excluded exception type in all cases. Reiter and Criscuolo [1983] and Etherington [1982] show that it can lead to conclusions which are unintuitive or even invalid - i.e. not in any extension. Fahlman et al [1981] and Touretzky [1981, 1982] have also observed that such shortest path algorithms can lead to anomalous conclusions and they describe attempts to restrict the form of networks to exclude structures which admit such problems. From the perspective of default logic, these restrictions are intended to yield default theories with unique extensions.

An inference algorithm for network structures is correct only if it can be shown to derive conclusions all of which lie within a single extension of the underlying default theory. This criterion rules out shortest path inference for unrestricted networks. In the next section, we present a correct inference algorithm.

4. Correct Inference

The correspondence between networks and default theories requires defaults all of which have the form:

α(x1,...,xn) : β(x1,...,xn) & γ(x1,...,xn) / β(x1,...,xn)

Such defaults are called semi-normal,⁴ and can be contrasted with normal defaults, in which γ(x1,...,xn) is a tautology. Our criterion for the correctness of a network inference algorithm requires that it derive conclusions all of which lie within a single extension of the underlying default theory. Until recently, the only known methods for determining extensions were restricted to theories involving only normal defaults [Reiter 1980]. Etherington [1982] has developed a more general procedure, which involves a relaxation style constraint propagation technique. This procedure takes as input a default theory, (D,W), where D is a finite set of closed defaults,⁵ and W is a finite set of first order formulae. In the presentation of this procedure, below, the following notation is used: S ⊢ w means formula w is first order provable from premises S; S ⊬ w means that w is not first order provable from S; CONSEQUENT(δ) is defined to be β.

⁴α(x1,...,xn) and (β(x1,...,xn) & γ(x1,...,xn)) are called the prerequisite and justification of the default, respectively.
⁵A default, α : β / γ, is closed iff α, β, and γ contain no free variables.
H0 ← W; j ← 0;
repeat
    j ← j + 1;
    h0 ← W; GD0 ← {}; i ← 0;
    repeat
        Di ← { δ ∈ D | (hi ⊢ PREREQUISITE(δ)), (hi ⊬ ¬JUSTIFICATION(δ)),
                        (Hj-1 ⊬ ¬JUSTIFICATION(δ)) };
        if not null(Di - GDi) then
            choose δ from (Di - GDi);
            GDi+1 ← GDi ∪ {δ};
            hi+1 ← hi ∪ {CONSEQUENT(δ)};
        endif;
        i ← i + 1;
    until null(Di-1 - GDi-1);
    Hj ← hi-1;
until Hj = Hj-1

Extensions are constructed by a series of successive approximations. Each approximation, Hj, is built up from any first-order components by applying defaults, one at a time. At each step, the default to be applied is chosen from those, not yet applied, whose prerequisites are "known" and whose justifications are consistent with both the previous approximation and the current approximation. When no more defaults are applicable, the procedure proceeds to the next approximation. If two successive approximations are the same, the procedure is said to converge.

The choice of which default to apply at each step of the inner loop may introduce a degree of non-determinism. Generality requires this non-determinism, however, since extensions are not necessarily unique. Deterministic procedures can be constructed for theories which have unique extensions, or if full generality is not required.

Notice that there are appeals to non-provability in this procedure. In general, such tests are not computable, since arbitrary first order formulae are involved. Fortunately, such difficulties disappear for default theories corresponding to inheritance hierarchies. For these theories, all predicates are unary. Moreover, for such theories, we are concerned with the following problem: Given an individual, b, which is an instance of a predicate, P, determine all other predicates which b inherits - i.e. given P(b) determine all predicates, P1,...,Pn, such that P(b), P1(b),...,Pn(b) belong to a common extension. For this problem it is clear that predicate arguments can be ignored; the appropriate default theory becomes purely propositional. For propositional logic, non-provability is computable.

Example. Consider the network of Figure 4. Given an instance of A, the corresponding default theory has a unique extension in which A's instance is also an instance of B, C, and D. When the procedure is applied to this theory, it generates the approximations shown. (The formulae in each approximation are listed in the order in which they are derived.) ¬D occurs in the approximation H1 since it can be inferred before C.

[Figure 4 - Example of Procedure Behaviour: a network over A, B, C, D, with the successive approximations H0 = {A}, H1 = {A, B, ¬D, C}, H2 = {A, B, C}, H3 = H4 = {A, B, D, C}]

The following result is proved in [Etherington 1983]:

For default theories corresponding to acyclic inheritance networks with exceptions, the procedure always converges on an extension.

As a simple corollary we have:

The default theory corresponding to an acyclic inheritance network with exceptions has at least one extension.

The latter result is comforting. It says that such networks are always coherent, in the sense that they define at least one acceptable set of beliefs about the world represented by the network.
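For the propositional case the inner loop of the procedure is short enough to sketch directly. In the Common Lisp below, provability is deliberately stubbed as literal membership, so this shows only the control structure, not a real first order provability test; the plist rendering of defaults is ours, not the paper's notation.

(defun proves? (set wff)
  "Stub: provability as literal membership.  A real implementation
needs a propositional decision procedure here."
  (member wff set :test #'equal))

(defun one-approximation (w defaults prev-h)
  "One relaxation pass over ground defaults, each a plist with
:prereq, :just (a list of formulae) and :consequent fields."
  (let ((h (copy-list w)) (used '()))
    (loop
      (let ((d (find-if
                 (lambda (d)
                   (and (not (member d used))
                        (proves? h (getf d :prereq))
                        (notany (lambda (j) (proves? h `(not ,j)))
                                (getf d :just))
                        (notany (lambda (j) (proves? prev-h `(not ,j)))
                                (getf d :just))))
                 defaults)))
        (unless d (return h))
        (push d used)
        (pushnew (getf d :consequent) h :test #'equal)))))

Iterating one-approximation, feeding each result back in as prev-h until a fixed point is reached, mirrors the outer loop above.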
5. Parallel Inference Algorithms

The computational complexity of inheritance problems, combined with some encouraging examples, has sparked interest in the possibility of performing inferences in parallel. Fahlman [1979] has proposed a massively parallel machine architecture, NETL. NETL assigns one processor to each predicate in the knowledge base. "Inferencing" is performed by nodes passing "markers" to adjacent nodes in response to both their own states and those of their immediate neighbours. Fahlman suggests that such architectures could achieve logarithmic speed improvements over traditional serial machines.

The formalization of such networks as default theories suggests, however, that there might be severe limitations to this approach. For example, correct inference requires that all conclusions share a common extension. For networks with more than one extension, inter-extension interference effects must be prevented. This seems impossible for a one pass parallel algorithm under purely local control, especially in view of the inadequacies of the shortest path heuristic.

Even in knowledge bases with unique extensions, structures requiring an arbitrarily large radius of communication can be created. For example, both the default theories corresponding to the networks in Figure 5 have unique extensions. A network inference algorithm must reach F before propagating through B in the first network and conversely in the second. The salient distinctions between the two networks are not local; hence they cannot be utilized to guide a purely local inference mechanism to the correct choices. Similar networks can be constructed which defeat marker passing algorithms with any fixed radius.

[Figure 5a and Figure 5b - Two networks over nodes A, B, D, E, F which differ only in non-local structure]
We then provided an inference algorithm for acyclic inheritance hierarchies with exceptions which is provably correct with respect to this concept of derivability. Cur formalization suggests that for unrestricted hierar- Reiter, R. (1980), “A Logic for Default Reasoning”, Artificial Intelligence 13, (April) pp 81-132. Reiter, R., and Criscuolo, G. (1983), “Some Representational Issues in Default Reasoning”, Int. J. Computers and Mathematics, (Special Issue on Computationd Linguistics), to appear. . - Schubert, L.K. (1976) “Extending the Expressive Power of Semantic Networks”, Artificial Inte!!igence 7(2), pp 163-198. chiea, it may not be possible to realize massively parallel marker passing hardware of the kind envisaged by NETL. Fortunately, this pessimistic observation does not preclude parallel architectures for suitably restricted hierarchies. This raises several open problems: la) Determine a natural class of inheritance hierarchies with Touretzky, D.S. (1981) Personal Communication. Touretzky, D.S. (1982), “Exceptions in an Inheritance Hierarchy”, Unpublished Manuscript. Winograd, T. (1980) “Extended Inference Modes in Reasoning”, Artificial Intelligence 13, (April), pp 5-26. exceptions which admits a parallel inference algorithm yet does not preclude the representation of our common- sense knowledge about taxonomies. Woods, W.A. (1975), “What’s In A Link?“, Representation and Understanding, Academic Press, pp 35-82. lb) Define such a parallel algorithm and prove its correctness with respect to the derivability relation of default logic. 2) In connection with (la), notice that it is natural to res- trict attention to those hierarchies whose corresponding default theories have unique extensions. Characterize such hierarchies. 7. Acknowledgments We would like to thank Robert Mercer and David Touretzky for their valuable comments on earlier drafts of this paper. 108
IMPROVING THE EXPRESSIVENESS OF MANY SORTED LOGIC

Anthony G Cohn
Department of Computer Science, University of Warwick, Coventry CV4 7AL, England

Abstract

Many sorted logics can allow an increase in deductive efficiency by eliminating useless branches of the search space, but are usually formulated so that their expressive power is severely limited. The many sorted logic described here is unusual in that the quantifiers are unsorted; the restriction on the range of a quantified variable derives from the argument positions of the function and predicate symbols that it occupies; associated with every non-logical symbol is a sorting function which describes how its sort varies with the sorts of its inputs; polymorphic functions and predicates are thus easily expressible and statements usually requiring several assertions may be compactly expressed by a single assertion. The sort structure may be an arbitrary lattice. Increased expressiveness is obtained by allowing the sort of a term to be a more general sort than the sort of the argument position it occupies. Furthermore, by allowing three boolean sorts (representing 'true', 'false' and 'either true or false'), it is sometimes possible to detect that a formula is contradictory or tautologous without resort to general inference rules. Inference rules for a resolution based system are discussed; these can be proved to be both sound and complete.

1. Introduction

Much research has been directed towards ways of cutting down the search space of mechanised inference systems. One approach is to divide the individuals in the intended interpretation into different sorts and then specify the sorts of the arguments of all the non-logical symbols in the language and the sorts of the results of function symbols; such a logic is known as a many sorted logic (msl). Inference rules can be devised for such a logic so that many inferences which are obviously 'pointless' (to the human observer) can easily be detected to be such by the system because functions or predicates are being applied to arguments of inappropriate sorts. Sortal information can thus be viewed as a form of meta-knowledge. Msls provide a simple syntactic way of specifying semantic information. Several mechanised msls have been proposed or built: eg (Reiter, 1981), (McSkimin, 1977), (Weyhrauch, 1976) and (Champeaux, 1978).

Sorts in a logic are rather akin to types in conventional programming languages and problems often found in strongly typed programming languages may also occur in msls. In particular the typing/sorting mechanism often reduces the expressive power of the language: it is not long before a Pascal programmer becomes frustrated by the impossibility of writing general purpose library procedures because procedures may not be polymorphic. In this paper we report on a msl which does not arbitrarily restrict the expressive power of the language and in which it is possible to specify detailed sortal information about the non-logical symbols which can then be used by the inference rules to reduce the search space. Space constraints only allow a discursive discussion of the logic. Full details including soundness and completeness proofs can be found in (Cohn, 1983).

This work has been supported in part by the SERC.

2. Preliminaries

We assume the reader is conversant with the First Order Predicate Calculus and resolution systems. We use upper-case bold italic letters (possibly with numeric suffices and/or primes) as meta-variables denoting expressions in the object language.
We use bold Roman and Greek letters for all other meta-variables. The alphabet of the language is the union of the following sets:

P: a non-empty set of predicate symbols (we use strings composed entirely of upper-case Roman letters),
F: a non-empty set of function symbols (we use strings composed entirely of lower-case Roman letters or numerals),
V: a non-empty set of variables (we use lower-case italic letters),
a set of boolean connectives,
{A, V}: two quantifier symbols: the universal quantifier A and the existential quantifier V,
{[, ], ,}: three punctuation symbols.

Terms and formulae are formed from the alphabet in the usual way. Terms are either variables or combinations. Formulae are atoms, literals, boolean combinations or quantifications. Disjunctions of literals are called clauses and are usually represented simply as the set of their constituent literals.

3. The Sort Structure

Various sort structures occur naturally: disjoint sorts, trees and lattices. We choose to define the sort domain S as the most general such structure, a complete boolean lattice. An interpretation must interpret the top (⊤) and bottom (⊥) elements of S as the universe of discourse (U) and the empty set respectively. The partial ordering on S is called ⊑. We also need the lattice operators ⊔, ⊓ and \ (we usually omit the subscripts distinguishing the lattice on these symbols). It is useful to distinguish those elements in S immediately above ⊥; these disjoint sorts (their interpretations are non-overlapping sets) are denoted by S_D.*

*In earlier presentations of this work the sense of the lattice was inverted, so that the interpretations of ⊤ and ⊥ were reversed. The earlier convention followed the Scott-Strachey tradition, but the present convention is the one usually found in type hierarchies and powerset lattices.

Of course it is not necessary to actually name all the sorts in the sort lattice. It would be ridiculous to have to do so, for one would then be forced to think up names for many sorts which might never actually occur in any assertion or during inference. Thus we can distinguish between eponymous and anonymous sorts. Every anonymous sort should be expressible (using ⊔, ⊓ and \) purely in terms of eponymous sorts. However, since it is easy for an inference engine to invent names for all the anonymous sorts for internal use, we shall henceforth assume that all sorts are eponymous. It will be seen later that we require a unary predicate symbol for each eponymous sort, so it will be convenient to use this predicate symbol as the sort's name. Literals formed from these symbols are called characteristic literals.
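When the disjoint sorts S_D are known, any sort can be represented as the set of disjoint sorts beneath it, making the lattice operations ordinary set operations. The following Common Lisp sketch uses that encoding; it is ours, not part of the paper.

(defun sort-lub (s1 s2) (union s1 s2))            ; s1 ⊔ s2
(defun sort-glb (s1 s2) (intersection s1 s2))     ; s1 ⊓ s2
(defun sort-diff (s1 s2) (set-difference s1 s2))  ; s1 \ s2
(defun subsort-p (s1 s2) (subsetp s1 s2))         ; s1 ⊑ s2
(defun bottom-p (s) (null s))                     ; s = ⊥

;; With S_D = {MAN, WOMAN}: ⊤ is (MAN WOMAN), ⊥ is (), and
;; (sort-glb '(man) '(woman)) => NIL, i.e. MAN ⊓ WOMAN = ⊥.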
4. Sorting Functions

Following (Hayes, 1971) we describe the sortal behaviour of a function symbol α by a sorting function of the same arity which maps S to S. This allows the specification of polymorphic function symbols. The domain equation for sorting functions is Sⁿ → S.** For technical reasons the crossproduct operation used to form the domain Sⁿ is in fact not the usual pointwise operation; also, sorting functions may be quite reasonably required to be both strict and continuous; for details see (Cohn, 1983). Sorting functions can also be used to describe the sortal behaviour of predicates, thus allowing polymorphic predicates.

**For technical reasons, in (Cohn, 1983) the domain equation is actually somewhat different.

The question arises as to what the sort of an atom should be. One possibility is to invent a sort called BOOL ∈ S which is always interpreted as the set of the two truth values. This is the usual approach, but a better technique is to have a separate boolean sort lattice, B = {EE, TT, FF, UU}. EE and UU are the ⊥ and ⊤ elements of B respectively, and TT and FF have fixed interpretations of {true} and {false} respectively. It is obvious that since B is a complete boolean(!) lattice it is just a particular S and all results for S apply to B as well. We can now give the functionality of sorting functions for predicate symbols; if α ∈ P then its sorting function maps Sⁿ → B.

It is important to point out that we are not now dealing with a four or even three valued logic. The logic is still two valued; we interpret function symbols as partial functions but predicate symbols are interpreted as relations rather than partial predicates, so in any interpretation a well-sorted formula (ie one whose sort is not EE) denotes either true or false; even those formulae whose sort is UU still denote one of the two truth values. (Our definition of satisfiability also ensures that all formulae sorted as TT or FF always denote true or false respectively.) The four different boolean sorts exist purely for the benefit of the deductive machinery; formulae sorted as FF or TT can immediately be deduced to be contradictions or tautologies, as appropriate; those which are UU require inference in the usual way to determine a truth value. A formula whose sort is EE is ill-sorted; the deductive machinery will refuse to perform inferences with it and under no interpretation does it denote a truth value; it is as meaningless as if it were syntactically ill-formed.

5. Well-Sorted Formulae

Intuitively, a formula is well-sorted iff the sorts of all the sub-expressions 'match' the sorts required by their respective argument positions in a consistent manner. In most formulations of msl the sort τ1 of a term only matches the sort τ2 of its argument position if τ1 ⊑ τ2. However the logic has more expressive power*** if the match only fails when τ1 ⊓ τ2 = ⊥. Eg, let g ∈ F have a sorting function such that g(<MAN>) = ⊥ & g(<WOMAN>) = MAN where S_D = {MAN, WOMAN}, and let c be a constant symbol of sort ⊤. We allow g[c] to be well-sorted even though if c is interpreted as a man then g[c] fails to denote.

***Gordon Plotkin attributes this idea and result to (Kowalski, 1971).

Some machinery is required in order to be able to assign a sort to variables since variables do not have sorting functions, but clearly the sort of an expression containing variables cannot, in general, be determined without knowing the sort of all constituent terms. The usual technique is to have separate quantifiers for every sort, for the sort of a variable could then be determined from the sort of its governing quantifier. However this would reduce the value of allowing polymorphic sorting functions, since an instance of a variable would then have a unique sort associated with it. Eg suppose P is a rank two predicate symbol such that P(<M,W>) = P(<W,M>) = UU and P(<M,M>) = P(<W,W>) = EE where S_D = {M,W}. To express that P[x,y] is always true would seem to need two statements if we have sorted quantifiers: A_M x A_W y P[x,y] and A_W x A_M y P[x,y]. It would be much more natural if we could just write AxAyP[x,y] and let x and y range over M and W as appropriate (ie y should always be of the opposite sort to x). This is the mechanism envisaged by (Hayes, 1971) and is certainly very convenient. The basic idea is that the sort of a variable should be determined by the argument positions it occurs in. If it occurs as an argument to a polymorphic symbol then it may range over several sorts. In this case the sort of the entire expression may vary as a function of the sorts of such variables.
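The polymorphic sorting function for P can be sketched directly over the set encoding introduced earlier, with sorts written as canonically ordered subsets of (M W); this is an illustration of ours, not the paper's machinery.

(defun p-hat (sx sy)
  "Boolean sort of P[x,y] given the argument sorts SX and SY, each a
subset of the disjoint sorts (M W)."
  (cond ((or (bottom-p sx) (bottom-p sy)) 'ee)   ; strictness
        ((and (equal sx '(m)) (equal sy '(m))) 'ee)
        ((and (equal sx '(w)) (equal sy '(w))) 'ee)
        (t 'uu)))

;; (p-hat '(m) '(w))   => UU
;; (p-hat '(m) '(m))   => EE
;; (p-hat '(m w) '(m)) => UU  -- an argument of sort ⊤ may still match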
Defining the sorting functions for predicate symbols so that they are EE rather than FF for arguments for which they cannot be true reduces the need for explicit sortal preconditions on variables. Eg if S_D = {HUMAN, NATNUM} and we intend LE ∈ P to denote the 'less than' relation on the natural numbers, then by sorting LE so that LE(<NATNUM,NATNUM>) = UU and LE(⊤\<NATNUM,NATNUM>) = EE, we can write AxLE[0,x] rather than Ax[NATNUM[x] → LE[0,x]] as we would have to if LE were sorted so that LE(⊤\<NATNUM,NATNUM>) = FF.

In (Cohn, 1983) an algorithm is given to compute a sort assignment (SA) for any formula or term B; we write the SA of B as B̂. Given ξ ∈ E, B̂(ξ) gives the sort of B in the environment ξ. If B̂(ξ) = ⊥ then B is ill-sorted in ξ. If ∀(ξ ∈ E) B̂(ξ) = ⊥ (ie if B̂ is the bottom SA) then B is ill-sorted. Conversely if ∀(ξ ∈ E) B̂(ξ) = ⊤ (ie if B̂ is the top SA) then B̂ gives us no information about the precise sort of B; we can regard an unsorted logic as sorting all formulae and all combinations with the top SA.

If B is a formula with a SA B̂ then we can usefully distinguish the following two subsets of A, the set of SAs:

i) A_TT = {B̂ : B̂ ∈ A & rng(B̂) = {TT}}
ii) A_FF = {B̂ : B̂ ∈ A & rng(B̂) = {FF}}

where rng(B̂) = {B̂(ξ) : ξ ∈ E & B̂(ξ) ≠ ⊥}.

If B̂ ∈ A_TT then B has sort TT for all environments in which it makes sense and our definition of satisfiability ensures that B is true in all interpretations. If B̂ ∈ A_FF then B has sort FF for all environments in which it makes sense and our definition of satisfiability ensures that B is false in all interpretations. If a well-sorted formula B has a SA B̂ ∉ (A_TT ∪ A_FF) then clearly UU ∈ rng(B̂), so there are environments in which the truth of B cannot be inferred solely from sortal information; B might still be tautologous or contradictory but it requires the application of more general inference rules to detect this.
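The A_TT / A_FF test is simple to state procedurally. The sketch below assumes the sort assignment is given extensionally as a list of (environment . boolean-sort) pairs, with EE marking environments in which the expression is ill-sorted; this finite representation is ours, chosen purely for illustration.

(defun classify-by-sa (sa)
  "Classify an expression from its sort assignment: tautologies and
contradictions are detected from rng alone, as in the text."
  (let ((rng (remove-duplicates (remove 'ee (mapcar #'cdr sa)))))
    (cond ((null rng)        'ill-sorted)
          ((equal rng '(tt)) 'tautology)
          ((equal rng '(ff)) 'contradiction)
          (t                 'needs-inference))))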
The transformation to clausal form is almost identical to the unsorted case except for the need to define the sorting functions for the skolem symbols. This is easily done using the SAs for the formulae in question. 6.1. Taking Instances. We choose to view resolution as two rules: applymg a substitution and resolution. The notion of an instance of a clause obtained by applying a substitution to a clause is different to the unsorted notion for several reasons. Firstly it may be that applying a substitution may yield an ill-sorted clause. Secondly, in some cases characteristic literals have to be added to clause. Eg let P be a predicate symbol sorted so that ij(<T>) = uu & ti(<T\T>) = EE and let c be a constant of sort T. Now both ~P[x]] and tP[c]{ are well-sorted, but it is not the case that Pbli I= twl since any model (5 of lP[x]i where g(c) B O(T) falsifies tP[c]j. Clearly it is only the case that iP[x]i I= fP[C]i when the sort of C is L 7. However since P[c] is well-sorted we should like to be able to instantiate x to c. We can do this provided we broaden our notion of instance so that the instance of tP[x]j obtained by substituting c for x is 1 -~[c],P[c]j We shall call the characteristic literal -T[c] a prosthetic literal since it has to be ‘grafted’ onto the clause lP[c]j in order to preserve soundness. It is also possible that applying a substitution may yield several clauses as an instance. Eg, let the rank two predicate symbol SPOUSE be sorted so that SPCUSE(<M,M>) = SPdUSE(<W,W>) = EE SPdUSE(<M,W>) = SPC%JSE(<W,M>) = UU where S, = fM,Wj. Consider the (very polygamous!) clause lSPOUSE[x,y]{. If we substitute c for x then there are in fact two instances****: i) 1 -M[c], SPOUSE[c,y]j ii) 1 -W[c], SPOUSE[c,y]j We should not find it entirely surprismg that applying a substitution may give rise to multiple instances for certain polymorphic formulae. As long as variables remain unmstantiated then a formula may be polymorphic; however a ground expression has a unique sort and thus can not be polymorphic. SAs can only represent sortal relationships between variables; as combinations are substituted for variables these relationships must be made explicit in the formula by prosthesis and multiple instancing. Although the ****The inference is in fact only sound if a mechanism is provided for restricting the quantification of y to tW{ in (i) and to IMj m (ii). Details are in (Cohn, 1983). 86 possibly explosive effect of multiple instances on the MOTHER(<M>) = FF & MOTHER(<W>) = IJU search space during a proof may appear worrying it is certainly not a product of the msl. In an unsorted logic the original formulae would have been much more more complex or more numerous. Eg the many sorted formula: Ax AySPOUSE[x,y] would have to be reoresented in an unsorted logic bv a formula such as A Y d Ax”~ [[[W+W[YII v [Wb+M[y 11 --) SPOUSE[z,y]] Thus the search space is complex from the start even with uninstantiated variables. In an unsorted clausal resolution system the above formula translates to two clauses each with three literals. This immediately gives a choice of clauses to resolve the literal -SPOUSE[m,n] with. By contrast, m our msl there would be but a single unit clause. Provided terms of sort M or of sort W were substituted for x and y then no prosthesis or multiple instancing would take place and the search space would be much simpler than in the unsorted case. where S, = tM,Wj. 
We could ‘translate’ this sorting function to a formula in the logic thus, AX[M~THER[X]+W[X]] w e could then use this formula to infer W[c] from MOTHER[c] when c is of sort T However for reasons already stated we prefer not to encumber a theory with these axioms which Just duplicate mformation already present in the sorting functions. Thus we need a special rule to perform inferences such as the above This rule is called so&casting and can be viewed as a resolving of a literal ag ains t the imaginary formula expressing the information content of the sorting function for the predicate symbol of the literal concerned. The rule comes in two forms depending on whether the literal is positive or negative In general, the situation is complicated by the necessity of considering several combinations simultaneously. Eg if Of course if terms of sort T are substituted for both x and y then two instances each with two prosthetic literals would be inferrable; these two clauses would be identical to those inferred in an unsorted logic. However this should come as no surprise since we have no useful sort information about the terms substituted for x and y and thus the sort mechanism cannot operate effectively. If the sort of the term substituted for x is M and the SPdiJSE(<M,W>) = SPCUSE(<W,M>) = UU SPCUSE( <M,M>) = SPdUSE(<W,W>) = EE where S, = iM,W{ and cl and c2 are constants of sort T, then we can infer both [M[cl]-M[c2]] and [W[cl]~W[c2]] from {SPOUSE[cl,c2]{ by sortcasting. sort of the term substituted for y is T then there are two instances but with only one prosthetic literal in each clause. Thus the sortal mechanism delays prosthesis and multiple instancing until actually needed, In an unsorted logic the multiple clauses and the literals expressing the sort restrictions are always unavoidable. 6.2. Many Sorted Resolution 7. Final Remarks Use of a msl is not only notationally convenient, as workers in knowledge representation have found out (eg see (Hayes, 1971), (Hayes, 1978), (McDermott, 1982) , but also can considerably reduce the size of the search space. Implementation of the logic outlined m this paper is in hand. We believe that such a system might prove to be a useful and efficient tool. The resolution rule given in (Cohn, 1983) corresponds to the general resolution rule given in (Robinson, 1979) and (Kowalski, 1971). It is convenient to split the resolution rule into two: a rule for resolving non-characteristic literals and a rule for resolving characteristic literals. Except for the complications already discussed caused by applying a substitution the first rule is identical to the usual unsorted rule. If the structure of S were represented as a set of axioms in a theory then we would not need a separate rule of inference for characteristic literals. However this would not be in the spirit of a msl: the sort machinery should cope directly with sortal inferences. Moreover 8. Acknowledgements I would like to thank Pat Hayes, Ray Turner ant everybody with whom I have discussed this work. 9. References Champeaux, D, “A Theorem Prover Dating a Semantic Network,” in Proc AISB/G/ Conf on AI, Hamburg (1978). Cohn, A G, “Mechanismg a Particularly Expressive Many Sorted Logic,” PhD Thesis, University of Essex (1983). the number of axioms needed to represent S would be very large for even a quite moderate sized S which would increase the search space and clog the system. 
7. Final Remarks

Use of a msl is not only notationally convenient, as workers in knowledge representation have found (eg see (Hayes, 1971), (Hayes, 1978), (McDermott, 1982)), but also can considerably reduce the size of the search space. Implementation of the logic outlined in this paper is in hand. We believe that such a system might prove to be a useful and efficient tool.

8. Acknowledgements

I would like to thank Pat Hayes, Ray Turner and everybody with whom I have discussed this work.

9. References

Champeaux, D., "A Theorem Prover Dating a Semantic Network," in Proc AISB/GI Conf on AI, Hamburg (1978).
Cohn, A. G., "Mechanising a Particularly Expressive Many Sorted Logic," PhD Thesis, University of Essex (1983).
Hayes, P. J., "A Logic of Actions," in Machine Intelligence 6, Edinburgh University Press (1971).
Hayes, P. J., Ontology of Liquids, University of Essex (1978).
Kowalski, R. and Hayes, P. J., "Lecture Notes on Automatic Theorem Proving," DCL Memo 40, University of Edinburgh (1971).
McDermott, D., "A Temporal Logic for Reasoning About Processes and Plans," Cognitive Science 6 (1982).
McSkimin, J. R. and Minker, J., "The Use of a Semantic Network in a Deductive Question Answering System," in Proc IJCAI 5, Cambridge (1977).
Reiter, R., "On the Integrity of Typed First Order Data Bases," in Advances in Data Base Theory, Volume 1, ed. H. Gallaire, J. Minker, J. M. Nicolas, Plenum Press (1981).
Robinson, J. A., Logic: Form and Function, Edinburgh University Press (1979).
Weyhrauch, R., Lecture Notes for a Summer School, Istituto di Elaborazione dell'Informazione, Pisa (1978).
The GIST Behavior Explainer

Bill Swartout
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90291

Abstract

One difficulty in understanding formal specifications is that there are often interactions between pieces of the specification, never explicitly stated, that only become apparent when the specification is analyzed or simulated. Symbolic evaluation has been proposed as a way of making such interactions apparent, but symbolic evaluators often produce enormous execution traces that are tedious and difficult to examine. This paper presents an automated system that employs a number of heuristics to select the most interesting aspects of the trace for presentation. The system uses this information to construct an English description of the trace. Due to the need for summarization and proof reformulation, the direct-translation approach, which worked well in describing specifications statically, is not suitable in this case. This paper describes the system and gives an example of its output.

1. Introduction

Regardless of the specification language used, formal program specifications can be tough to understand. Yet, because a specification is frequently the means by which a customer communicates his desires to a programmer, it is critical that both the customer and programmer be able to examine and comprehend the specification. Our experience with Gist, a high-level specification language being developed at ISI [1], has indicated that two of the major impediments to understandability are the unfamiliar syntactic constructs of the language and non-local interactions between parts of the specification. These interactions are often not apparent from a casual examination of the specification.

In an earlier paper, the Gist paraphraser and English generator were described [6]. These address the syntax problem by directly translating a Gist specification into English. We have found the paraphraser to be useful in both clarifying specifications and revealing specification errors. We expected that the English translation would be useful to people unfamiliar with Gist, because it would make Gist specifications accessible, but we were surprised to discover that experienced Gist users found it helpful for locating errors. The reason is that an English translation gives the specifier an alternate view of his specification which highlights some aspects of the specification which are easily overlooked in the formal Gist notation. But the paraphraser deals only with the static aspects of a specification.

This paper deals with the more difficult problem of making non-local specification interactions apparent by simulating the dynamic behavior implied by the specification. Our approach has been to discover non-local interactions by using a symbolic evaluator, developed by Don Cohen [2], to analyze a specification. As it evaluates the specification, the symbolic evaluator creates a description of the relationships among pieces of the specification.

This research is supported by the Air Force Systems Command, Rome Air Development Center under contract No. F80602 81 K 0056. Views and conclusions contained in this report are the author's and should not be interpreted as representing the official opinion or policy of RADC, the U.S. Government, or any person or agency connected with them. I wish to thank Robert Balzer, Don Cohen, Neil Goldman, Jack Mostow and Dave Wile for their comments and discussions.
It discovers what sorts of behaviors the specification allows, and what is prohibited by constraints. A symbolic evaluator does not require specific inputs. Instead it develops a description of the range of possible responses to a given range of inputs. Due to this characteristic, it is possible to test a specification symbolically over a range of inputs that would require many test runs if specific inputs were employed. A specifier interested in the behavior of his specification may direct the evaluator to execute one of the actions defined in the specification. As the evaluator executes the action, some apparently possible execution paths may be eliminated due to constraints, and a more detailed description of the inter-relationships within the specification is developed.

The symbolic evaluator produces an execution trace, which details everything discovered about the specification during evaluation. The trace includes not only base facts directly implied by the specification, but also any further implications that the evaluator may have derived from the base facts using its theorem prover. In addition, the trace records the proof structures justifying the facts it contains. Unfortunately, the trace is much too detailed and low-level to be readily understood by most people. To overcome that difficulty, we have constructed a trace explainer that selects from the trace those aspects believed to be interesting or surprising to the user and uses that information to produce an English summary.

There are a number of problems that make the simple direct-translation techniques (which worked well for the Gist paraphraser) unsuitable for the trace explainer. These problems include:

- Detail suppression. The trace is much too detailed to be described in its entirety. The trace explainer uses the structure of the specification and heuristics about what the user is likely to find interesting or surprising in selecting what to describe.

- Proof summarization and reformulation. The symbolic evaluator uses an augmented resolution-based theorem prover in deriving the consequences of the specification. While this approach is arguably attractive for its generality and simplicity, its arcane proof structures could impose a hardship on the user. The trace explainer attempts to reformulate resolution proof structures into more familiar and understandable ones.

- Referring expressions. With the Gist paraphraser, it was usually acceptable to use the name given to an object in the specification as its referring expression in the English paraphrase. The trace explainer cannot rely on this technique alone, since there are objects in the trace that do not appear in the specification. Moreover, depending on context, different referring phrases may be necessary even though the same object is being referred to.

The next section presents an example specification and a machine-produced description of its symbolic evaluation. Following that, the example will be used to illustrate the initial solutions we have found for the problems listed above. The final section outlines some of the further work needed to extend the capabilities of the explainer.

2. An Example

The example presented here is a simplified version of a specification for a postal package router (see [3,5]). The package router is designed to sort packages into bins corresponding to their destinations. A package arrives at a location called the source and its destination is read there.
[Figure 2-1: Package Router]

A binary tree of switches and pipes connects the source with the output bins. It is the job of the package router to set the switches so that the package winds up in the proper destination bin (see Figure 2-1). The simplified specification contains just one switch and two bins. In addition, a location called the input has been defined, which is where all boxes are originally located. The formal Gist specification appears in Figure 2-2. It is not necessary to understand the formal notations, since an English translation of the specification (produced by the paraphraser) is available: Figure 2-3 is the English paraphrase of the specification's type structure and Figure 2-4 describes the possible actions in this specification.

Having defined the type structure and actions, a specifier may wish to define some test sequences of actions to see how the constraints of the specification interact to limit the behavior of the specification in ways that are not obvious from the static specification alone. In Figure 2-5, the user has defined such a test sequence. The user has also given preconditions to define the initial state and the structure of the switching network and a postcondition to describe the final goal of the system.

begin
  type box(Location | location, Destination | bin);
  type location() unique supertype of
    <input() definition {input1};
     source(Source-outlet | switch) definition {source1};
     internal-location() unique supertype of
       <switch(Selected-outlet | internal-location,
               Outlet | internal-location :multiple) definition {switch1};
        bin() definition {bin1, bin2}>>;
  agent PackageRouter where
    action Insert[box]
      definition update :Location of box from input1 to source1;
    action Set[switch]
      precondition ~$:Location = switch
      definition update :Selected-outlet of switch to switch:Outlet;
    action Move[box]
      precondition box:Location = source1 or box:Location = a switch
      definition if box:Location = source1
        then update :Location of box to source1:Source-outlet
        else update :Location of box to box:Location:Selected-outlet;
    action Test[]
      precondition switch1:Outlet = bin1
      precondition switch1:Outlet = bin2
      precondition source1:Source-outlet = switch1
      precondition for all box || box:Location = input1
      postcondition for all box || box:Location = box:Destination
      definition begin
        Insert[a box]; Move[a box]; Insert[a box]; Move[a box];
        Set[a switch]; Move[a box]; Move[a box]
      end
end

Figure 2-2: Formal Gist Specification for Package Router

There are boxes, locations and package-routers. Each box has one location. Each box has one destination which is a bin. Internal-locations, sources and inputs are locations. Bins and switches are internal-locations. Bin1 and bin2 are the only bins. Switch1 is the only switch. The switch has one selected-outlet which is an internal-location. The switch has multiple outlets which are internal-locations. Source1 is the only source. The source has one source-outlet which is a switch. Input1 is the only input.

Figure 2-3: Paraphrase of Package Router Type Structure

A package-router can insert a box, set a switch, or move a box.

To insert a box:
  Action: The box's location is updated from input1 to source1.

To set a switch:
  Action: The switch's selected-outlet is updated to an outlet of the switch.
  Preconditions: The switch must not be the location of any box.

To move a box:
  Action: If: The box's location is source1,
          Then: The box's location is updated to the source-outlet of source1.
          Else: The box's location is updated to the selected-outlet of the switch that is the box's location.
  Preconditions: Either: 1. The box's location must be source1, or
                 2. The box's location must be a switch.

Figure 2-4: English Paraphrase of Possible Actions

Notice that in the action body, all operands are specified non-deterministically. For example, the first action invocation states that a box is to be inserted, but it does not say which box. The intent of such a statement is that any box may be inserted, as long as no constraints are violated. This non-deterministic reference is one of the freedoms allowed by the Gist specification language which gives the specifier greater expressive power and prevents him from having to over-specify behaviors. Because the user does not have to explicitly select parameters, he can see with just one test action whether it is ever possible to achieve the postconditions using the particular sequence of action invocations given. After the symbolic evaluator runs, the specifier can use the trace explainer to see an overview of the results of symbolic execution (see Figure 2-6).

To test:
  Action: 1. Insert a box. 2. Move a box. 3. Insert a box. 4. Move a box.
          5. Set a switch. 6. Move a box. 7. Move a box.
  Preconditions: For all boxes: The box's location must be input1.
                 The source-outlet of source1 must be switch1.
                 An outlet of switch1 must be bin2.
                 An outlet of switch1 must be bin1.
  Postconditions: For all boxes: The box's location must be the box's destination.

Figure 2-5: English Paraphrase of a Test Action

1. A box, call it box1, is inserted.
   Result: The new location of box1 is source1.

The explainer describes the action invocation as it was stated in the test case. It makes up the name "box1" for this box so that it can be conveniently referred to later. The explainer then describes the result of this action invocation.

2. A box is moved. The box must be box1 since
   2.1 For all boxes except box1, the box's location is input1, and
   2.2 The precondition of moving a box requires that either:
       2.2.1 The box's location must be source1, or
       2.2.2 The box's location must be a switch.
   Result: The new location of box1 is switch1.

Something surprising has happened. In the test case, the action invocation was made with a non-deterministic parameter, but the constraints of the specification force the selection of one particular box, namely box1. The explainer recognizes this sort of behavior as surprising and describes not only the restriction on binding the parameter, but also the reasons behind it.

3. A box, call it box2, is inserted. The box must not be box1 since
   3.1 The location of box1 is switch1, and
   3.2 The location of the box to be inserted must be input1 since the update in inserting a box requires it.
   Result: The new location of box2 is source1.

4. A box is moved. The box must be box1 since otherwise, at the start of step 5, the location of box2 would be switch1 but the precondition of setting a switch requires that the switch must not be the location of any box.
   Result: The new location of box1 is the selected-outlet of switch1. Switch1 is not the location of any box.

At the start of step 4, box1 is at the switch, and box2 is at source1. It would appear that either one could be moved in step 4 since both satisfy the preconditions of the move. However, if box2 moved, it would be impossible to execute the next step.
So, as the explainer describes, the non-local interaction with step 5 constrains the parameter binding.

5. A switch is set. The switch must be switch1 since there are no other switches.
   Result: The new selected-outlet of switch1 is an outlet, call it outlet1, of the switch.

6. A box is moved. The box must be box2 since the precondition of moving a box requires that either:
   6.1 The box's location must be source1, or
   6.2 The box's location must be a switch.
   Result: The new location of box2 is switch1.

The proof that the box to be moved must be box2 is actually quite involved. The system currently has no good way of summarizing proofs of this type, so it falls back on another heuristic. The explainer examines the proof structure to find the statement in the specification that was used specifically to constrain this choice and displays it. That is, rather than showing a proof, we just display the parts of the specification that became relevant in constraining this behavior. This heuristic seems to work well, and it provides the explainer with an "escape" so that it can convey some information even if it can't reformulate the proof. Although it's usually not too difficult to figure out how the specification statement constrains the behavior, we plan to add a facility to allow the user to ask for further elaborations when he has trouble (see "Future Directions").

7. A box is moved. The box must be box2.
   Result: The new location of box2 is outlet1. For all boxes, the box's location is the box's destination.

Since the justification for this step is the same as for the preceding, the explainer omits it.

Figure 2-6: Machine-Produced Description of Symbolic Evaluation of Test

3. System Organization

Our facilities for making specifications more understandable are organized as shown in Figure 3-1. Like the Gist paraphraser, the trace explainer employs an intermediate case frame representation which is converted to English by a relatively straightforward English generator. The explainer itself is organized into individual explanation methods. There are two basic kinds of explanation methods. Trace-based methods can describe particular situations that arise in the trace, such as an action invocation or the justification of a fact found by the evaluator. The other kind, structuring methods, organize the output of the trace-based methods into higher-level explanation structures. For example, one such explanation method organizes two statements into a statement-reason explanation structure of the form "P since Q" (see [8]); a sketch of this idea follows below. There can be several explanation methods that describe the same object or behavior, but at differing levels of detail or highlighting different aspects. It is up to the explainer to choose the most appropriate explanation method for a given situation. Currently, much of this decision-making is handled procedurally. While this organization has been adequate to handle the sorts of specifications shown here, a more sophisticated explanation planning mechanism will probably be needed to handle larger specifications.

[Figure 3-1: System Overview]
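To make the "P since Q" structuring method concrete, here is a minimal Python sketch; the class and field names are hypothetical illustrations and are not taken from the actual Gist implementation.

```python
from dataclasses import dataclass, field

@dataclass
class StatementReason:
    """Output of a 'P since Q' structuring method: a statement produced by
    a trace-based method together with the reasons supporting it."""
    statement: str
    reasons: list = field(default_factory=list)

    def render(self):
        lines = [self.statement + (" since" if self.reasons else "")]
        lines += [f"  {i}. {r}" for i, r in enumerate(self.reasons, 1)]
        return "\n".join(lines)

# Roughly the structure behind step 2 of Figure 2-6:
print(StatementReason(
    "The box must be box1",
    ["for all boxes except box1, the box's location is input1",
     "the precondition of moving a box requires that the box's location "
     "be source1 or a switch"]).render())
```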
4. Issues in Explaining the Trace

The chief problems confronting us in explaining the trace have been 1) selecting and summarizing the most appropriate information to present to the user from the large number of inferences produced by the symbolic evaluator, 2) reformulating the theorem prover's proofs into a more understandable form and 3) dealing with changing referring expressions.

4.1. Selection and Summarization

Both the structure of the specification and heuristics about what the user wants to see are used to guide the summarization of the trace. We assume that a particular specification has the structure it has because it models to some degree the way the specifier thought about the problem. Some of the explanation methods exploit this structure. Consider two explanation methods, both offering descriptions of action invocations. One might use the structure of the specification and produce a very summary description by just translating the invocation statement itself and stating the results of the invocation (similar to the example given above). Another explanation method could give a more detailed description by actually describing the body of the action that was invoked as well.

The structure of the specification is a help in summarizing the trace, but it is not enough, since many of the facts the evaluator discovers (and the explainer must choose among) come from the interaction of several pieces of the specification. To decide which interactions to present, the system must have some idea of what the user will be interested in. For example, a customer unfamiliar with the specification might want an overview that described the "main line" or normal execution path. On the other hand, the specifier who wrote the specification would want to see the parts of the specification that appear to be incorrect because they use the specification language in a surprising or unusual way. Our current implementation has concentrated on presenting these surprising behaviors, rather than the normal case.

What, then, is surprising? We consider things such as superfluous code, the use of an overly general language construct, or, worst of all, a specification which is inherently contradictory to be surprising. More specifically, a conditional branch which must always follow the same path, constraints which are never employed, and (as in the example presented here) the use of a non-deterministic parameter that turns out to be deterministic are all surprising. The explainer's methods recognize surprising situations and describe them to the user.

The kinds of surprises described above are language-dependent. Another kind of surprising situation will arise as our work on incremental specification proceeds further. The incremental view of specification states that detailed specifications do not appear all at once, but rather are gradually refined layer by layer from more abstract specifications. Each succeeding layer is in a sense an implementation of the one above it. Surprises will occur when the symbolic evaluator discovers that one layer of a specification does not meet the goals set forth for it at a higher level.

4.2. Reformulating Proofs

While a resolution theorem prover may be attractive for many reasons, certainly the lucidity of its proofs is not one of them. Our approach to this problem follows that suggested by Webber and Joshi [7]: we attempt to reformulate the resolution proofs into ones that seem more natural. Some of the recognizers we have developed find simple proof structures like modus ponens, while others find more complicated structures such as proof by contradiction or a version of the pigeonhole principle. For example, the pigeonhole rule examined the proof that the box moved in step 2 is box1 and recognized that the proof has the form of successively eliminating possible candidates. Since one reformulation may cover several resolution steps, recognizers like this help both by reducing the amount of information that must be conveyed and by structuring it more appropriately.

At times the recognizers alone provide sufficient information to know how a proof should be described. At other times it is necessary to consider how the proof description fits into the trace description as a whole. For example, in describing step 4 in the example, a hypothetical construction was used: otherwise, at the start of step 5, the location of box2 would be switch1, since the selection of the box to be moved was constrained by an event still in the future.
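A sketch of one such recognizer, in the same illustrative vein (the proof encoding and function name are assumptions made for this example, not the system's internals): the elimination pattern succeeds when all but one candidate can be ruled out, and the whole chain of resolution steps collapses into a single explained conclusion.

```python
def recognize_elimination(candidates, ruled_out):
    """ruled_out: dict mapping an eliminated candidate to the reason it
    was eliminated. The pattern applies when exactly one candidate
    survives every elimination; the derivation is then reformulated as
    'must be X, because every alternative is ruled out'."""
    survivors = [c for c in candidates if c not in ruled_out]
    if len(survivors) != 1:
        return None                       # pattern does not apply here
    return {"conclusion": survivors[0],
            "because": [f"{c} is ruled out: {why}"
                        for c, why in ruled_out.items()]}

# Step 2 of Figure 2-6: every box except box1 is still at input1, so it
# cannot satisfy the precondition of the move.
print(recognize_elimination(
    ["box1", "box2"],
    {"box2": "its location is input1, not source1 or a switch"}))
```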
The explainer therefore generates the phrase the box to be inserted rather than box2. 5. Future Directions Our current implementations of the symbolic evaluator and trace explainer produced the examples contained in this paper. While our systems are still very much laboratory prototypes, we feel that they have begun to demonstrate the utility of the techniques outlined here in debugging specifications. Even so, we are aware that these techniques will not, by themselves, be sufficient for much larger specifications. The four areas that seem to need attention are the symbolic evaluator, incremental specification, allowing the user to ask follow-on questions about the summaries the explainer provides. and a better mechanism for planning explanations. 4.2. Reformulating Proofs While a resolution theorem prover may be attractive for many reasons, certainly the lucidity of its proofs is not one of them. Our approach to this problem follows that suggested by Webber and Joshi [7]: we attempt to reformulate the resolution proofs into ones that seem more natural. Some of the recognizers we have developed find simple proof structures like modus ponens, while others find more complicated structures such as proof by contradiction or a version of the pigeonhole principle. For example, the pigeonhole rule examined the proof that the box moved in step 2 is box1 and recognized that the proof has the form of successively eliminating possible candidates. Since one reformulation may cover several resolution steps, recognizers like this help both by reducing the amount of information that must be conveyed and by structuring it more appropriately. The current symbolic evaluator is not goal driven. Rather than having a model of what might be interesting to look for in a specification, the evaluator basically does forward-chaining reasoning until it reaches some heuristic cut-offs. In the process it generates interesting as well as uninteresting results, which the explainer must sift through. While this works reasonably well for the small specifications we have been working with, larger specifications could prove overwhelming. One solution may be to make the symbolic evaluator more goal-directed. By giving it, at least at a high level, a model of what might be interesting. it could be more directed in its search. After narrowing the search using goals, the evaluator could then switch to forward-chaining to more completely examine the smaller problem space. Such an approach would benefit both the evaluator because it would run faster, and the exolainer, because the qoal structure would aid At times the recognizers alone provide sufficient information to know how a proof should be described. At other times it is necessary to consider how the proof description fits into the trace description as a whole. For example, in describing step 4 in the example, a hypothetical construction was used: otherwise, at the start of step 5, the location of box2 substantially in generating explanations. would be switch 1 since the selection of the box to be moved was constrained by an event still in the future. 4.3. Referring Expressions Because the symbolic evaluator dynamically creates symbolic The notion of incremental specification has already been mentioned above. Aside from indicating surprising behaviors incremental specification could also improve the performance of the evaluator through higher level abstractions [4], since a few reasoning steps at the high level could replace many low level inference steps. 
instances of types as it reasons about them, the trace explainer must be able to create names for such objects, even though they never appear in the original specification. For example, /30x7, mentioned in line 1 of the trace description never appears in the specification. It is a symbolic instance created by the evaluator to represent “the box inserted in step 1”. While the evaluator creates a new instance at each action invocation. the explainer is more parsimonious. creating new names only when equivalence to previous names cannot be established. Thus, in step 2, no new name is required to describe the box to be moved since it must be box1 . The current implementation of the explainer makes no provision for the user to ask further questions about the descriptions it produces. However, such a capability is required because the descriptions are produced heuristically. The system may assume that the user will readily understand something that actually requires further description. For the near future, we do not envision allowing the user to ask questions in natural language, but instead, we will let him point at the pieces of the description he did not understand (using a mouse or other pointing device) and ask for further description. 406 Finally, we are currently implementing an explanation planning mechanism that will allow us to represent plans for presenting information. This mechanism will allow us to describe goals and the capabilities of plans along multi-dimensional scales. The dimensions will be either categorical or ordinal. For example, some of the kinds of dimensions that seem to be important in explanation are: the type of object to be described, the form the description is to take, degree of verbosity, and level of detail. The planning mechanism will support matching goals and methods represented in this space, and will provide a mechanism for selecting the most appropriate method when only a partial match can be found. References 1. Balzer. R., Goldman, N. & Wile, D. Operational specification as the basis for rapid prototyping. Proceedings of the Second Software Engineering Symposium: Workshop on Rapid Prototyping. ACM SIGSOFT, April, 1982. 2. Cohen, D. Symbolic execution of the Gist specification language. Proceedings of the Eighth IJCAI, IJCAI, 1983. 3. Hommel, G. (ed.). Vergleich verschiedener Spezifikationsverfahren am Beispiel einer Paketverteilanlage. Kernforschungszentrum Karlsruhe GmbH, August, 1980. PDV- Report, KfK-PDV 186, Part 1 4. Sussman, G. SLICES: at the boundary between analysis and synthesis. Tech. Rept. Al Memo 433, MIT, July, 1977. 5. Swat-tout. W. and Balzer. R. “On the Inevitable Intertwining of Specification and Implementation.” Communications of the ACM 25, 7 (July 1982), 438:440. 6. Swartout, W. Gist English Generator. Proceedings of AAAI-82, AAAI, 1982. 7. Webber and Joshi. Taking the initiative in natural language data base interactions: justifying why. University of Pennsylvania, 1982. 8. Weiner, J. “BLAH, A system which Artificial Intelligence 75 (1980), 19-48. explains its reasoning.” 407
The Bayesian Basis of Common Sense Medical Diagnosis*

Eugene Charniak
Department of Computer Science
Brown University
Providence, Rhode Island 02912

I Introduction

While the mathematics of conditional probabilities in general, and Bayesian statistics in particular, would seem to offer a foundation for medical diagnosis (and other cases of decision making under uncertainty), such approaches have been rejected by most "artificial intelligence in medicine" researchers. Typically, Bayesian statistics have been rejected for the following reasons.

1) In its pure form it would require an impossible number of statistical parameters.
2) The only way to escape from (1) is to impose absurd statistical independence assumptions [7,9].
3) And at any rate, Bayesian statistics only works for the single disease situation [3,6].

In this paper we will argue that while objection (1) is correct, (2) is not, making (1) moot. Furthermore, while (3) seems to be valid, even there, Bayesian statistics is perfectly compatible with various heuristic solutions to the multiple-disease problem. To reject Bayesian statistics on the basis of (3) would be like rejecting closed-form solutions to differential equations because the toughest ones must be solved numerically. Thus we will claim that Bayesian statistics offers a realistic basis for practical medical diagnosis.

On the other hand, we will not argue that adoption of Bayesian statistics will lead to better diagnosis programs! Instead we will suggest that the common sense application of Bayesian statistics will lead to programs that will be hard to distinguish from current "non-Bayesian" programs. Or, to put it slightly differently, while programs such as MYCIN [8] and Internist [6] profess to be non-Bayesian, when one ignores what their authors say, and concentrates on what the programs do, they turn out to be remarkably Bayesian after all. Thus the goal here is one of clarification.

*This paper came into being in an attempt to explain the limitations of Bayesian statistics. To my initial horror, the more I pressed, the less important the limitations seemed to be. However, in coming to this conclusion I had to have a lot of my confusions cleared up, and for this I would like to thank Stuart Geman, Drew McDermott, Jeffrey Vitter, and the students of CS201 (particularly Jim Hendler and David Wittenberg) for their help. Thanks also to Graeme Hirst, who made helpful comments on an earlier draft of this paper. The confusions which remain are, of course, my own.

However, this is not to say that absolutely everyone is against Bayesian statistics. Duda, Hart and Nilsson [1] propose a scheme for combining a general inference system with Bayesian statistics. Indeed, the scheme presented here can be thought of as a continuation of their line of research, since we will assume, as they do, that the initial probabilities will be obtained from intuitive "guesses" by experts. They also have a nice discussion of how to handle the inevitable contradictory numbers that will arise from such a process. In related work, Pearl [4] shows how distributed systems can quickly update such probabilities. However, to date there is no evidence that such work has had any influence on AI-in-medicine researchers, and it is not too hard to see why. All such systems require very strong assumptions, regarding both independence of symptoms and exclusivity of diseases - assumptions which are widely believed not to hold in the medical domain.
These reservations have not been addressed. This we intend to do.

II Bayes's Theorem

Bayesian statistics is based upon the idea of conditional probabilities. If we have observed symptoms s1 ... sn then the best disease to postulate would be that disease di that maximized P(di | s1 ... sn), the conditional probability of di given s1 ... sn.¹ Unfortunately no one knows the necessary numbers. Indeed, given that we would need such numbers for all possible subsets of symptoms, the number of such parameters is astronomical. When we are dealing with fifty or sixty findings, in all likelihood the particular combination is unique in the history of the universe. In such cases it is difficult to collect reliable statistics.

¹Our use of the terms "symptom" and "disease" is somewhat loose. By "symptom" we simply mean anything which could suggest the presence of a disease, including the presence of a second disease. Nothing however hangs on this.

Now Bayes's formula offers a way to compute numbers. The form we will use in this paper is this:

P(di | s1 & ... & sn) = P(s1 & ... & sn | di) * P(di) / P(s1 & ... & sn)

The standard form of Bayes's theorem has as its denominator the following term:

SUM over j of P(s1 & ... & sn | dj) * P(dj)

But this form requires the dj's to be exhaustive and exclusive - things the last version does not require. Unfortunately we do not have the numbers to plug into Bayes's theorem either. The standard answer to this problem is to make statistical independence assumptions. We shall return shortly to discuss the reasonableness of these assumptions. For the moment we will consider how they help the situation. We can simplify this last equation by making two such independence assumptions. The first is the independence of two symptoms. Formally, this assumption is this:

P(si | sj) = P(si)

From this we can infer the following:

P(si & sj) = P(si) * P(sj)

The other assumption we need is that two symptoms are not simply independent among people at large, but also in the subset of people suffering from a disease d:

P(si | sj & d) = P(si | d)

This is equivalent to:

P(si & sj | d) = P(si | d) * P(sj | d)

By making these two independence assumptions for all subsets of symptoms we can reduce Bayes's theorem to:

P(di | s1 & ... & sn) = P(di) * [P(s1 | di) * ... * P(sn | di)] / [P(s1) * ... * P(sn)]

However, even this reduced load of statistical parameters is not known, but rather has to be obtained from physicians' subjective estimates. It seems plausible that physicians have some idea about the conditional probabilities of symptoms given diseases. This is what one learns in medical school. However, the prior probabilities P(di) and P(sj) are at best known only within a few orders of magnitude. Nevertheless, to ignore them completely, as is done by several programs, really amounts to saying that they are all the same, an obviously bad approximation.

It is instructive to rewrite the last equation as:

P(di | s1 & ... & sn) = P(di) * I(di | s1) * ... * I(di | sn)

where we have defined I(d | s) = P(s | d) / P(s). This reformulation is useful because it suggests how to modify previous probability estimates when we get a new symptom. Initially we give every disease its prior probability P(di). To take a new symptom sj into account we multiply the previous probability of disease di by I(di | sj) to get the probability in light of the newest information.

Actually, we can make this even simpler. We virtually always modify probabilities by multiplying them by some factor. Because of this, and because probabilities vary over such a wide range (the prior probability of a cough is about 10^-1, but that of some rare disease might be about 10^-10), it makes sense to use the logarithm of probabilities rather than the probabilities themselves. Taking the log of both sides of the last equation gives us this:

log(P(di | s1 & ... & sn)) = log(P(di)) + log(I(di | s1)) + ... + log(I(di | sn))

So instead of keeping around the various conditional probabilities, we really need to know the logarithm of the I factors, and we modify our belief in the presence of a disease by adding in a number. The result will look much like the Internist [6] updating system (except that Internist does not use the prior probabilities).
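The additive scheme is small enough to sketch directly. The following Python fragment uses toy probability tables invented for illustration; only the structure of the computation reflects the text.

```python
import math

# Invented toy numbers, not from the paper.
PRIOR = {"flu": 1e-2, "rare-disease": 1e-6}
P_S_GIVEN_D = {("cough", "flu"): 0.6, ("cough", "rare-disease"): 0.1,
               ("fever", "flu"): 0.5, ("fever", "rare-disease"): 0.9}
P_S = {"cough": 0.1, "fever": 0.05}

def log_I(d, s):
    """log of the update factor I(d | s) = P(s | d) / P(s)."""
    return math.log(P_S_GIVEN_D[(s, d)] / P_S[s])

def log_posterior(d, symptoms):
    """log P(d | s1 & ... & sn) under the two independence assumptions:
    log P(d) plus one additive update per symptom. Because the common
    denominator cancels across diseases, only rankings and ratios of
    these scores are meaningful."""
    return math.log(PRIOR[d]) + sum(log_I(d, s) for s in symptoms)

for d in PRIOR:
    print(d, log_posterior(d, ["cough", "fever"]))
```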
III Independence

Now let us turn to the independence assumptions we needed in order to get this far. As we have seen, the need for such assumptions has been a big argument against a Bayesian approach. However, a bit of reflection should convince you that wild non-independence of symptoms will kill any scheme whatsoever. When we take a new symptom into account we have to know how it modifies our belief in different diseases. Independence tells us that the modifications it makes are independent of the other symptoms we have seen. If such a simplifying assumption cannot be used, then the modifications change drastically depending on the exact group of symptoms we have seen, and, as we have already noted, such numbers are unavailable on a priori grounds. But are these independence assumptions the right assumptions to make? In this section we will show that they are not as bad as they have been made out.

A. Independence of Symptoms

We made two assumptions, the independence of symptoms, and the independence of symptoms given a disease:

P(si | sj) = P(si)
P(si | sj & d) = P(si | d)

Of these, the second is more reasonable than the first, if only because the first is completely unreasonable. To see why the independence of symptoms is such a bad assumption, note that if we have two symptoms that are typically the results of the same diseases, then we will tend to see them together a lot. As such, they will not be independent. For example, vomiting and diarrhea go together so commonly that

P(diarrhea & vomiting) >> P(diarrhea) * P(vomiting)

This example is only the most obvious. A little thought should convince you that if two or more symptoms tend to suggest the same disease, that virtually assures that they will not be independent.³ Indeed, the independence-of-symptoms assumption is so bad that it is fortunate that it doesn't matter. Look again at Bayes's formula with the two independence assumptions factored in.

³There is a misunderstanding in the literature related to this point. Pednault et al. [5] claim to show that for a system that obeys both of the independence assumptions, plus exhaustiveness and mutual exclusivity of diseases, "no updating can take place". Subsequently Pearl [4] argues that Pednault must be wrong because he has a provably correct updating scheme. Pednault's result is correct, but it has nothing to do with the process of updating. As suggested from the above discussion, it is rather the case that any such system must also have the unfortunate property that none of the symptoms shed any light on the diseases. Thus, updating works correctly, but because the symptoms are completely uncorrelated with diseases, the posterior probabilities are the same as the prior probabilities. It is for this reason that "no updating can take place".
The independence of symptoms gives us the denominator. It is important to note that since the denominator does not mention the disease we are talking about, it will be the same for all diseases. Thus, the error we make by assuming independence of symptoms will be equally factored into our estimate of the probability of all diseases. So not only will this have no effect on the rank ordering of which diseases are most probable, but the ratio of the probabilities of two diseases will remain unaffected (or equivalently, the difference between two logs of probabilities will stay the same). This simply suggests that we should not base decisions on the absolute values of the probabilities, a conclusion that Pople [6], citing an article by Sherman, argues for on empirical grounds.⁴

⁴Unfortunately, my (possibly early) copy of the Pople article has no reference information about the Sherman article.

B. Independence of Symptoms Given a Disease

The other assumption we made was the independence of symptoms given a particular disease:

P(si | sj & d) = P(si | d)

This is a much better assumption than plain independence of symptoms, but, as opposed to the latter, if it fails for particular cases it could affect the ranking of disease possibilities. Let us consider a typical case where it does fail. A person with arterial sclerosis (which we abbreviate asc) will sometimes show lung symptoms. Also, if the patient shows one such lung symptom then he is more likely to show a second. In other words:

P(lung-sym2 | asc & lung-sym1) >> P(lung-sym2 | asc)

Thus the two symptoms are not independent given asc. Again, however, there is an easy solution to this problem. To see it, consider the "common sense" explanation of what is going on in this case. A doctor will explain these happenings by saying that asc can cause heart complications. In particular, it can cause one of two pathological states called right heart syndrome and left heart syndrome. In the left heart syndrome, blood backs up into the lungs, so we see various lung symptoms. Thus, seeing one lung symptom probably means that the patient has left heart syndrome, and is more likely to show other lung symptoms.

Such explanations form a typical argument for "causal" reasoning about diseases. Causal reasoning is indeed powerful, but the next step is usually to contrast it with a Bayesian approach. The two are not mutually exclusive. Indeed there is a Bayesian analog of this kind of reasoning. To see how it works, we first introduce the pathological state left heart syndrome into our Bayesian reasoning, by defining the following:

P(asc | left-heart-syndrome)
P(left-heart-syndrome | lung-symptom)

Then, by analogy with causal reasoning we have this:

P(d | s) = P(d | path-state) * P(path-state | s)

Strictly, this equation only holds given two conditions. The first is this:

P(d | not path-state & s) * P(not path-state | s) << P(d | path-state & s) * P(path-state | s)

Informally, this says that s is only related to d through the pathological state. The second assumption is this:

P(d | path-state & s) = P(d | path-state)

That is, it requires that d and s are independent given the pathological state. In cases where causal reasoning is appropriate, both of these assumptions should hold. Thus, to mimic causal reasoning, we would first calculate the new probability of the pathological state in light of the symptom observed, and then see how our belief in a disease is modified by our belief in the pathological state.
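A minimal sketch of this two-step computation, with invented toy numbers (the structure, not the values, is the point):

```python
# All values are illustrative assumptions, not data from the paper.
P_D_GIVEN_PS     = 0.40   # P(asc | left-heart-syndrome)
P_PS_GIVEN_S1    = 0.70   # P(left-heart-syndrome | first lung symptom)
P_PS_GIVEN_S1_S2 = 0.95   # P(left-heart-syndrome | both lung symptoms)

# P(d | s) ~= P(d | path-state) * P(path-state | s), valid under the two
# conditions just stated.
p_d_one  = P_D_GIVEN_PS * P_PS_GIVEN_S1
p_d_both = P_D_GIVEN_PS * P_PS_GIVEN_S1_S2
print(p_d_one, p_d_both)   # the second symptom raises belief in asc only
                           # through its effect on the pathological state
```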
If we have two symptoms associated with the pathological state, when we observe the second the appropriate equation is this:

LP(d | s1 & s2) = LP(d | s1) + LI(path-state | s2)

Given LI(path-state | s2) > LI(d | s2), this equation correctly predicts the following relation:

LP(d | s1 & s2) > LP(d | s1) + LI(d | s2)

This result is of interest because it shows how the introduction of the pathological state accounts for the fact that s1 and s2 are not independent given d. The point is that by assuming the existence of this pathological state we remove this common objection to our independence assumptions. On the other hand, note that we had to introduce another statistical parameter to get rid of the independence assumption, namely P(d | path-state). But this is not a case of merely substituting one parameter for another. If, as seems reasonable, all of the symptoms si associated with path-state are independent given path-state, then we have introduced only one new parameter while accounting for many previous cases of symptom dependence.

C. What To Do When All Else Fails

In the last section we saw that we could ignore the non-independence of symptoms, and fix the non-independence of symptoms given a disease by the inclusion of commonly recognized pathological states. However, this latter technique will not always work. One example is given by Szolovits and Pauker [9]. The probability of finding a second heart murmur given a diagnosis of heart disease will not be independent of the first murmur. While it would be possible to introduce a "pathological state" to account for this, my medical informant tells me that this is not natural (as opposed to the "right heart syndrome" situation), and instead the doctor must simply learn the relations.

But if a doctor must handle this as a special case, then it does not seem unreasonable for our program to do the same thing. Furthermore, we already have at hand a natural way to do this. The technique is provided by MYCIN [8]. In MYCIN, rather than restrict the program to inference rules of the form

symptom ==> disease (with likelihood = X)

(which would be the MYCIN equivalent of a Bayesian conditional probability), the program also allows

(and symptom1 symptom2 ... symptomN) ==> disease (with likelihood = X)

This latter form will handle the case when the symptoms are not independent. Admittedly, we need to worry about cases where, say, all but one of the symptoms are present, and the last is unknown (a case MYCIN does not really handle well), but the extension is not all that complicated.
As is commonly recognized, however, this “solu- tion” is computationally untractable. Thus Bayesian statistics probably says nothing use- ful about the multiple-disease situation. However, it is false that Bayesian statistics precludes handling multiple diseases. Firstly, the version of Bayes’s theorem we used does not require the diseases to be mutually exclusive (the more typical version does, which is why we did not use the typical version). Next, suppose we use Bayes’s theorem as in the single-disease case, and get a list of diseases, ordered by probability. Now, of course, more than one may have high probability. While Bayesian statistics says nothing about how to make use of such a list, there are well known heuristic techniques which take off from this point. In particular we refer here to the method used in Internist. There the highest ranking disease is used to define a subset of the previously postu- lated diseases which all explain roughly the same symp- toms. Internist assumes, (in accordance with Occams Razor) that only one of these diseases will be present, and tries to distinguish between them by calling for further tests. If these support the previous highest ranking disease, then it is concluded to be present. At this point the “winning” disease, and its competitors, are all removed from the list of postulated diseases along with the symptoms the winner accounted for. If there are still remaining diseases on the list, the process is repeated, with the new probabilities computed on only the remaining symptoms. This goes on until no more diseases are left, or until the assumption of no further diseases is more plausible than any of the remaining diseases. Now admittedly, this procedure owes nothing to Bayesian statistics. However, there is nothing in it incompatible with Bayesian statistics. We can start by collecting Bayesian statistics on the probabilities of each of the diseases. Indeed, this is virtually exactly what Internist does, although it is not labeled as such. v Conclusion Our conclusions can be summarized as follows: 1) While nobody has detailed probabilities for diseases, symptoms, etc. doctors do have order of magnitude estimates. 2) The independence assumptions are not nearly as bad as commonly portrayed. The lack of indepen- dence of symptoms is no problem, and the lack of independence of symptoms given diseases often (most of the time?) can be removed by introducing pathological states and causal reasoning - some- thing most AI programs do anyway. 3) Where independence does break down completely, we can use techniques such as those in MYCIN. (Indeed, the lack of independence should be the only reason for introducing such techniques.) 4) As far as we can tell, multiple diseases cannot be handled by Bayesian methods. However, using Baye- sian methods as if there were no multiple diseases will produce valid results up until the end, when it is no longer possible to pick the top disease as the “winner”. Then we must use heuristic methods. Overall our position in this paper has been conserva- tive in that we have only endeavored to reconstruct already existing programs from a mathematically secure basis. It is possible to hope, however, that more might result from the recognition that mathematical rigor, and common sense practicality, do not necessarily conflict in the domain of medical diagnosis. 
V Conclusion

Our conclusions can be summarized as follows:

1) While nobody has detailed probabilities for diseases, symptoms, etc., doctors do have order of magnitude estimates.
2) The independence assumptions are not nearly as bad as commonly portrayed. The lack of independence of symptoms is no problem, and the lack of independence of symptoms given diseases often (most of the time?) can be removed by introducing pathological states and causal reasoning - something most AI programs do anyway.
3) Where independence does break down completely, we can use techniques such as those in MYCIN. (Indeed, the lack of independence should be the only reason for introducing such techniques.)
4) As far as we can tell, multiple diseases cannot be handled by Bayesian methods. However, using Bayesian methods as if there were no multiple diseases will produce valid results up until the end, when it is no longer possible to pick the top disease as the "winner". Then we must use heuristic methods.

Overall our position in this paper has been conservative in that we have only endeavored to reconstruct already existing programs from a mathematically secure basis. It is possible to hope, however, that more might result from the recognition that mathematical rigor and common sense practicality do not necessarily conflict in the domain of medical diagnosis. The recognition that formal logic need not be divorced from knowledge representation theory [2] has led to renewed interest in the insights of mathematicians and philosophers. Any statistician will recognize that the mathematics in this paper is a couple of hundred years old. Statistics has not stood still in that time.

VI References

1. R. O. Duda, P. E. Hart, and N. J. Nilsson, "Subjective Bayesian methods for rule-based inference systems," in Proceedings of the 1976 National Computer Conference, AFIPS Press (1976).
2. Patrick J. Hayes, "In defense of logic," pp. 559-565 in Proceedings of the Fifth International Joint Conference on Artificial Intelligence (1977).
3. Stephen G. Pauker and Peter Szolovits, "Analyzing and simulating taking the history of the present illness: context formation," pp. 109-118 in Computational Linguistics in Medicine, ed. W. Schneider and A. L. Sagvall Hein (1977).
4. Judea Pearl, "Reverend Bayes on inference engines: a distributed hierarchical approach," pp. 133-136 in Proceedings of the National Conference on Artificial Intelligence, 1982 (1982).
5. E. P. D. Pednault, S. W. Zucker, and L. V. Muresan, "On the independence assumption underlying subjective Bayesian updating," Artificial Intelligence 16(2), pp. 213-222 (1981).
6. Harry Pople, "Heuristic methods for imposing structure on ill structured problems: the structuring of medical diagnostics," pp. 119-185 (forthcoming).
7. Edward H. Shortliffe and Bruce G. Buchanan, "A model of inexact reasoning in medicine," Mathematical Biosciences 23, pp. 351-379 (1975).
8. Edward H. Shortliffe, Computer-Based Medical Consultations: MYCIN, American Elsevier Publishing Company, New York (1976).
9. Peter Szolovits and S. G. Pauker, "Categorical and probabilistic reasoning in medical diagnosis," Artificial Intelligence 11, pp. 115-144 (1978).
Jaime G. Carbonell
Computer Science Department, Carnegie-Mellon University, Pittsburgh, PA 15213

Abstract

Derivational analogy, a method of solving problems based upon the transfer of past experience to new problem situations, is discussed in the context of other general approaches to problem solving. The experience transfer process consists of recreating lines of reasoning, including decision sequences and accompanying justifications, that proved effective in solving particular problems requiring similar initial analysis. The derivational analogy approach is advocated as a means of implementing reasoning from individual cases in expert systems.¹

1. Introduction: The Role of Analogy in Problem Solving

The term "problem solving" in artificial intelligence has been used to denote disparate forms of intelligent action to achieve well-defined goals. Perhaps the most common usage stems from Newell and Simon's work [22], in which problem solving consists of selecting a sequence of operators (from a pre-analyzed finite set) that transforms an initial problem state into a desired goal state. Intelligent behavior consists of a focused search for a suitable operator sequence by analyzing the states resulting from the application of different operators to earlier states.² Many researchers have adopted this viewpoint [12, 26, 23]. However, a totally different approach has been advocated by McDermott [19] and by Wilensky [32, 33] that views problem solving as plan instantiation. For each problem posed there are one or more plans that outline a solution, and problem solving consists of identifying and instantiating these plans. In order to select, instantiate, or refine plans, additional plans that tell how to instantiate other plans or how to solve subproblems are brought to bear in a recursive manner. Traditional notions of search are totally absent from this formulation. Some systems, such as the counterplanning mechanism in POLITICS [6,3], provide a hybrid approach, instantiating plans whenever possible, and searching to construct potential solutions in the absence of applicable plans.

A third approach is to solve a new problem by analogy to a previously solved similar problem. This process entails searching for related past problems and transforming their solutions into ones potentially applicable to the new problem [24]. I developed and advocated such a method [7,8] primarily as a means of bringing to bear problem solving expertise acquired from past experience. The analogical transformation process itself may require search, as it is seldom immediately clear how a solution to a similar problem can be adapted to a new situation.

¹This research was supported by the Office of Naval Research (ONR) under grant numbers N00014-79-C-0661 and N00014-82-C-50767.

²In means-ends analysis, the current state is compared to the goal state and one or more operators that reduce the difference are selected, whereas in heuristic search, the present state is evaluated in isolation and compared to alternate states resulting from the application of different operators (to states generated earlier in the search), and the search for a solution continues from the highest-rated state.
A useful means of classifying different problem solving methods is to compare them in terms of the amount and specificity of domain knowledge they require.

- If no structuring domain knowledge is available and there is no useful past experience to draw upon, weak methods such as heuristic search and means-ends analysis are the only tools that can be brought to bear. Even in these knowledge-poor situations, information about goal states, possible actions, their known preconditions and their expected outcomes is required.

- If specific domain knowledge in the form of plans or procedures exists, such plans may be instantiated directly, recursively solving any subproblems that arise in the process.

- If general plans apply, but no specific ones do so, the general plans can be used to reduce the problem (by partitioning the problem or providing islands in the search space). For instance, in computing the pressure at a particular point in a fluid statics problem, one may use the general plan of applying the principle of equilibrium of forces on the point of interest (the vector sum of the forces = 0). But the application of this plan only reduces the original problem to one of finding and combining the appropriate forces, without hinting how that may be accomplished in a specific problem [5].

- If no specific plans apply, but the problem resembles one solved previously, apply analogical transformation to adapt the solution of that similar past problem to the new situation. For instance, in some studies it has proven easier for students to solve mechanics problems by analogy to simpler solved problems than by appealing to first principles or by applying general procedures presented in a physics text [5].

As an example of analogy involving composite skills rather than pure cognition, consider a person who knows how to drive a car and is asked to drive a truck. Such a person may have no general plan or procedure for driving trucks, but is likely to perform most of the steps correctly by transferring much of his or her automobile driving knowledge. Would that we had robots that were so self-adaptable to new, if recognizably related, tasks!

Clearly, these problem solving approaches are not mutually exclusive: for instance, one approach can be used to reduce a problem to simpler subproblems, which can in turn be solved by the other methods. In fact, Larkin and I [5] are developing a general inference engine for problem solving in the natural sciences that combines all four approaches.
2. Analogy and Experiential Reasoning

The term analogy often conjures up recollections of artificially contrived problems asking "X is to Y as Z is to ?" in various psychometric exams. This aspect of analogy is far too narrow and independent of context to be useful in general problem solving domains. Rather, I propose the following operational definition of analogical problem solving, consistent with past AI research efforts [16, 34, 35, 13, 4, 8].

[Figure 1-1: Problem solving may occur by a) instantiating specific plans, b) analogical transformation to a known solution of a similar problem, c) applying general plans to reduce the problem, d) applying weak methods to search heuristically for a possible solution, or e) a combination of these approaches.]

Definition: Analogical problem solving consists of transferring knowledge from past problem solving episodes to new problems that share significant aspects with corresponding past experience.

In order to make this definition operational, the problem solving method must specify:
o what it means for problems to "share significant aspects",
o what knowledge is transferred from past experience to the new situation,
o precisely how the knowledge transfer process occurs,
o and how analogically related experiences are selected from a potentially vast long term memory of past problem solving episodes.

The remainder of this paper discusses two major approaches to analogical problem solving I have analyzed in terms of these four criteria. The first approach has been successfully implemented in ARIES (Analogical Reasoning and Inductive Experimentation System), and we are actively experimenting with the other approach. This short paper focuses on a comparative analysis of the two methods, rather than discussing implementation techniques or examining our preliminary empirical results.

2.1. Analogical Transformation of Past Solutions

If a particular solution has been found to work on a problem similar to the one at hand, perhaps it can be used, with minor modification, for the present problem. By "solution" I mean only a sequence of actions that, if applied to the initial state of a problem, brings about its goal state. Simple though this process may appear, an effective computer implementation requires that many difficult issues be resolved, to wit:

1. Past problem descriptions and their solutions must be remembered and indexed for later retrieval.
2. The new problem must be matched against large numbers of potentially relevant past problems to find closely related ones, if any. An operational similarity metric is required as a basis for selecting the most suitable past experiences.
3. The solution to a selected old problem must be transformed to satisfy the requirements of the new problem statement.

In order to achieve these objectives, the initial analogical problem solver [8] required a partial matcher with a built-in similarity criterion, a set of possible transformations to map the solution of one problem into the solution to a closely related problem, and a memory indexing mechanism based on a MOPS-like memory encoding of events and actions [28]. The solution transformation process was implemented as a set of primitive transform operators and a means-ends problem solver that searched for sequences of primitive transformations yielding a solution to the desired problem. The resultant system, called ARIES-I, turned out to be far more complex than originally envisioned.
Partial pattern matching of problem descriptions and searching in the space of solution transformations are difficult tasks in themselves. In terms of the four criteria, the solution transformation process may be classified as follows:

1. Two problems share significant aspects if they match within a certain preset threshold in the initial partial matching process, according to the built-in similarity metric.
2. The knowledge transferred to the new situation is the sequence of actions from the retrieved solution, whether or not that sequence is later modified in the analogical mapping process.
3. The knowledge transfer process is accomplished by copying the retrieved solution and perturbing it incrementally, according to the primitive transformation steps, in a heuristically guided manner until it satisfies the requirements of the new problem. (See [8] for details.)
4. The selection of relevant past problems is constrained by the memory indexing scheme and the partial pattern matcher.

Since a significant fraction of problems encountered in mundane situations and in areas requiring significant domain expertise (but not in abstract mathematical puzzles) bear close resemblance to past solved problems, the ARIES-I method proved effective when tested in various domains, including algebra problems and route planning tasks. An experiential learning component was added to ARIES that constructed simple plans (generalized sequences of actions) for recurring classes of problems, hence allowing the system to solve new problems in this class by the more direct plan instantiation approach. However, no sooner was the solution transformation method implemented and analyzed than some of its shortcomings became strikingly apparent. In response to these deficiencies, I started analyzing more sophisticated methods of drawing analogies, as discussed in the following sections.
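To make these four criteria concrete, here is a minimal sketch of one solution transformation cycle, assuming a toy representation of problems as feature sets. The Jaccard similarity metric, the 0.5 threshold, and the single object-substitution transform are illustrative stand-ins for ARIES-I's built-in partial matcher and its full set of primitive transform operators.

```python
def similarity(features_a, features_b):
    # Jaccard overlap of problem features: a stand-in for the
    # built-in partial-matching similarity metric.
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b)

def retrieve(new_problem, memory, threshold=0.5):
    # Return the most similar stored case above the preset threshold.
    if not memory:
        return None
    best = max(memory, key=lambda case: similarity(case["features"],
                                                   new_problem["features"]))
    if similarity(best["features"], new_problem["features"]) >= threshold:
        return best
    return None

def transform(solution, old_problem, new_problem):
    # Perturb the retrieved action sequence: substitute the old problem's
    # objects by their counterparts in the new one. (A single primitive
    # transform; ARIES-I searched over a whole set of such operators.)
    mapping = dict(zip(old_problem["objects"], new_problem["objects"]))
    return [(action, mapping.get(obj, obj)) for action, obj in solution]

# Hypothetical example: a remembered route-planning case reused for a
# new errand with a different destination.
memory = [{"features": {"route", "city", "two-stops"},
           "objects": ["home", "bank"],
           "solution": [("walk-to", "bank"), ("walk-to", "home")]}]
new = {"features": {"route", "city", "two-stops", "rain"},
       "objects": ["home", "post-office"]}

case = retrieve(new, memory)
if case is not None:
    print(transform(case["solution"], case, new))
    # [('walk-to', 'post-office'), ('walk-to', 'home')]
```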
3. The Derivational Analogy Method

In formulating plans and solving problems, a considerable amount of intermediate information is produced in addition to the resultant plan or specific solution. For instance, formulation of subgoal structures, generation and subsequent rejection of alternatives, and access to various knowledge structures all typically take place in the problem solving process. But the solution transformation method outlined above ignores all such information, focusing only upon the resultant sequence of actions and disregarding, among other things, the reasons for selecting those actions. Why should one take such extra information into account? It would certainly complicate the analogical problem solving process, but what benefits would accrue from such an endeavor? Perhaps the best way to answer this question is by analysis of where the simple solution transformation process falls short and how such problems may be alleviated or circumvented by preserving more information from which qualitatively different analogies may be drawn.

3.1. The Need for Preserving Derivation Histories

Consider, for instance, the domain of constructing computer programs to meet a set of pre-defined specifications. In the automatic programming literature, perhaps the most widely used technique is one of progressive refinement [2, 1, 15]. In brief, progressive refinement is a multi-stage process that starts from abstract specifications stated in a high level language (typically English or some variant of first order logic), and produces progressively more operational or algorithmic descriptions of the specification, committing to control decisions, data structures, and eventually specific statements in the target computer language. However, humans (well, at least this writer) seldom follow such a long painstaking process, unless perhaps the specifications call for a truly novel program unlike anything in one's past experience. Instead, a common practice is to recall similar past programs and reconstruct the new programming problem along the same directions. For instance, one should be able to program a quicksort algorithm in LISP quite easily if one has recently implemented quicksort in PASCAL. Similarly, writing LISP programs that perform tasks centered around depth-first tree traversal (such as testing equality of S-expressions or finding the node with maximal value) are rather trivial for LISP programmers but surprisingly difficult for those who lack the appropriate experience.

The solution transformation process proves singularly inappropriate as a means of exploiting past experience in such problems. A PASCAL implementation of quicksort may look very different from a good LISP implementation. In fact, attempting to transfer corresponding steps from the PASCAL program into LISP is clearly not a good way to produce any LISP program, let alone an elegant or efficient one. Although the two problem statements may have been similar, and the problem solving processes may preserve much of the inherent similarity, the resultant solutions (i.e., the PASCAL and LISP programs) may bear little if any direct similarities. The useful similarities lie in the algorithms implemented and in the set of decisions and internal reasoning steps required to produce the two programs. Therefore, the analogy must take place at earlier, more abstract stages of the original PASCAL implementation, and it must be guided by a reconsideration of the key decisions in light of the new situation. In particular, the derivation of the LISP quicksort program starts from the same specifications, keeping the same divide and conquer strategy, but may diverge in selecting data structures (e.g., lists vs. arrays), or in the method of choosing the comparison element, depending on the tools available in each language and their expected efficiency. However, future decisions (e.g., whether to recurse or iterate, what mnemonics to use as variable names, etc.) that do not depend on earlier divergent decisions can still be transferred to the new domain rather than recomputed. Thus, the derivational analogy method walks through the reasoning steps in the construction of the past solution and considers whether they are still appropriate in the new situation or whether they should be reconsidered in light of significant differences between the two situations.

The difference between the solution transformation approach and the derivational analogy approach just outlined can be stated in terms of the operational knowledge that can be brought to bear. The former corresponds to a person who has never before programmed quicksort and is given the PASCAL code to help him construct the LISP implementation, whereas the latter is akin to a person who has programmed the PASCAL version himself and therefore has a better understanding of the issues involved before undertaking the LISP implementation. Swartout and Balzer [31] and Scherlis [25] have argued independently in favor of working with program derivations as the basic entities in tasks relating to automatic programming. The advantages of the derivational analogy approach are quite evident in automatic programming because of the frequent inappropriateness of direct solution transformation, but even in domains where the latter is useful, one can create problems that demonstrate the need for preserving or reconstructing past reasoning processes.

3.2. The Process of Drawing Analogies by Derivational Transformation

Let us examine in greater detail the process of drawing analogies from past reasoning processes. The essential insight is that useful experience is encoded in the reasoning process used to derive solutions to similar problems, rather than just in the resultant solution.
And, a method of bringing that experience to bear in the problem solving process is required in order to make this form of analogy a computationally tractable technique. Here we outline such a method:

1. When solving a problem, by whatever means, store each step taken in the solution process, including:

o The subgoal structure of the problem.
o Each decision made (whether a decision to take action, to explore new possibilities, or to abandon present plans), including:
  o Alternatives considered and rejected.
  o The reasons for the decisions taken (with dependency links to the problem description or information derived therefrom).
  o The start of a false path taken (with the reason why this appeared to be a promising alternative, and the reason why it proved otherwise, again with dependency links to the problem description. Note that the body of the false path and other resultant information need not be preserved; only its end points are kept for future reference).
o Dependencies of later decisions on earlier ones in the derivation.
o Pointers to external knowledge structures that were accessed and that proved useful in the eventual construction of the solution.
o The resultant solution itself.
o In the event that the problem solver proved incapable of solving the problem, the closest approach to a solution should be stored, along with the reasons why no further progress could be made (e.g., a conjunctive subgoal that could not be satisfied).
o In the event that the solution depends, perhaps indirectly, on volatile assumptions not stated in the problem description (such as the cooperation of another agent, or time-dependent states), store all dependencies to such assumptions made by the problem solver.
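A minimal data-structure sketch of the derivation trace that step 1 calls for; the class and field names below are hypothetical, chosen only to mirror the kinds of information listed above.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    choice: str                                     # action or subgoal selected
    reasons: list = field(default_factory=list)     # justifications, as links
                                                    # into the problem description
    rejected: list = field(default_factory=list)    # alternatives not chosen
    failed_start: str = None                        # first step of an abandoned
                                                    # false path (body discarded)
    failure_reason: str = None                      # why the false path failed
    depends_on: list = field(default_factory=list)  # indices of earlier decisions
                                                    # this one relies on

@dataclass
class Derivation:
    problem: dict                                   # original problem description
    decisions: list                                 # ordered Decision records
    solution: list = None                           # resultant actions, if solved
    assumptions: list = field(default_factory=list) # volatile external assumptions
                                                    # the solution rests on
```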
2. When a new problem is encountered that does not lend itself to direct plan instantiation or other direct recognition of the solution pattern, start to analyze the problem by applying general plans or weak methods, whichever is appropriate to the situation.

3. If, after commencing the analysis of the problem, the reasoning process (the initial decisions made and the information taken into account) parallels that of past problem situations, retrieve the full reasoning traces with the same initial segments and proceed with the derivational transformation process. If no such traces are found, consider the possibility of solution transformation analogy or, failing that, proceed with the present line of non-analogical reasoning.

o Two problems are considered similar if their analysis results in equivalent reasoning processes, at least in their initial stages. This replaces the more arbitrary, context-free similarity metric required for partial matching among problem descriptions in drawing analogies by direct solution transformation. Hence, past reasoning traces (henceforth derivations) are retrieved if their initial segments match exactly the first stages of the analysis of the present problem. The retrieved reasoning processes are then used much as individual relevant cases in medicine are used to generate expectations and drive the diagnostic analysis. Reasoning from individual cases has been recognized as an important component of expertise [29], but little has been said of the necessary information that each case must contain. And no simple method has been proposed for retrieving the appropriate cases in a manner that does not rely on arbitrary similarity metrics. Here, I take the stand that reasoning from individual cases requires that the stored analysis of these past cases contain a derivational history that justifies all the decisions taken to arrive at the conclusion. It is also necessary to store pointers to external data that proved useful, a list of alternative reasoning paths not taken, and failed attempts (coupled with both reasons for their failure and reasons for having originally made the attempt). Case-based reasoning is nothing more than derivational analogy applied to domains of extensive expertise. It is important to note that although one may view derivational analogy as an interim step in reasoning from particular past experience as more general plans are acquired, it is a mechanism that remains forever useful, since knowledge is always incomplete and exceptions to the best formulated general plans require that the problem solver reason from past individual reasoning episodes.

4. A retrieved derivation is applied to the current situation as follows: For each step in the derivation, starting immediately after the matched initial segment, check whether the reasons for performing that step are still valid by tracing dependencies in the retrieved derivation to relevant parts of the old problem description or to volatile external assumptions made in the initial problem solving.

o If parts of the problem statement or external assumptions on which the retrieved situation rests are also true in the present problem situation, proceed to check the next step in the retrieved derivation.

o If there is a violated assumption or problem statement, check whether the decision made would still be justified by a different derivation path from the new assumptions or statements. If so, store the new dependencies and proceed to the next step in the retrieved derivation. The ideas of tracing causal dependencies and verifying past inference paths borrow heavily from TMS [10] and some of the non-monotonic logic literature [18]. It should be noted that dependencies, and therefore consistency maintenance, are local to each derivation. Hence verifying justifications is constrained to the information that proved relevant in solving analogically related problems. There is no notion of "global consistency" or global truth maintenance in this formulation. However, the role played by data dependencies in derivational analogy is somewhat different and more constrained than in maintaining global consistency in deductive data bases.
o If the old decision cannot be justified by the new problem situation:
  o evaluate the alternatives not chosen at that juncture and select an appropriate one in the usual problem solving manner, storing it along with its justifications, or
  o initiate the subgoal of establishing the right supports in order for the old decision to apply in the new problem3 (clearly, any problem solving method can be brought to bear in achieving the new subgoal), or
  o abandon this derivational analogy in favor of another more appropriate problem solving experience from which to draw the analogy, or in favor of other means of problem solving.

o If one or more failure paths are associated with the current decision, check the cause of failure and the reasons these alternatives appeared viable in the context of the original problem (by tracing dependency links when required). In the case that their reasons for failure no longer apply, but the initial reasons for selecting these alternatives are still present, consider reconstructing this alternate solution path in favor of continuing to apply and modify the present derivation (especially if quality of solution is more important than problem solving effort).

o In the event that a different decision is taken at some point in the rederivation, do not abandon the old derivation, since future decisions may be independent of some past decisions, or may still be valid (via different justifications) in spite of the somewhat different circumstances. This requires that dependency links be kept between decisions at different stages in the derivation. The lack of an explicit path of links between the decisions indicates that the problem solver believes them to be independent of each other.

o The derivational analogy should be abandoned in the event that a preponderance of the old decisions are invalidated in the new problem situation. Exactly what the perseverance threshold should be is a topic for empirical investigation, as it depends on whether there are other tractable means of solving this problem and on the overhead cost of reevaluating individual past decisions that may no longer be supported and may or may not have independent justification.

5. After an entire derivation has been found to apply to the new problem, store its divergence from the parent derivation as another potentially useful source of analogies, and as an instance from which more general plans can be formulated if a large number of problems share a common solution procedure [8].

3This approach only works if the missing or violated premise relates to that part of the global state under control of the problem solver, such as acquiring a missing tool or resource, rather than under the control of an uncooperative external agent or a recalcitrant environment. The discussion of strategy-based counterplanning gives a more complete account of subgoaling to rectify unfulfilled expectations [3, 6].
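A minimal sketch of the replay loop of step 4, continuing the hypothetical Decision and Derivation records from the earlier sketch; the justification test, the solve_step callback, and the give-up threshold are illustrative placeholders for the fuller procedure described above.

```python
def still_justified(decision, new_problem):
    # A decision carries over if every reason behind it still holds in the
    # new problem description (here: reasons are facts listed in the problem).
    return all(reason in new_problem["facts"] for reason in decision.reasons)

def replay(derivation, new_problem, solve_step, give_up_ratio=0.5):
    """Walk the old derivation; keep justified decisions, re-derive the rest."""
    new_decisions, invalidated = [], 0
    for step in derivation.decisions:
        if still_justified(step, new_problem):
            new_decisions.append(step)        # transfer the old decision
        else:
            invalidated += 1
            # Abandon the analogy if a preponderance of decisions fail.
            if invalidated > give_up_ratio * len(derivation.decisions):
                return None
            # Otherwise re-solve this step in the usual manner (e.g., by
            # evaluating the alternatives the old derivation rejected).
            new_decisions.append(solve_step(step, new_problem))
    return new_decisions
```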
3.3. Efficiency Concerns

An important aspect of the derivational analogy approach is the ability to store and trace dependency links. It should be noted that some of the inherent inefficiencies that would be present if we were to enforce truth maintenance on very large dependency networks, such as would be required to span an entire knowledge base in maintaining a large deductive data base, do not apply to this situation. Since the dependency links are internal to each derivation, with external pointers only to the problem description and to any volatile assumptions necessitated in constructing the resultant solution, the size of each dependency network is quite small compared to a dependency network spanning all of memory. Dependencies are also stored among decisions taken at different stages in the temporal sequence of the derivation, thus providing the derivational analogy process access to causal relations computed at the time the initial problem was solved.

The analogical transformation process is not inherently space inefficient, although it may so appear at first glance. The sequence of decisions in the solution path of a problem are stored, together with necessary dependencies, the problem description, the resultant solution, and alternative reasoning paths not chosen. Failed paths are not stored; only the initial decision that was taken to embark upon that path, and the eventual reason for failure (with its causal dependencies), are remembered. Hence, the size of the memory for derivational traces is proportional to the depth of the search tree, rather than to the number of nodes visited. Problems that share large portions of their derivational structure can be so represented in memory, saving space and allowing similarity-based indexing. Moreover, when a generalized plan is formulated for recurring problems that share a common derivational structure, the individual derivations that are totally subsumed by the more general structure can be permanently masked or deleted. Those derivations that represent exceptions to the general rule, however, are precisely the instances that should be saved and indexed accordingly for future problem solving [14].

3.4. Concluding Remarks

Derivational analogy bears closer resemblance to Schank's reconstructive memory [27, 28] and Minsky's K-lines [21] than to rather more traditional notions of analogy. Although derivational analogy is less ambitious in scope than either of these theories, it is a more precisely defined inference process that can lead to an operational method of reasoning from particular experiential instances. The key notion is to reconstruct the relevant decision making processes of past problem solving situations and thereby transfer knowledge to the new scenario. The knowledge consists of decision sequences and their justifications, rather than individual declarative assertions. To summarize, let us describe the process of derivational analogy in terms of the criteria for analogical reasoning:

1. Two problems share significant aspects if their initial analysis yields the same reasoning steps, i.e., if the initial segments of their respective derivations start by considering the same issues and making the same decisions. An operational test for similarity is identity of the initial decision sequence.
2. Then the derivation of the retrieved solution may be transferred to the new situation, in essence recreating the significant aspects of the reasoning process that solved the past problem.
3. Knowledge transfer is accomplished by reconsidering old decisions in light of the new problem situation, preserving those that apply, and replacing or modifying those whose justifications are no longer valid in the new situation.
4. Problems and their derivations are stored in a large episodic memory organized in a manner similar to Schank's MOPs [28], and retrieval occurs by replication of initial segments of decision sequences recalling the past reasoning process.

4. References

1. Balzer, R., "Imprecise Program Specification," Tech. report RR-75-36, USC/Information Sciences Institute, 1975.
2. Barstow, D. R., Automatic Construction of Algorithms and Data Structures Using a Knowledge Base of Programming Rules, PhD dissertation, Stanford University, Nov. 1977.
3. Carbonell, J. G., "Counterplanning: A Strategy-Based Model of Adversary Planning in Real-World Situations," Artificial Intelligence, Vol. 16, 1981, pp. 295-329.
4. Carbonell, J. G., "A Computational Model of Problem Solving by Analogy," Proceedings of the Seventh International Joint Conference on Artificial Intelligence, August 1981, pp. 147-152.
5. Carbonell, J. G., Larkin, J. H. and Reif, F., "Towards a General Scientific Reasoning Engine," Tech. report, Carnegie-Mellon University, Computer Science Department, 1983, CIP #445.
6. Carbonell, J. G., Subjective Understanding: Computer Models of Belief Systems, Ann Arbor, MI: UMI Research Press, 1981.
7. Carbonell, J. G., "Experiential Learning in Analogical Problem Solving," Proceedings of the Second Meeting of the American Association for Artificial Intelligence, Pittsburgh, PA, 1982.
8. Carbonell, J. G., "Learning by Analogy: Formulating and Generalizing Plans from Past Experience," in Machine Learning, An Artificial Intelligence Approach, R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Tioga Press, Palo Alto, CA, 1983.
9. Clements, J., "Analogical Reasoning Patterns in Expert Problem Solving," Proceedings of the Fourth Annual Conference of the Cognitive Science Society, 1982.
10. Doyle, J., "A Truth Maintenance System," Artificial Intelligence, Vol. 12, 1979, pp. 231-272.
11. Duda, R. O., Hart, P. E., Konolige, K. and Reboh, R., "A Computer-Based Consultant for Mineral Exploration," Tech. report 6415, SRI, 1979.
12. Fikes, R. E. and Nilsson, N. J., "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving," Artificial Intelligence, Vol. 2, 1971, pp. 189-208.
13. Gentner, D., "The Structure of Analogical Models in Science," Tech. report 4451, Bolt Beranek and Newman, 1980.
14. Hayes-Roth, F., "Using Proofs and Refutations to Learn from Experience," in Machine Learning, An Artificial Intelligence Approach, R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Tioga Press, Palo Alto, CA, 1983.
15. Kant, E., Efficiency in Program Synthesis, UMI Research Press, Ann Arbor, MI, 1981.
16. Kling, R. E., "A Paradigm for Reasoning by Analogy," Artificial Intelligence, Vol. 2, 1971, pp. 147-178.
17. Laird, J. E. and Newell, A., "A Universal Weak Method," Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 1983 (submitted).
18. McDermott, D. V. and Doyle, J., "Non-Monotonic Logic I," Artificial Intelligence, Vol. 13, 1980, pp. 41-72.
19. McDermott, D. V., "Planning and Acting," Cognitive Science, Vol. 2, No. 2, 1978, pp. 71-109.
20. McDermott, J., "XSEL: A Computer Salesperson's Assistant," in Machine Intelligence 10, Hayes, J., Michie, D. and Pao, Y-H., eds., Chichester, UK: Ellis Horwood Ltd., 1982, pp. 325-337.
21. Minsky, M., "K-Lines: A Theory of Memory," Cognitive Science, Vol. 4, No. 2, 1980, pp. 117-133.
22. Newell, A. and Simon, H. A., Human Problem Solving, New Jersey: Prentice-Hall, 1972.
23. Nilsson, N. J., Principles of Artificial Intelligence, Tioga Press, Palo Alto, CA, 1980.
24. Polya, G., How to Solve It, Princeton, NJ: Princeton University Press, 1945.
25. Reif, J. H. and Scherlis, W. L., "Deriving Efficient Graph Algorithms," Tech. report, Carnegie-Mellon University, Computer Science Department, 1982.
26. Sacerdoti, E. D., "Planning in a Hierarchy of Abstraction Spaces," Artificial Intelligence, Vol. 5, No. 2, 1974, pp. 115-135.
27. Schank, R. C., "Language and Memory," Cognitive Science, Vol. 4, No. 3, 1980, pp. 243-284.
28. Schank, R. C., Dynamic Memory, Cambridge University Press, 1982.
29. Schank, R. C., "The Current State of AI: One Man's Opinion," AI Magazine, Vol. IV, No. 1, 1983, pp. 1-8.
30. Shortliffe, E., Computer-Based Medical Consultations: MYCIN, New York: Elsevier, 1976.
31. Swartout, W. and Balzer, R., "An Inevitable Intertwining of Specification and Implementation," Comm. ACM, Vol. 25, No. 7, 1982.
32. Wilensky, R., Understanding Goal-Based Stories, PhD dissertation, Yale University, Sept. 1978.
33. Wilensky, R., Planning and Understanding, Addison-Wesley, Reading, MA, 1983.
34. Winston, P., "Learning by Creating and Justifying Transfer Frames," Tech. report AIM-520, AI Laboratory, M.I.T., January 1978.
35. Winston, P. H., "Learning and Reasoning by Analogy," Comm. ACM, Vol. 23, No. 12, 1979, pp. 689-703.
Abstract CJIL’NKER I$ a chcn program that uscq chunked knoulcdgc to ~&IL\ c success. Its domain is a subset of kii;g and pawn endings in chess th,lt has been studied for 01 er 300 years. CI JUN i<I:R has a large library of chunk inst;lnies where cnch chunk type has a property list and each in&jl:ce has a set of \ cllues for these propcrtics. ‘I-his allows CJ-IIJNI<EJ< to rcawn about positiow, that ccmc up in the starch that \~ould othcrn ise h;l\ e to handled by me,ins of additional search. Thus the program is able to sohe the most difficult problem of its present don~am (a problem that Mould require 45 ply of search and on the order of 1Oi3 lears of CPU tlmc to be solved by the bcsr of prcscnt day chess programs) in 18 ply c(nd one minute of CPU time. Further, CHI_!NKtiK is undoubtedly the world’s foremost expert in its domain, and has diTcovercd 2 mist‘tkes in the litcraturc and has been instrumental in discovering a new theorem about the domain that al!o~s the asscssin g of posi:ions with a new dcgrcc of cast and confidence. In this pdpcr- we describe CHUNKER’s structure and performance, and diqcul;s our plans for extending it to play the whole domain of king and pawn cndings.l 1. Int rsductisn I lumans arc known to CAU& chess positions [4, 51, i.e. treat logically rclatcd groups of picccs as units. ‘I‘hcy do this apparently to aid in evaluation of positions. and to suggest strategies for continuing the gan~c. although psychological evidence does not make it clear exactly ho\+ this is dcnc. l‘hcrc has been at least one program [9] that is able to rccognile some chunks, but no successful chess-playing program has ever used a chunking approach. ‘J’hc preycnt work deals with a program, CHUNKER, that parses a posiGon into chunks, and reasons about the position using information obrainzd from chlmk libraries. Thcrc are several chunk libraries, each corresponding to a fixed group of picccs (a chujlk !~yf). Each chunk type has certain propcrtics, and each chunk ir~nrlce (a particular configuration of the chunk type) has values attached to the properties. These’ are used both in c!aluation, by allowing reasoning with chunk prclperty values to product an evaluation of the whole position, and in move sclccuon, to facilitate reaching positions where this is possible. FViWl jts domain CHUKKER is a true expert, playing the positions whh a speed and accuracy that no present human or machine can come close to matching. 2. ‘Tile Domain I‘!:c domair-1 !‘v L!W <tudh k L.;ng ,tnd thrcl: cr:l~nccrcd p~scd pawns (3~.~l’Pi vs. king and 3CPP, . for cx,tmp!c, I-Ygurc 2-1. ‘I his type of ending has a number of ad\anragcs for our pu~poscs: ITigore 2-1: A position in the domain OD Parsing a position into chunks is rclari\cly straightfi)rward. o ‘I he ending is non-tn\ial; errors hat been folrnd in the litcrarure. and c~cn str:Jng plajcrs ha\,c difficulty M ith the ending’s spcci.il intricacies. o A tdhle driven pn,grarn, wit!1 ;I d,ltabnse of all po\irions, is imprsctlcal, r-cquiring about 22” entries. ‘J‘hc most import,lrlt chunk t:gpe in this problem d(jn>cliti is a king vs. 3CPP (Kv3CPP). for cxamplc Figure 2-7 Gi\m ;i pwitwn such as Flrurc 2-1, much can be dctcrmlncd bq discovcrln2 prcipcl tics of the chunks in isolation and rclatmg them to each other. -1 hc manage,iblc sire of the libraries containing propacrtics of the individual chunks is the key ingredient in the success of this approach. 14 igurc 2-2: A chunk in the &main and pawn endings. 
Given that the king wins if all the pawns are captured, while the pawns win if one safely reaches the eighth rank, the battle within a chunk of Kv3CPP can be viewed under various assumptions:

o Sides must alternate moves.
o The king has the option to 'pass', i.e., make a null move.
o The pawns have the option of passing. The passing option corresponds to moving on the other side of the board.

Classifying every Kv3CPP configuration with respect to each assumption allows a great deal to be understood about the chunk. Consider the positions in Figure 2-3. In position 2-3-a, it can be shown that whichever side has the move loses (under the alternating moves assumption). Therefore, it is clear that if one side only has the passing option, that side will win no matter who is to move. In position 2-3-b, it can be shown that the pawns win, whether the king has the passing option or not. Intuitively this means that the pawns are far enough advanced to force a breakthrough. Position 2-3-c illustrates a situation where one of the pawns is lost, even with the passing option.

[Figure 2-3: Some chunk instances]

The other chunk type relevant to this domain is king and 3CPP vs. king (K3CPPvK), where the lone king has the passing option (Figure 2-4). This chunk type corresponds to the situation where a king abandons its battle against the enemy pawns and attempts to support its own pawns.

[Figure 2-4: A K3CPPvK chunk]

Strictly speaking there is a third chunk type in this domain, namely 3CPP unopposed. The only property of interest here is the number of moves required for a pawn to reach the eighth rank. It is a simple matter to calculate this without need of a library.

It is relatively easy to produce libraries for an appropriate set of assumptions. The basic technique [6, 3] could be termed retrograde enumeration, and involves producing an entry for each possible position. The immediate wins are then labelled, and the enumeration proceeds by successively labelling those positions whose values can be determined based on positions they are known to lead to. When a side is trying to win, it is enough to know that one move leads to a win; when a side is lost, all successors must be shown to lose. (A sketch of this technique follows below.) One of our contributions is to use the technique, not on whole board positions (where it can have only limited use since the state space grows exponentially with the number of pieces), but on pre-defined partial board configurations. In this ending, each of the Kv3CPP libraries is 64 Kbytes. These libraries are the alternating-moves (AM) library, the pawns-can-pass (PCP) library, and the king-can-pass (KCP) library. The K3CPPvK library restricts the kings to certain areas of the board, and requires 512 Kbytes.
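The retrograde enumeration sketched below operates over an abstract game graph; the positions, successors, and immediate_win interfaces are assumptions standing in for CHUNKER's actual chunk encodings.

```python
def retrograde(positions, successors, immediate_win):
    """Label each position 'W' (side to move wins), 'L' (loses), or 'D' (draw).

    positions: a list of position encodings.
    successors(p): positions reachable in one move (side to move flips).
    immediate_win(p): True if p is an immediate win for the side to move.
    """
    value = {p: 'W' for p in positions if immediate_win(p)}
    changed = True
    while changed:                          # iterate to a fixed point
        changed = False
        for p in positions:
            if p in value:
                continue
            succ = successors(p)
            # One move to a position lost for the opponent suffices to win.
            if any(value.get(s) == 'L' for s in succ):
                value[p] = 'W'; changed = True
            # If every move reaches a position won for the opponent, p loses.
            elif succ and all(value.get(s) == 'W' for s in succ):
                value[p] = 'L'; changed = True
    # Positions never labelled are draws.
    return {p: value.get(p, 'D') for p in positions}
```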
3. The analysis method

3.1. Level 0: The Base Program

The basis of CHUNKER is a full-width, depth-limited, alpha-beta search. Terminal nodes are those where a pawn has safely reached the eighth rank, stalemates/checkmates, or draws by repetition. Undecided positions at maximum depth are normally scored as draws. Under certain circumstances (king in check, pawn on seventh rank) positions are allowed to quiesce beyond the depth limit. A number of techniques are used to improve searching performance:

o Moves are statically ordered to improve the chances that the best move is considered early in the search. (Alpha-beta achieves optimal performance if the best move is considered first in every position [10].)
o A hash table is used to detect positions that have occurred previously in the search, and if the position has been searched to a sufficient depth, then the stored value in the table entry is used [11, 8].
o Moves that cannot be part of the solution tree are pruned. For example, if White has a passed pawn that is out of reach of the black king, moves of the black king are useless and not considered, unless the king is attempting to support its own pawns.

The base level of CHUNKER is able to evaluate approximately 600 positions per second on a VAX 11/780. Feasible search depths range from about 15-25 ply, depending on the type of position. Since most of the standard positions in the literature require more than 25 ply to reach a conclusion (when delaying tactics are included), the base level of CHUNKER has limited usefulness. The following levels of CHUNKER are each built on the previous levels.

3.2. Level 1

If it could be shown that a particular side is winning (or losing) in both chunks, it would be possible to classify the overall position as a win (or a loss). The AM library provides this capability. Table 3-1 illustrates the cases that can be classified by this method. Each chunk has two associated values, corresponding to the results for White to move and Black to move.

[Table 3-1: Positions decided by the AM library. A 4x4 grid: the columns give the (side to move, winner) status of the white-king/black-pawn chunk, and the rows the same for the black-king/white-pawn chunk; each cell holds a pair of overall results, with '?' marking undecided combinations.]

The labelling of the rows and columns indicates which side is to move and which side wins (e.g., WTM B means White to move, Black wins). The columns indicate these properties for the white-king/black-pawn chunk, and the rows for the black-king/white-pawn chunk. The first entry in a table slot is appropriate if it is White to move, and the second if it is Black to move. If there is not a '?' in the entry at the appropriate location in the table, then the value of the whole position is known. The use of this table allows CHUNKER to terminate searching a branch in many positions (approximately 18% of the total legal state space) that previously required large searches (more than 10^6 nodes) to solve (see Section 4). Of course many types of positions cannot be classified by the above approach, and so positions such as Figure 2-1 remain intractable.
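The Level 1 rule ("winning in both chunks") admits a direct rendering; a minimal sketch, with the chunk-result encoding assumed for illustration:

```python
def level1_classify(wk_bp_winner, bk_wp_winner):
    """Combine library results for the two chunks of a whole position.

    Each argument is 'White', 'Black', or None (the library is
    inconclusive for that chunk under the relevant move assumption).
    """
    if wk_bp_winner is not None and wk_bp_winner == bk_wp_winner:
        return wk_bp_winner   # the same side wins both battles: decided
    return None               # undecided: fall back to search/other levels

# Example: White wins both battles, so the whole position is a White win.
assert level1_classify('White', 'White') == 'White'
assert level1_classify('White', 'Black') is None
```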
3.3. Level 2

It can be demonstrated that 3CPP with the passing option always win against a king if none of the pawns can be safely captured by the king. The PCP library uses this observation to evaluate positions in which one side has lost a pawn. If Side A has lost one pawn (such that the pawns now lose in the AM library), then Side B is given a win if he can keep all 3 of his pawns. This schema can classify about 16% of the total legal positions, but is not entirely disjoint from Level 1.

3.4. Level 3

The KCP library allows CHUNKER to classify positions where the pawns win into two types: those where the pawns are strongly enough placed to force a breakthrough, and those that rely on zugzwang, i.e., the compulsion of the king to move. Only if a position appears as a win for the pawns in the KCP library do the pawns have a forced breakthrough. This knowledge allows many further positions to be terminal nodes in the search. If Player A can force a breakthrough while Player B requires zugzwang to win, Player A simply forces his pawns through; he is no longer under any compulsion to move his king. Approximately 18% of the total positions can be classified in this way, although there is some overlap with Level 1.

3.5. Level 4

Up until this point, CHUNKER has made no use of the numerical values stored in the libraries, only their 'parity'. In attempting to classify positions where both sides can force pawn breakthroughs, it is tempting to assume that each side will push its pawns forward, so that the side that breaks through in the fewest ply is the winner. Unfortunately this fails due to the fact that certain pawn moves require king responses (e.g., the king is placed in check). The solution that allows this type of position to be evaluated is to keep an auxiliary library of spare tempi for each position, i.e., the number of free moves the king has before the pawns break through. Thus allowing the king to pass adds one to the number of spare tempi, while responding to a check does not.
ITach entry includes an estimate (in parcnthcscs) of the starch depth required for a complctc solution. (l‘hosc problems that were solved at level 0 at a lcsscr depth achicvcd this due to the quicsccnce search.) Within the table. the first figure is nodes visited and the parcnthcsi7ed number is the ply limit. Entries containing ‘----’ are intractable (more than lo6 nodes visited). The monotonic improvement across rows is striking3 The addition of a singic new schema for classifi,mg positions can produce speed-ups of 6 orders of magnitude or more by avoiding the searches that would be required without that schema. In many positions, adding a schema products littic or no improvcmcnt in the pcrformancc. This could mean that the solution was already minnnal, or that the new knowlcdgc itern hdd limlted rclcvancc to the given position. When a new schema is rclcvant though, search improvements can be spectacular: there arc many instances of improvcmcnts of over three orders of magnitude. 3. I‘WO excep!lons occur ir: the table Paradoxically. it is possible for alpha-beta search to perform n~orc poorly u!~n knowledge IS added Conqldcr the case where the first mo~c cxamincd In a pnsltloil loses Suppow a search wh a ccriam schema finds this los\, v+hk Lhc search alone cannot (and scores the povtion as a draw) In the !attcr CX% the ;mpro\cd alpha x~luc of 0 (draw) up from UK sarxng \aluc of -1 (assumed loss), allows more cutoffs in the rcmamder of the search. I,cvcl 3 I *eve14 Level 5 108 (6) 185 (8) 60 (4) 223 (8) l(O) 1 (0) 1(O) 13 (2) l(O) 509 (10) 25 (4) 3332 (14) 211513 (18) 108 (6) 185 (8) 60 (4) 2 (1) l(O) 1 (0) 1 (0) 1 (0) l(O) 507 (10) l(O) 1(O) 23962 (18) ‘I‘ablc 4-I: Pcrformancc on ‘I’cst Positions ‘I‘hc cfl;‘ct 01‘ the ILIC~ tahlc on thcsc scarch~‘c should not bc cliscountcd. t:or c~~niplc. position 537. *;carchcd ;\I Ic\,cl 5 withottt the ll~l~,ll r;tblc. Icqulrcd 4?\!,507 norlc\. ;’ I;lctol- OF 17 slower. Other po\ilion~ ,~lstJ sllow Coil\iJcrdiI!c pcrti~:~mancc bcncfits, ccpccially those rcq uiring Iargc sc,~rchcs. ‘l’o E,‘III~ ;I fccl~ng for the importance of ihc rcspcctivc schcmas, the pcrccnt~~g~ ot tcrlnin:ll no&s clascificd by each arc listed below. ‘I’he d&l is dcrivctl from &IC lcvcl 5 starch of‘positio~~ 547 (I:igurc 2-l): 0 tCrl~~iild1 IlOdCS - 6.6% B hash tab!c lookups - 11.8% o level 1 - 06.8% 0 Icvel 2 - 5.3% o lcvcl 3 - 6.6% 0 lcvc14 - 1.7% o Ic~cl 5 - 1.2% Clearly the level 1 schema is dominant in terms of positions cla\~lficd, but this is partly due to the fact that it iS the first to bc tried. Although lcvcl 5 accou;lts for only 1.2% of the positions classified, it produced an almost nine-fold spccdup in the search. A similar improbcmcnt also occurs at Icvcl 4 M ith an additional classification of only 1.7% of the po\itir)ns. ‘I‘hus, the ability to classify cvcn a small adtlitic.~n,ll pcrccntagc of positions can product very large savings in \c,lrch. ‘I‘hi\ lnust bc dcpcndcnl to some dcgrcc on how near the root of the tree the k~~owlcdgc can bc ,lpplicd. WC can only assume this is ~andorn: howc~cr. cnch prublcm clearly cvinccs a point whcrc it begins to bccomc tractable. and from then CJn the addition of knowlcdgc protluccs dramatic impro\cmcnts until what appears to bc a mi$mal search is hit. ‘I’hc above results wcrc obtained using the ‘automatic’ parse of posirions into two chunks of Kv3CPl’. which is the intcndcd thcmc for this scl Of ploblcmS. since WC wanted Lo invcstigatc altcrnatc position parsing?. 
wc ex:mincd the possibility of decomposing 3 position into one chunk of K3CPt’~ K, and another c?f 3ClJP unopposed. ‘I‘he K3CPPvK library contains the number of sp%rrc tempi the lone king has bcforc the pawns force promotion. If the king and pawns in cooperation can fur-cc a pawn through bcforc an ullopposcd pawn promotes in the other chunk, the po\ltion is classified as a win. ‘I’he choice of this altcrnatc chunk parsing is mddc ba\cd on both the result of the position under the u:,t1;11 dccompositictn, and dn cstimatc of the probability of success of such a strategy dctcrmincd from the piccc placamcnt. t:or cx~implc. if the position in 1:igurc 4- 1 is intcrprctcd in the usual way. it is clear that White is lost (see l.c\cl 2). RealiAng that the white pawns arc far advanced gives rlsc to the pobsibility of the altcrnatc parse. which leads to the C(JnC1llSion after a sh(;rt investigation that White can M in by joining his king to the pawns in an attack on the black king. Since the problem set was bolvcd correctly when chunk interaction is igno~ cd. adding intcractio~l to ihc I .cvcl 5 vcr!A)n of C I I ti N K I:R dots not sigiiilic,inlly al‘fcct any of tllc results or scarchcs of ihc tc5t positions. Only position 530 (aboul 5% mm nodes) ,~nd 547 (about 1% mm 110dc\) \howcd ;tny cf’fcct at iill. Of course in some‘ positions (that arc not in the text set, for cxamplc 1;igurc + 1). Lhc possibility of an allcrnntc parsing is csscntial in dctormining the correct solution. ‘I‘hc chess machine I~cllc [7] was run on positions 5.35, 539, 543 and 545. IlCllC is Cill>ilblc Of sc‘irchin 2 130.000- 160,000 IlOdCS ii SCCOlld. In the 3 positions icstcd. Ikllc. t‘lkin g on the order of 2.5 hours per pos~t!on, pl,l)cd ;I correct move in each Cxc. I lowcvcr only in one of the positions wits 19~11~ ;lblc to actually See that [he sclcctcd move forced iI &in (or A dram). I:I filet. WC cstinmtc that Ijcllc would rcquirc’ on the order of lOI gciirs Lo coinplctcly sol\c I-igurc 2- 1. I-or ;I inorc dct,lilcd discussion of I%cllc’s pcrtiJrmancc on ihcsc positions. see [2]. It should bc noted that. though Hcllc v,ould probably bc able to play correctly the 4 position\ prcscntcd to it. it is cxtrcmely doubtful if it Collld Correctly C\;lllliltC them if rhcy occur-red deep in it\ \c,trch tree. I’urthcr. in position 547 (Izigiirc 2-l) whcrc a smell misstep such as initi,ll P-QX3 (113) turns ;I winning pot,ltion into a losing one, it would scc!:n that 011Iy ;I “L1mHy’ Of wll4t thC ending iS all ilbollt, Such as gained by our schcmas. would \ufficc to play correctly. 52 ITigure 4-I: A position that requires an altcrn;\tc chunk parse I>uring the course of this rcsc‘uch. CHUNK t:l< discovcrcd errors in rhe solutionr [I] to problems 546 and 5-17. I II position 546 the kings do not have to oscill:ltc bctwccn the dcsignatcd \quarcs (f3/f4. cS/cG) to draw: both \idcs hil\c altcrnatc mcthotls ol‘dr,lwing. In position 547 1. I’-QR4 (‘14) does not win as st,~tcd in the book. It ,~ctually allows the clcvcr setting up of a % configuration, Lhc importance of which is clearly not irpprcci>ltcd by the authors. 5. Generalization 5.1. The analysis method While the schcmc of chunkcd knowlcdgc with propcrtics, cxploitcd by search, works cxtrcmcly well in the gi\ cn dom,lin, the question should bc asked as to how gcncral this schcmc is. In the gi\cn domain thcrc arc only three chunk tqpc~ ,111d the problem (Jf p;lrsing a board position into chunk\ is simplified due to the fact ~hn~ thcrc arc only three pocsiblc position decompositions. 
We expect to generalize the basic scheme to the whole domain of king and pawn endings. To do this many more chunk libraries will be required, and these will require many new properties. New reasoning schemas that correspond to the needs of particular positions will be developed to use the new properties. We consider that a moderate set of such schemas will suffice for this domain, so that it will not be necessary to have a general reasoning engine. The ultimate analysis method in CHUNKER will be a search through a set of alternative parsings of a position, rather than the standard search through a set of states of the domain. When a parsing does not lead to an immediate evaluation, further searching must be done, as in the present work. Typically both players will have the option of enforcing certain parsings on a position. The search through alternate parsings terminates when it is shown that one player can win (or draw) against any position decomposition chosen by the opponent. A detailed account of analysis/search methods and libraries can be found in [2].

5.2. Libraries

We expect to build a permanent set of libraries dealing with configurations that are frequently encountered. These would include typical battles between pawn formations, for example king and pawn vs. king, or two pawns facing two. Relations other than those used in the present work would include number of moves to establish a passed pawn, spare moves available without losing material, invasion points for the opposing king, etc. Some types of chunks that could occur are so complex and rare that to build libraries for all possibilities would require unrealistically large memories. For such chunks (containing doubled pawns, for instance) it will be necessary to do the analysis to produce properties on the fly. However, once such an analysis is done, these properties can be retained in a temporary chunk library for the duration of the solution process.

5.3. Search

In the present domain CHUNKER performs adequately using a depth-limited, depth-first alpha-beta search with hash-table support. In a more general case, the search may be oriented to specific purposes, started from various sub-trees, and restricted in format. It is expected that some form of search will be necessary for most positions. However, positions very likely fall into two classes: 1) those where it appears possible to make a determination of the exact game-theoretic value of the position, and 2) those where general principles have to be invoked to find a likely value and a most-likely-best move. In the first case, chunk schemas and search should be able to handle the problem. In the second case an evaluation polynomial in the properties, in conjunction with a search, will be needed to give a direction to the solution process when it is not expected to reach any known goal. This method is also necessary for defensive play in losing positions, since if all losing positions are classified as equal there will be no criterion for the defense to put up a fight.

6. Summary and Conclusions

We have presented a new methodology, one for allowing a program to manipulate properties of chunks found in chunk libraries in order to evaluate whole chess positions.
While it has been known that humans use chunked knowledge to come to grips with the complexities of a chess position, no effective method for doing this has previously been demonstrated. This is also apparently true for other domains of even moderate complexity. One of the difficulties that "practical" AI systems that reason have had is that the "facts" that they reason from are almost always ad hoc collections input by the system designers. This creates great difficulties both in maintaining the consistency of a set of "facts" and in producing adequate coverage of the domain in the face of the difficulties in maintaining consistency. Our method avoids both these problems. By generating our facts from an exhaustive search, we guarantee accuracy for the facts. This makes the conclusions derived from them completely trustworthy, and would, in principle, allow the building of several levels of reasoning upon such a structure.4

We demonstrate a simple instance of our method on a subset of king and pawn endings in chess. The higher level of abstraction due to chunking provides a framework for powerful reasoning schemas in the domain. It is quite remarkable to see the effect of the additional concentration of knowledge on the tractability of the problems in the test set. As additional facts and reasoning schemas are invoked, even the extremely complex base position recedes to a point where one minute of CPU time reveals all its mysteries, an 18 order of magnitude speed-up.

We have shed some light on the role that chunks have in problem solving. It was previously known that chunks are a way of breaking up images into component parts. In a large domain, it is more reasonable to catalog parts than total images which will probably never be encountered again. However, the computational utility of chunking was not clear. In our method, chunks and the relations among them serve as data for a problem solver. While this combination may use more time than looking up of images, in a large domain there is no meaningful alternative. The nature of a chunk is apparently determined by functional considerations.5 These functional considerations define the information needed for the problem solving process. In the end, a chunk name becomes a slot to which properties and their values can be attached.

4While this method appears to be quite general, we make no claim that it is ready for implementation in more than a few select domains. Clearly, there must be a way of defining useful chunk boundaries and a method of generating property values before the enterprise can be undertaken.

5In the present case, a chunk is an assemblage of pawns (possibly supported by a king) versus a king, required to assess the outcome of a battle.

The method can be extended to more complicated pawn endings [2]. This involves the creation of additional chunk libraries with new properties, and the extension of the search and reasoning methods to take advantage of these. We estimate that the domain will be covered with on the order of 60 to 100 reasoning schemas and approximately 30 properties that would be employed in the schemas. The program that has been developed in the course of this research is the world's foremost expert in its restricted domain and has found two corrections in the existing literature, made it possible to develop a new theorem about its domain,
and allowed the composition of a number of worthwhile additions to the existing literature. Despite the fact that both present authors are expert chess players and have had quite a bit of exposure to the domain of this study, both from books and from the results of computing experiments, CHUNKER regularly outperforms its creators in novel situations. This augurs well for the promise of the technique.

References

1. Averbakh, Y. and Maizelis, I. Pawn Endings. B.T. Batsford Ltd., 1974.
2. Berliner, H. and Campbell, M. Using chunking to solve chess pawn endgames. Carnegie-Mellon University, April 1983.
3. Bramer, M. A. Computer-generated databases for the endgame in chess. The Open University, October 1978.
4. Chase, W. G. and Simon, H. A. "Perception in Chess." Cognitive Psychology 4, 1 (January 1973).
5. Chase, W. G. and Simon, H. A. The Mind's Eye in Chess. In Visual Information Processing, Chase, W. G., Ed., Academic Press, 1973, ch. 8, pp. 215-281.
6. Clarke, M. R. B. A Quantitative Study of King and Pawn against King. In Advances in Computer Chess 1, Clarke, M. R. B., Ed., Edinburgh University Press, 1977.
7. Condon, J. H. and Thompson, K. Belle Chess Hardware. In Advances in Computer Chess 3, Clarke, M. R. B., Ed., Pergamon Press, 1982.
8. Marsland, T. A. and Campbell, M. "Parallel Search of Strongly Ordered Game Trees." Computing Surveys 14, 4 (December 1982), 533-551.
9. Simon, H. A. and Gilmartin, K. "A Simulation of Memory for Chess Positions." Cognitive Psychology 5 (1974), 29-46.
10. Slagle, J. and Dixon, J. "Experiments with some programs that search game trees." JACM 16, 2 (April 1969), 189-207.
11. Slate, D. and Atkin, L. CHESS 4.5 - The Northwestern University chess program. In Chess Skill in Man and Machine, Frey, P., Ed., Springer-Verlag, 1977, ch. 4.
KRYPTON: Integrating Terminology and Assertion

Ronald J. Brachman, Fairchild Laboratory for Artificial Intelligence Research
Richard E. Fikes, Xerox Palo Alto Research Center
Hector J. Levesque, Fairchild Laboratory for Artificial Intelligence Research

Abstract

The demands placed on a knowledge representation scheme by a knowledge-based system are generally not all met by any of today's candidates. Representation languages based on frames or semantic networks have intuitive appeal for forming descriptions but tend to have severely limited assertional power, and are often fraught with ambiguous readings. Those based on first-order logic are less limited assertionally, but are restricted to primitive, unrelated terms. We have attempted to overcome these limitations in a new, hybrid knowledge representation system, called "KRYPTON". KRYPTON has two representation languages, a frame-based one for forming domain-specific descriptive terms and a logic-based one for making statements about the world. We here summarize the two languages, a functional interface to the system, and an implementation in terms of a taxonomy of frames and its interaction with a first-order theorem prover.

§1 Introduction

Each of the currently predominant styles of knowledge representation seems less than perfect when confronted with the wide range of tasks required of a representation system [2]. For example, frame-like structures are quite popular, but are often imprecisely specified, have limited assertional capabilities (e.g., they tend to be limited to 'instantiation' and defaults), and tend to allow accidental assertions associated with the mere presence or absence of data structures [1]. Logic-based systems are usually more precise and address a much wider range of assertional capabilities, but are limited to primitive, independent predicates; this limitation makes it impossible to syntactically compose new predicates out of old ones.

A productive view would seem to embrace both styles of representation in an attempt to capitalize on the strong points of each. Over the past year, we have been designing and implementing an experimental knowledge representation system, called "KRYPTON", with exactly this goal in mind.

While the general idea of a hybrid system is not new, KRYPTON is not typical of the predicate calculus/semantic net marriages attempted before. It does not encode logical assertions in network form (as in [4]), nor does it use simultaneous predicate calculus and network representations of the same facts (as in [3] and [12]). Instead, KRYPTON distinguishes between terminological structure, expressed in a frame-like taxonomic style, and assertion, expressed in a form of predicate calculus. This distinction yields two main components for our representation system: a terminological one (or 'TBox') and an assertional one (or 'ABox'). The TBox allows us to establish taxonomies of structured terms and answer questions about analytical relationships among these terms; the ABox allows us to build descriptive theories of domains of interest and to answer questions about those domains.

Given its division of representational labor, KRYPTON can afford to take a strict view of the constructs in the two languages. The expressions in the TBox language are used as structured descriptions, and have no direct assertional import. Moreover, the ABox language is used strictly for assertions: even universally quantified bi-conditionals have no special definitional import.
In what follows, we will describe in more detail the TBox and ABox languages, the operations that are available in KRYPTON, and how these operations are being implemented.

§2 Two languages for representation

The two components in KRYPTON reflect the two kinds of expressions it uses to represent knowledge: (nominal) terms and sentences. The TBox contains the formal equivalent of indefinite noun phrases such as "a person with at least 3 children", and understands that this expression is subsumed by (the formal version of) "a person with at least 1 child", and is disjoint from "a person with at most 1 child". The subsumption and disjointness relationships among these terms are based only on their structure and not on any (domain-dependent) facts. The ABox, on the other hand, operates with the formal equivalent of sentences such as "Every person with at least 3 children owns a car", and understands the implications (in the logical sense) of an assertion such as this one. Here we review the formal constructs that make up the TBox and ABox languages.

2.1 The language of the terminological component

Our TBox language attempts to capture the essence of frames within a compositional and strictly definitional framework, without the ambiguities and possible misinterpretations common in existing frame languages.(1) In particular, the TBox supports two types of expressions: Concept expressions, which correspond roughly to frames, and Role expressions, the counterparts of slots. Thus, the language is defined by a small set of Concept- and Role-forming operators.

In general, Concepts and Roles are formed by combining or restricting other Concepts and Roles. For example, the language includes an operator ConjGeneric ('conjoined generic'), which takes any number of Concepts and forms the Concept corresponding to their conjunction. This operator could be used to define the symbol(2) bachelor by assigning it the expression (ConjGeneric unmarried-person man) (assuming that the symbols unmarried-person and man had appropriate definitions as Concepts). Concepts can also be formed by restricting other Concepts using Roles. For example, KRYPTON has a VRGeneric ('value-restricted generic') operator that takes a Concept c1, a Role r, and a Concept c2, and yields the term meaning "a c1 all of whose r's are c2's", as in (VRGeneric person child bachelor) for "a person all of whose children are bachelors". The language also has an NRGeneric ('number-restricted generic') operator that restricts the cardinality of the set of fillers for a given Role, as in (NRGeneric person child 1 3) for "a person with at least 1 and not more than 3 children".

Roles, like Concepts, can be defined as specializations of other Roles. One basic Role specialization operator, VRDiffRole ('value-restricted differentiation'), takes a Role r and a Concept c, and defines the derivative Role corresponding to the phrase "an r that is a c". Thus, this operator would be appropriate for defining son, given already defined terms child (a Role) and man (a Concept), as (VRDiffRole child man). Another Role-forming operator, RoleChain, allows Roles to be functionally composed, so that (RoleChain parent brother-in-law) could be used as the definition of uncle.

(1) The TBox is, in large part, a distillation of KL-ONE [13], which accounts for our use below of terms like "value restriction" and "differentiation".
(2) As we will describe below, expressions can be assigned as definitions to atomic symbols. This use of defined symbols, however, is purely for the convenience of the user.
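To make the compositional flavor of these operators concrete, here is a small sketch of how such term expressions might be represented as nested data. This is our illustration in Python, not KRYPTON code (which is built in Interlisp-D); the constructors mirror the operators just described, while the vocabulary symbols (person, child, and so on) are simply the paper's examples, and the variable names are our own.

    # Sketch only: TBox term expressions as tagged tuples. This is an
    # assumed encoding, not KRYPTON's actual data structures.

    def ConjGeneric(*concepts):        return ("ConjGeneric",) + concepts
    def VRGeneric(c1, role, c2):       return ("VRGeneric", c1, role, c2)
    def NRGeneric(c, role, n1, n2):    return ("NRGeneric", c, role, n1, n2)
    def VRDiffRole(role, c):           return ("VRDiffRole", role, c)
    def RoleChain(*roles):             return ("RoleChain",) + roles

    # The paper's examples, built compositionally:
    bachelor = ConjGeneric("unmarried-person", "man")
    son      = VRDiffRole("child", "man")
    uncle    = RoleChain("parent", "brother-in-law")

    # "a person all of whose children are bachelors"
    careful_parent = VRGeneric("person", "child", bachelor)
    print(careful_parent)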
All of the term-forming operators can be composed in the obvious way, as in, for example,

    (VRGeneric (ConjGeneric unmarried-person man)
               (VRDiffRole sibling man)
               (NRGeneric person (RoleChain child child) 1 ∞)).

This expression can be read as "a bachelor whose brothers have grandchildren" or, more literally, "an unmarried person and a man, all of whose siblings that are men are persons whose children have, among them, between 1 and ∞ children".

In many domains one wants to be able to give necessary but not sufficient conditions for a definition. To this end, KRYPTON includes facilities for specifying 'only-if' definitions. The PrimGeneric and PrimRole operators are used to form primitive specializations of a Concept or Role. A primitive Concept is one that is subsumed by its superconcept, but where no sufficient conditions are given for determining if something is described by it. However, two primitive sub-Concepts of a Concept are not by definition disjoint, nor do they necessarily exhaust that Concept. To introduce a pair of Concepts (or Roles) that do have these properties, the KRYPTON decomposition operators (DecompGeneric or DecompRole) would be used.

To see how some of the TBox operators relate to a more traditional language of frames, consider the following description of a family (in a hypothetical 'framese'):

    family
      isa:    social-structure
      father: a man
      mother: a woman
      child:  persons

Taking this frame as a description of "a social structure with, among other things, a father who is a man, a mother who is a woman, and some number of children, all persons", we might express it in KRYPTON as

    (PrimGeneric
      (ConjGeneric
        (NRGeneric (VRGeneric social-structure father man) father 1 1)
        (NRGeneric (VRGeneric social-structure mother woman) mother 1 1)
        (VRGeneric social-structure child person))).

We have considered a wide range of operators for the TBox language; the principal ones in the current version are summarized in Table 1.

Table 1. The TBox Language.

    Expression                        Interpretation                        Description
    Concepts:
    (ConjGeneric c1 ... cn)           "a c1 and ... and a cn"               conjunction
    (VRGeneric c1 r c2)               "a c1 any r of which is a c2"         value restriction
    (NRGeneric c r n1 n2)             "a c with between n1 and n2 r's"      number restriction
    (PrimGeneric c i)                 "a c of the i-th kind"                primitive Concept
    (DecompGeneric c i j disjoint?)   "a c of the i-th type from the
                                       j-th [disjoint] decomposition"       decomposition
    Roles:
    (VRDiffRole r c)                  "an r that is a c"                    differentiation
    (RoleChain r1 ... rn)             "an rn of ... of an r1"               composition
    (PrimRole r i)                    "an r of the i-th kind"               primitive Role
    (DecompRole r i j disjoint?)      "an r of the i-th type from the
                                       j-th [disjoint] decomposition"       decomposition

2.2 The language of the assertional component

As with the expressions of the TBox language, the sentences of the ABox language are constructed compositionally from simpler ones. The concerns behind the choice of sentence-forming operators, however, are quite different from those motivating the ones for forming TBox terms. As discussed in [2], the issue of expressive power in an assertional representation language is really the issue of the extent to which incomplete knowledge can be represented. Moreover, this issue can be seen to motivate the standard logical sentential constructs of disjunction, negation, and existential quantification (see also [9]).
So to provide the ability to deal systematically with incomplete knowledge, and to compensate for the fact that the TBox has been purged of any assertional ability, our ABox language is structured compositionally like a first-order predicate calculus language. In other words, the sentence-forming operators are the usual ones: Not, Or, ThereExists, and so on.

The major difference between our ABox language and a standard first-order logical one lies in the atomic sentences. The non-logical symbols of a standard logical language, that is, the predicate symbols (and function symbols, if any), are taken to be independent, primitive, domain-dependent terms. In our case, there is already a facility for specifying a collection of domain-dependent terms, namely the TBox. Our approach, therefore, is to make the non-logical symbols of the ABox language be the terms of the TBox language. As observed by Hayes [5] and Nilsson [10], when the language of frames and slots is 'translated' into predicate calculus, the frames and slots become one- and two-place predicates respectively. The main difference between what they are suggesting and what we have done is that our predicates are not primitive but are definitionally related to each other (independent of any theory expressed in the ABox language).(3)

(3) The status of function symbols and predicate symbols with more than two arguments is unclear at present. There is no problem incorporating them as primitives into the ABox language; the issue is what facilities there should be for relating them definitionally to other terms in the TBox.

§3 Operations on the components

Overall, the structure of KRYPTON can be visualized as in Figure 1: a TBox of roughly KL-ONE-ish terms organized taxonomically, an ABox of roughly first-order sentences whose predicates come from the TBox, and a symbol table maintaining the names of the TBox terms so that a user can refer to them. However, this is a somewhat misleading picture, since it suggests that users can manipulate these structures directly. In fact, a user does not get access to either a network in the TBox or to a collection of sentences in the ABox. What a user does get instead is a fixed set of operations over the TBox and ABox languages. All interactions between a user and a KRYPTON knowledge base are mediated by these operations.

[Figure 1. KRYPTON overview.]

The operations on a KRYPTON knowledge base can be divided into two groups: the TELL operations, used to augment a knowledge base, and the ASK operations, used to extract information. In either case, the operation can be definitional or assertional. In terms of the ABox, the TELL operation takes an ABox sentence and asserts that it is true. The effect, roughly speaking, is to change the knowledge base into one whose theory of the world implies that sentence. The corresponding ASK operation takes a sentence and asks if it is true. The result is determined on the basis of the current theory held by the knowledge base and the vocabulary used in the sentence, as defined in the TBox. Schematically, we can describe these operations by

    ABox:  TELL: KB x SENTENCE -> KB                       Sentence is true.
           ASK:  KB x SENTENCE -> {yes, no, unknown}       Is sentence true?

As for the TBox, the TELL operation takes a symbol and associates it with a TBox term (Concept or Role expression). The effect is to change the knowledge base into one whose vocabulary includes the symbol, defined by the term. We have focused on two ASK operations: the first asks whether one TBox term subsumes another, and the second whether one TBox term is conceptually disjoint from another. Schematically, this gives us

    TBox:  TELL: KB x SYMBOL x TERM -> KB
           ASK1: KB x TERM x TERM -> {yes, no}             Does term1 subsume term2?
           ASK2: KB x TERM x TERM -> {yes, no}             Is term1 disjoint from term2?

The TBox ASK operations allow a user to inquire about the meaning of the domain-dependent terms being used (without also folding in what is known about the world). In addition, the ABox uses these operations to understand the non-logical terms in the sentences it processes.
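The functional flavor of this interface can be suggested with a small stub. The sketch below is our own illustration, not KRYPTON code: it fixes only the signatures of the operations above, with the bodies left abstract, since the paper deliberately hides the implementing structures behind the interface.

    # Sketch of the KRYPTON functional interface (signatures only).
    # The class and method names are assumptions for illustration; the
    # paper specifies the operations, not their implementation.

    class KnowledgeBase:
        def tell_abox(self, sentence):
            """Assert an ABox sentence; yields a KB whose theory implies it."""
            raise NotImplementedError

        def ask_abox(self, sentence):
            """Return 'yes', 'no', or 'unknown' for an ABox sentence."""
            raise NotImplementedError

        def tell_tbox(self, symbol, term):
            """Define symbol by a TBox term, extending the KB's vocabulary."""
            raise NotImplementedError

        def ask_subsumes(self, term1, term2):
            """Does term1 subsume term2? ('yes' or 'no')"""
            raise NotImplementedError

        def ask_disjoint(self, term1, term2):
            """Are term1 and term2 conceptually disjoint? ('yes' or 'no')"""
            raise NotImplementedError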
Of course there have to be additional ASK operations on a knowledge base. For instance, the ones that we have mentioned so far provide no way of getting other than a yes/no answer. In the case of the ABox, we have to be able to find out what individuals have a given property; in the TBox, there has to be some way of getting the information from the definitions that is not provided by the subsumption and disjointness operations (e.g., the fact that the number of angles of a triangle is necessarily 3).

It is important to stress that the service provided by KRYPTON as a knowledge representation system is completely specified by a definition of the TELL and ASK operations.(4) In particular, the notions of a taxonomy or a set of first-order clauses in normal form are not part of the interface provided by the system. The actual symbolic structures used by KRYPTON to realize the TELL and ASK operations are not available to the user. While it might be useful to think of a knowledge base as structured in a certain way, this structure can only be inferred from the system's behavior. One might consider new operations that allow finer-grained distinctions to be made among knowledge bases,(5) allowing even more of its structure to be deduced, but again it will be the operations that count, not the data structures used to implement them.

One interesting property of this approach to knowledge representation is that there is a difference conceptually between what a system can be told (involving expressions in the language used as arguments to TELL) and what it has to actually remember (involving data structures in the implementation language). This allows us to consider an interface notation that is, in some sense, more expressive than the implementation one, provided that the difference can be accounted for in the translation. In [6], a modal 'auto-epistemic' language is used to supply arguments to TELL and yet the resulting knowledge is always represented in first-order terms.

(4) In [6], we present a definition of TELL and ASK that is completely independent of how the knowledge is represented in a knowledge base and is based instead on the set of worlds that are compatible with what is known.
(5) One possibility, for example, is an ABox ASK operator that only performs some form of limited inference over what is known.
§4 Building KRYPTON

Having discussed the desired functionality of the KRYPTON knowledge representation system, we now sketch in general terms how we are building a system with these capabilities.

4.1 Making an ABox

The first thing to notice about an implementation of the ABox is that because of the expressive power of the assertional language, very general reasoning strategies will be needed to answer questions. Specifically, we cannot limit ourselves to the special-purpose methods typical of frame-based representation systems. For example, to find out if there is a cow in the field, it will not be sufficient to locate a representational object standing for it (i.e., an instantiation of the cow-in-the-field Concept), since, among other things, we may not know all the cows or even how many there are. Yet, we may very well have been told that Smith owns nothing but cows and that at least one of his animals has escaped into the field.

The second point worth noticing about the ABox is that if the predicate symbols of the ABox language are indeed TBox terms, then the ABox reasoner needs to have access to the TBox definitions of those terms. For example, once told that Elsie is a cow, the ABox should know that Elsie is an animal and is not a bull. However, these facts are not logical consequences of the first one, since they depend on how the Concept cow is defined in the TBox. In general, the issue here is that the ABox predicates are not simply unconnected primitives (as in first-order logic), so that if we want to use standard first-order reasoning techniques, we have to somehow make the connections implied by the TBox.

Conceptually, the simplest way to make the TBox-ABox connection is to cause the act of defining a term in the TBox to assert a sentence in the ABox, and then to perform standard first-order reasoning over the resultant expanded theory. For example, after defining the Concept cow we could automatically assert sentences saying that every cow is an animal and that cows are not bulls, as if these were observed facts about the world. As far as the ABox is concerned, the definition of a term would be no more than the assertion of a 'meaning postulate'.(6)

(6) Indeed, by far the most common rendering of definitions in systems based on first-order logic is as assertions of a certain form (universally quantified bi-conditionals), a treatment which fails to distinguish them from the more arbitrary facts that happen to have the same logical form.
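As a purely illustrative rendering of this meaning-postulate route (the sentence notation and helper names below are assumptions, not KRYPTON's), defining cow could trigger assertions like these:

    # Sketch: turning a TBox definition into ABox 'meaning postulates'.
    # The assert_abox callback and clause notation are invented for
    # illustration; the point is only that each definition becomes
    # ordinary assertions over which first-order reasoning is performed.

    def define_cow_as_meaning_postulates(assert_abox):
        assert_abox("forall x. cow(x) -> animal(x)")      # every cow is an animal
        assert_abox("forall x. cow(x) -> not bull(x)")    # cows are not bulls

    define_cow_as_meaning_postulates(print)   # here we just print the postulates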
In some sense, this would yield a 'hybrid' system like the kind discussed in [3] and [12], since we would have two notations stating the same set of facts. Our goal, however, is to develop an ABox reasoner that avoids such redundancies, maintains the distinction between definitional and assertional information, and provides a significant gain in efficiency over simply asserting the meaning postulates as axioms.(7) To this end, we are developing extensions of standard inference rules that take into account dependencies among predicates derivable from TBox definitions.

For example, the reasoning of a standard resolution theorem prover depends on noticing that an occurrence of ψ(z) in one clause is inconsistent with ¬ψ(z) in another. Given that realization, the two clauses can be used to infer a resolvent clause. The scope of this inference rule can be expanded by using subsumption and disjointness information from the TBox as an additional means of recognizing the inconsistency of two literals.(8) That is, since triangle and rectangle are disjoint, triangle(z) and rectangle(z) are inconsistent; and since polygon subsumes rectangle, ¬polygon(z) and rectangle(z) are inconsistent.

The situation is complicated by the fact that TBox definitions also imply 'conditional' inconsistencies. For example, assume that rectangle has been defined as (VRGeneric polygon angle right-angle). The literal polygon(z) is inconsistent with ¬rectangle(z) only when all the angles of z are right angles. In such cases, the clauses containing the conditionally inconsistent literals can still be resolved, provided that we include the negation of the condition in the resolvent. Thus, if the TBox is asked whether polygon is disjoint from ¬rectangle, it should answer, in effect, "only when all the angles are right angles".

4.2 Making a TBox

Since we take the point of view that an ABox reasoner has to be able to access TBox subsumption and disjointness information between steps in a deduction, we have to be very careful about how long it takes to compute that information. Absolutely nothing will be gained by our implementation strategy if the TBox operations are as hard as theorem proving; we could just as well have gone the meaning postulate route. We are taking three steps to ensure that the TBox operations can be performed reasonably quickly with respect to the ABox.

The first and perhaps most important limit on the TBox operations is provided by our TBox language itself. One can imagine wanting a language that would allow arbitrary 'lambda-definable' predicates to be specified. The trouble is that no complete algorithm for subsumption would then be possible, much less an efficient one. By restricting our TBox language to what might be called the 'frame-definable' predicates (in terms of operators like those we have already discussed; see Table 1), we stand at least a chance of getting a usable algorithm while providing a set of term-forming facilities that have been found useful in AI applications.

The situation is far from resolved, however. The computational complexity of term subsumption seems to be very sensitive to the choice of term-forming operators. For example, it appears that given our TBox language without the VRDiffRole operator, the term subsumption algorithm will be O(n²) at worst; with the VRDiffRole operator, however, the problem is as difficult as propositional theorem proving [7].

(7) There are already precedents in the theorem-proving literature (see [8] and [11]) for using special information about subsumption and disjointness of predicates as a way of guiding a proof procedure.
(8) See [16] for a similar approach to augmenting resolution by 'building-in' a theory.

As a second step towards fulfilling this efficiency requirement for the TBox, we have adopted a caching scheme in which we store subsumption relationships for symbols defined by the user in a tree-like data structure. We, in effect, are maintaining an explicit taxonomy of the defined symbols. We are also developing methods for extending this cache to include both absolute and conditional disjointness information about TBox terms. The key open question regarding these extensions is how to determine a useful subset of the large number of possible conditional relationships that could be defined between the symbols.
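The cache just described can be pictured as an explicit subsumption graph over defined symbols. Here is a minimal sketch of that idea (our illustration only; the names and structure are assumptions, not KRYPTON's): new symbols are stored with their known subsumers, so repeated subsumption queries become lookups rather than recomputations.

    # Sketch of a subsumption cache over defined symbols: subsumes(a, b)
    # is answered from cached edges plus transitivity. This is an assumed
    # miniature, not the system's data structure.

    class Taxonomy:
        def __init__(self):
            self.parents = {}       # symbol -> set of directly subsuming symbols

        def define(self, symbol, direct_subsumers):
            self.parents[symbol] = set(direct_subsumers)

        def subsumes(self, a, b):
            """Cached query: does a subsume b? (reflexive, transitive)"""
            if a == b:
                return True
            return any(self.subsumes(a, p) for p in self.parents.get(b, ()))

    t = Taxonomy()
    t.define("polygon", [])
    t.define("rectangle", ["polygon"])
    t.define("square", ["rectangle"])
    print(t.subsumes("polygon", "square"))   # True, by transitivity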
As a final step towards an efficient TBox, we have adopted the notion of a classifier, much like the one present in KL-ONE [14], wherein a background process sequentially determines the subsumption relationship between a newly defined symbol and each symbol for which it is still unknown. Because the taxonomy reflects a partial ordering, we can incrementally move the symbol down towards its correct position. The overall effect of this classification scheme is that the symbol taxonomy slowly becomes more and more informed about the relationship of a symbol to all the other defined symbols.

One very important thing to notice about this implementation strategy based on a taxonomy and classification is that it is precisely that: an implementation strategy. The meaning of the TBox language and the definition of the TBox operators do not depend at all on the taxonomy or on how well the classifier is doing at some point.

§5 Conclusion

The KRYPTON system represents an attempt to integrate frame-based and logical facilities in such a way as to minimize the disadvantages of each. To do this, KRYPTON separates the representation task into two distinct components: a terminological and an assertional one. The terminological component supports the formation of structured descriptions organized taxonomically; the assertional component allows these descriptions to be used to characterize some domain of interest. The terminological component is a distilled version of frames, characterized by a set of compositional term-forming operators, each with clear import. The assertional component utilizes the power of standard first-order logic for expressing incomplete knowledge. These two components interact through a functionally-specified interface; the user has no access to the structures used to implement it.

An implementation of a KRYPTON system in Interlisp-D is underway. As of this writing, we have implemented the operations of the terminological component using the taxonomy/classification methodology discussed above, and are currently investigating its interaction with a version of the theorem-prover described in [15].

References

[1] Brachman, R. J., Fikes, R. E., and Levesque, H. J., "KRYPTON: A Functional Approach to Knowledge Representation," to appear in IEEE Computer, September, 1983.
[2] Brachman, R. J., and Levesque, H. J., "Competence in Knowledge Representation," in Proc. AAAI-82, Pittsburgh, 1982, 189-192.
[3] Charniak, E., "A Common Representation for Problem-Solving and Language-Comprehension Information," Artificial Intelligence 16, 3 (1981), 225-255.
[4] Fikes, R., and Hendrix, G., "A Network-Based Knowledge Representation and its Natural Deduction System," in Proc. IJCAI-77, Cambridge, MA, 1977, 235-246.
[5] Hayes, P. J., "The Logic of Frames," in Frame Conceptions and Text Understanding, Metzing, D. (ed.), Walter de Gruyter and Co., Berlin, 1979, 46-61.
[6] Levesque, H. J., "A Formal Treatment of Incomplete Knowledge Bases," Ph.D. thesis, Dept. of Computer Science, University of Toronto, 1981. Also available as FLAIR Technical Report No. 3, Fairchild Laboratory for Artificial Intelligence Research, Palo Alto, CA, February, 1982.
[7] Levesque, H. J., "Some Results on the Complexity of Subsumption in a Frame-based Language," in preparation, 1983.
[8] McSkimin, J. R., and Minker, J., "A Predicate Calculus Based Semantic Network for Deductive Searching," in Associative Networks: Representation and Use of Knowledge by Computers, Findler, N. V. (ed.), Academic Press, New York, 1979, 205-238.
[9] Moore, R. C., "The Role of Logic in Knowledge Representation and Commonsense Reasoning," in Proc. AAAI-82, Pittsburgh, 1982, 428-433.
[10] Nilsson, N. J., Principles of Artificial Intelligence, Tioga Publishing Co., Palo Alto, CA, 1980.
[11] Reiter, R., "Equality and Domain Closure in First-Order Databases," JACM 27, 2 (1980), 235-249.
[12] Rich, C., "Knowledge Representation Languages and Predicate Calculus: How to Have Your Cake and Eat it Too," in Proc. AAAI-82, Pittsburgh, 1982, 193-196.
[13] Schmolze, J. G., and Brachman, R. J., eds., "Proceedings of the Second KL-ONE Workshop," FLAIR Technical Report No. 4, Fairchild Laboratory for Artificial Intelligence Research, Palo Alto, CA, May, 1982.
[14] Schmolze, J. G., and Lipkis, T. A., "Classification in the KL-ONE Knowledge Representation System," in Proc. IJCAI-83, Karlsruhe, W. Germany, 1983.
[15] Stickel, M. E., "A Nonclausal Connection-Graph Resolution Theorem-Proving Program," in Proc. AAAI-82, Pittsburgh, 1982, 229-233.
[16] Stickel, M. E., "Theory Resolution: Building-In Nonequational Theories," in Proc. AAAI-83, Washington, D.C., 1983.
A THEOREM-PROVER FOR A DECIDABLE SUBSET OF DEFAULT LOGIC

Philippe BESNARD, René QUINIOU, Patrice QUINTON
IRISA - INRIA Rennes
Campus de Beaulieu, 35042 RENNES Cedex, FRANCE

Abstract

Non-monotonic logic is an attempt to take into account such notions as incomplete knowledge and theory evolution. However, the decidable theorem-prover issue has so far been unexplored. We propose such a theorem-prover for default logic, with a restriction on the first-order formulae it deals with. This theorem-prover is based on the generalisation of a resolution technique named saturation, which was initially designed to test the consistency of a set of first-order formulae. We have proved that our algorithm is complete and that it always terminates for the selected subset of first-order formulae.

I INTRODUCTION

Decision-making is a fundamental feature of some fields of A.I., but it cannot always be supported by a complete world knowledge. Non-monotonic logic [AI 80] is an attempt to remedy this fact. However, the present realisations in the field of non-monotonic logic avoid the decidability problem by use of heuristics. To apprehend this problem, we have selected a solvable class, the predefinite variable clauses, which determines a decidable subset in the context of default logic. Default logic [Rei 80] allows the inference of consistent though unvalid formulae in a first-order theory. The consistency of a set of formulae, which is a crucial problem in default logic, can nicely be solved by use of a particular resolution technique named saturation.

First of all, we recall the principles of default logic and then focus on an attractive default class, free-defaults, which allow more plausible inferences. Part III is devoted to the presentation of a special resolution technique named saturation. We also propose a generalisation of saturation which constitutes the heart of our default theorem-prover presented in part IV. This default prover is complete and always terminates.

II DEFAULT LOGIC

First we introduce default logic as proposed by Reiter in [Rei 80] and the proof-procedure he has set up. Next, we study a default subset, the free-defaults, which allow additional plausible inferences. A default rewriting rule can be applied to ordinary defaults to obtain free-defaults.

A. Definition of default logic

The main goal of default logic is to provide some facilities to represent common-sense knowledge. For example, we know that generally "a bird can fly" (except penguins, ostriches, ...). This knowledge can be represented as the default

    δ :  bird(x) : M fly(x)  /  fly(x)

bird(x) is the prerequisite of the default and fly(x) the consequent (noted CONS(δ)). This default is interpreted as: "if x is a bird and if it is consistent to suppose that x can fly, then infer that x can fly". So defaults are patterns of inference schemata such as "until a proof of the contrary, assume...". That is, a default denotes a plausible inference, for which a conclusion cannot be derived by use of classical inference rules.

A default theory is a couple (Δ,T) where Δ is a set of defaults and T a consistent first-order theory. The use of the defaults of Δ extends the theory T by building several consistent first-order theories called extensions, each of which contains T.

Example 1:
    T = { A , ¬B v ¬C }
    Δ = { A:MB / B , A:MC / C , C:MD / D }
This default theory has 2 extensions:
    E1 = Th({ A , ¬B v ¬C , B })
    E2 = Th({ A , ¬B v ¬C , C , D })
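Example 1 can be checked mechanically. The sketch below is our own illustration, not the authors' prover: for a propositional default theory whose defaults have the form P : M Q / Q (as in Example 1), a subset D of the defaults yields an extension Th(T ∪ CONS(D)) exactly when that set is consistent, D is grounded (each prerequisite becomes derivable in some order), and every default still applicable with respect to it has already contributed its consequent. The formula encoding and helper names are our assumptions.

    # Illustrative brute-force extension check for a propositional default
    # theory with defaults of the form prerequisite : M consequent / consequent.
    # Formulas are encoded (by assumption) as functions env -> bool.
    from itertools import product, combinations

    ATOMS = ["A", "B", "C", "D"]

    def models(formulas):
        """Yield every truth assignment over ATOMS satisfying all formulas."""
        for bits in product([False, True], repeat=len(ATOMS)):
            env = dict(zip(ATOMS, bits))
            if all(f(env) for f in formulas):
                yield env

    def consistent(fs):
        return any(True for _ in models(fs))

    def entails(fs, g):
        return all(g(env) for env in models(fs))

    def is_extension(T, defaults, D):
        E = T + [c for (_, c) in D]
        if not consistent(E):
            return False
        # groundedness: prerequisites of D must be derivable in some order
        have, rest = list(T), list(D)
        while rest:
            firable = [d for d in rest if entails(have, d[0])]
            if not firable:
                return False
            have.append(firable[0][1])
            rest.remove(firable[0])
        # closure: every default still applicable w.r.t. E is satisfied
        return all(entails(E, c) for (p, c) in defaults
                   if entails(E, p) and consistent(E + [lambda e, c=c: not c(e)]))

    # Example 1: T = {A, not-B v not-C}; defaults A:MB/B, A:MC/C, C:MD/D.
    T = [lambda e: e["A"], lambda e: (not e["B"]) or (not e["C"])]
    defaults = [(lambda e: e["A"], lambda e: e["B"]),
                (lambda e: e["A"], lambda e: e["C"]),
                (lambda e: e["C"], lambda e: e["D"])]
    for k in range(len(defaults) + 1):
        for D in combinations(defaults, k):
            if is_extension(T, defaults, list(D)):
                print("extension: consequents of", k, "default(s)")
    # Prints two lines, matching the two extensions E1 and E2 above.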
Definition: A formula Q has a default proof with respect to a default theory (Δ,T) iff there exists a finite sequence Δ0, Δ1, ..., Δk of finite subsets of Δ such that:
    (1) T ∪ CONS(Δ0) ⊢ Q
    (2) for 1 ≤ i ≤ k, T ∪ CONS(Δi) ⊢ Prerequisites(Δi-1)
    (3) Δk = ∅
    (4) T ∪ (∪i CONS(Δi)) is satisfiable.
(2) expresses that the prerequisite of a default must be true to let the consequent of this default be in the extension.

Reiter has shown the completeness result for default proofs, i.e. that a formula Q has a default proof with respect to (Δ,T) iff there exists an extension of (Δ,T) which contains Q. However, (1) and (4) show clearly that the extension membership problem is not even semi-decidable. This leads us to consider a subset of first-order logic for which the decision problem is resolved.

B. Defaults without prerequisite, or free-defaults

Defaults may have counter-intuitive behaviours ([Rei 81]). In addition to those already cited, we present below an example of such a default:

    T = { ¬fly(Max) }   and   δ = bird(x) : M fly(x) / fly(x)

The only extension of the default theory ({δ},T) is T itself. It doesn't contain ¬bird(Max), which we expect to be true, because Max doesn't fly while generally birds do.

Consider δ' = : M (bird(x) => fly(x)) / (bird(x) => fly(x)). δ' leads to a unique extension E' = Th({ ¬fly(Max) , bird(Max) => fly(Max) }), and E' contains ¬bird(Max). If later the formula bird(Max) & ¬fly(Max) becomes true, then the default δ' instantiated by Max cannot be used, and ¬bird(Max) is not true in the new default theory ({δ'}, T ∪ {bird(Max)}).

The default without prerequisite δ' expresses permanent knowledge, in opposition to the situational knowledge expressed by δ. By δ' we always know that if x is a bird then it flies. By δ, only when x is a bird do we know that it flies. We now generalise the previous transformation:

    δ = P(x) : M R(x) / R(x)   |-->   f(δ) = : M (P(x) => R(x)) / (P(x) => R(x))

f(δ) is called a free-default. We have proved the following result [Bes 83]:

Theorem 1: for any extension E of a default theory (Δ,T), there exists an extension E' of (f(Δ),T) such that E ⊆ E'.

III SATURATION

In this chapter we first introduce a resolution technique named saturation. Next, we present saturation by sets, a generalisation of saturation which will be useful in the definition of our default proof-procedure.

A. Definition of the saturation

The saturation [Bos 81] is a particular resolution technique, decidable for predefinite variable clauses. A clause is a predefinite variable clause if:
- the only function symbols that occur in the clause are constant symbols;
- all the variables occurring in a positive literal occur also in a negative one.

This restriction is acceptable for an application since:
- in the context of databases, formulae are function-free;
- all the clauses without function symbols, in which the type of all the variables is defined by a predicate, are predefinite clauses.

For example, the clause C(x1, ..., xn) can be transformed into the predefinite variable clause Type1(x1) & ... & Typen(xn) => C(x1, ..., xn).

Definition: a set C of predefinite variable clauses is saturated if each resolvant of 2 clauses of C either is a tautology or is subsumed by a clause of C.

Theorem 2: a saturated set of predefinite variable clauses is inconsistent iff it contains the empty clause.

The saturation of a set of predefinite variable clauses produces, by resolution, a saturated, logically equivalent set of predefinite variable clauses. Due to the previous theorem, saturation gives a means to test the satisfiability of a set of predefinite variable clauses. The principle of saturation is to produce, first, all the resolvants one parent of which is the most recently produced clause. The tautologies and the clauses subsumed by one of their ancestors are not produced. A-ordering [Kow 69] for the literals and resolution upon the leftmost literal permit restriction of the number of inferences.
the saturation gives a means to test the satisfiability of a set of predefinite variable clauses. The principle of saturation is to produce, first. all the resolvants one parent of which is the most recently produced clause. The tautologies and the clauses subsumed by one of their ancestors are not produced. A-ordering [ Kow 691 for the literals and resolution upon the leftmost literal permit to restrict the number of inferences. Example 2: saturation of F IV DEFAULT PROOF BY SATURATION F= ( -C v E , B v C , -6 v D , A v C , - c = -C v E 1 C =BvC 2 c = -B v D 3 C = rtc 4 2, c3)= C v D C 5 = rtc cd)= D v E 1' C =AvC 6 c = -A 7 C 8 = rtc c7)= C 6' C 9 = rtc cs)= E 1' The set (cl, . . . . cg } is saturated. has not been produced then the set A I The empty clause F is consistent, B Saturation by sets It is obvious that a clause subsumed by a clause produced later in the saturation process is useless. SO is the clause c subsumed by c in example 2. It is possible to reduie the number of qhese useless clauses by relaxing the order of resolvant production. To reach this goal we have extended the notion of saturation. Let E and F be sets of predefinite variable clauses. The saturation of F on E, produces the set noted S(E,F) recursively defined by S(E.B)=E S(E.F)=S( E U (c} , F’ - (c} 1 where .c E F .r E: F’ if *either r E F *or r is not subsumed by a clause of E and r is the resolvant upon the leftmost literal of c with a clause of E. This process is called saturation by sets. Example 3: E=( A v B } and F={ “A } S(E,F)=S( { A v B } , ( -A } 1 =S( ( A v B , -A ) , ( B ) 1 =S( ( A v B , -A , B ) , 0 1 =( A v B , -A, B ) Theorem 3: S(E,F) is finite iff E and F are finite. Theorem 4: S(E,F) is saturated if E is saturated. Corollary 5: S(0.C) equivalent to C. is a saturated logically A The default prover We now only consider default theories (A. T) where A is a free-default set. Let A’ be a subset of A and Q the formula to be proved. Remark that Prerequisites(A’)=0 thus (2’) T U CONS(A”) I- Prerequisites(6’) (3’) A” = 0 It is worth-noting that the steps (2) and (3) of the default proof (§ II) are immediatly verified. To solve (1) and (4) of the default proof. we saturate T U CONEi and T U CONS(A’) U (-Q}. By use of the saturation by sets, these steps can be joined because S(0, T U CONS(A’) U ( “Q)) I =I SG(0, T U CONS(A’)), C-Q), In this way no clause is computed twice. In addition, during the elaboration of S(0. T U CONSCA') U ( “Q]) the clauses of S(0, T U CONS( are computed first. So if the empty clause belongs to S(0. T U CONS( then T U CONS(A’) is unsatisfiable and the saturation process is stopped (because no default proof can be found, given that (4) cannot be true). Then it is useless to test if Q is a consequence of T U CONS(A’). During the search of a refutation of T U CONEi U ( -Qj we have discovered one of T U CONS(A’). This never happens with linear resolution, for example. Let A={61, . . . . 6n}. The proof procedure evaluates the consistency of T U CONStA’) for A’=0, A’=( 61 ), A’={s2}, A’={61, 62}... (A’ ranges over the set of the subsets of A). It then computes successively S(0. T). S(0. T U CONS(G1)). S(0. T U CONS(62)), S(0. T U CONS(G1. 62))... The preceding remark remains true, so: S(0. T U CONS(6 1)) I = I SG(0.T). CONS(G1)) S(0, T U CONS(G2)) I = I S(S(0.T). CONS(B2)) S(0. T U CONS(G1. 62)) I = I S(S(0, T U CONS(GIH, CONS(G2)) If for example S(0. T U CONS( contains the empty clause then, for every A’ subset of A containing 6i, S(0. 
Due to this optimisation, only the consistent or minimally inconsistent sets T ∪ CONS(Δ') are computed. By operating like this, a default proof is nothing but a global saturation, S(∅, T ∪ CONS(Δ) ∪ {¬Q}), knowing that only some subsets of these clauses are really computed. Note also that

    S(∅, T ∪ CONS(Δ) ∪ {¬Q}) = S(S(∅, T ∪ CONS(Δ)), {¬Q}).

S(∅, T ∪ CONS(Δ)) can then be computed once and for all, and remains the same for every query. Saturation by sets provides a kind of compilation of the extensions of a default theory.

Theorem 6: the prover is complete.
Theorem 7: the prover always terminates.

B. Example of a default proof by saturation

    T = { C v D , E }
    δ1 = : M (¬C v ¬E) / (¬C v ¬E)
    δ2 = : M (¬D v ¬E) / (¬D v ¬E)
    δ3 = : M (B v ¬E) / (B v ¬E)

The default theory ({δ1, δ2, δ3}, T) has 2 extensions:

    E1 = Th(T ∪ CONS({δ1, δ3}))
    E2 = Th(T ∪ CONS({δ2, δ3}))

This default theory is submitted to the prover:

    S1 = S(∅, T) = { C v D , E }
    S2 = S(S(∅,T), CONS(δ1)) = S1 ∪ { ¬C v ¬E , D v ¬E }
    S3 = S(S(∅,T), CONS(δ2)) = S1 ∪ { ¬D v ¬E }
    S4 = S(S(S(∅,T), CONS(δ1)), CONS(δ2)) = S2 ∪ { ¬E , □ }
    S5 = S(S(∅,T), CONS(δ3)) = S1 ∪ { B v ¬E }
    S6 = S(S(S(∅,T), CONS(δ1)), CONS(δ3)) = S2 ∪ { B v ¬E }
    S7 = S(S(S(∅,T), CONS(δ2)), CONS(δ3)) = S3 ∪ { B v ¬E }

The remaining clauses of S8 = S(S4, CONS(δ3)) are not computed, since S4 contains the empty clause. Now we can use these sets of clauses to answer any query. To answer a query Q we evaluate S(Si, {¬Q}) with i increasing, in order to take advantage of the inclusion of the Si's. For example, a default proof of D is given by:

    S(S1, {¬D}) = { C v D , E , ¬D }
    S(S2, {¬D}) = S(S1, {¬D}) ∪ { ¬C v ¬E , D v ¬E , ¬E , □ }

V CONCLUSION

We have proposed a theorem-prover for a subset of default logic. The default theories of this subset only contain free-defaults, and all the consequents and axioms can be transformed into predefinite variable clause form. Our theorem-prover, based upon saturation by sets, is complete and always terminates. We are about to implement it in PROLOG. The main advantage of the prover is that it deals efficiently with querying and default updating, once the compilation phase has been realised. However, the prover is not adapted to axiom updating. This provides a direction for further investigation, as does extending the subset of formulae the theorem-prover is able to deal with.

REFERENCES

[AI 80] Special issue on non-monotonic logic. Artificial Intelligence, vol. 13, 1980.
[Bes 83] Besnard, P. A proof-procedure for a non-monotonic logic. Technical Note 198, University of Rennes I, 1983.
[Bos 81] Bossu, G. & Siegel, P. La saturation au secours de la non-monotonie. Thesis, University of Aix-Marseille II, 1981.
[Kow 69] Kowalski, R. & Hayes, P. J. Semantic trees in automatic theorem-proving. Machine Intelligence 4, pp. 87-101, 1969.
[Rei 80] Reiter, R. A logic for default reasoning. Artificial Intelligence, vol. 13, pp. 81-132, 1980.
[Rei 81] Reiter, R. & Criscuolo, G. On interacting defaults. Proc. IJCAI-81, pp. 270-276, Vancouver, 1981.
A PRODUCTION SYSTEM FOR LEARNING PLANS FROM AN EXPERT (*)

D. Paul Benjamin and Malcolm C. Harrison
Courant Institute of Mathematical Sciences
New York University
251 Mercer Street
New York, NY 10012

This paper describes a method for constructing expert systems in which control information is automatically built from the actions of an expert trainer. This control information consists of sequencing and goal information which is extracted from the trainer by a 'planning expert', and generalized by a 'generalization expert'. A set of extensions to the OPS5 system is described which facilitates the implementation of this approach within OPS5. These extensions permit the use of meta productions to affect the conflict resolution procedure; the control information is expressed as a set of such meta productions. Preliminary experiments with the system are described.

1.0 KNOWLEDGE REPRESENTATION FOR EXPERT SYSTEMS

One of the outstanding problems in constructing expert programs is that of how to extract the necessary information from a human expert. Following Kowalski [3], we view this information as having two components, a logic component and a control component; for computer design, for example, the first would include the axioms and theorems of Boolean algebra and the specifications of the available chips, while the second would include the design techniques contained in a textbook on logical design. Note that this is a different categorization from the more traditional declarative/procedural distinction, since the logic component is best viewed as a declarative specification of procedural information. In general, it seems that the problem of dealing with logical information is likely to be more tractable than that of dealing with control information. Accordingly, in our work we have concentrated on the problem of extracting from the human expert essential components of his control expertise. In previous work on this problem [7, 8], ...

(*) This work was supported by ONR contract number N00014-73-C-0...
I e 11 t c' " OP t he action SjiStei.l’S conflict set: (EzzC[JTC ^pna!le --- ^pid -- *OVlliil <ii> ^f’dther --- ^status --- ^brothcr ---> tol;e thcr I7iLi; 0:le or lliOre pdraxetcrs: (~;:~C~JT~-PA:{~~i~ ^&ass --- ^2tt --- ^viilce --- *oxni.d <n> > 2. After arlj chaal;e to tLie LiSl: d 0 :,I a i n C 1 '2 ~1 i? i? t 9 i ~1 IJII, OPS5 auto,,aticail>- 1 inks T!Il C?ic?..itintS Of the a b 0 v c for:.1 i!it.l i;istal?tiatioi!s in t3c conflict set w il i c h have class-attribute-value triples corresp0ndi.A: LO t h e i.ri~taz4ti2tion iI1for~:iatioil irl the Joal’s IJar~.Ictcrs. 3. k n 0 P s 5 il ia e ‘r f Li n c c 1 0 n fire -i~l~OiUCtion is provided t112 t Causes a >jrtLcular instantiation i,l the conflict set t0 fire. a. TiIC s;rste,i! distin;uishes a30 ii; four ir i n d Y L Of productions: executive productions, 1.1 c t a productions, soal productions, and action prouuctions ; UC have adopted the convention that the il 2 de S Of all action protiuctions Se;in I,; i t I‘ ‘ ‘a’, t h e nailies of all ,;oal productions beL;in rrith 1 *’ , the nai.ies of all xeta productions beEin with , I iit 2‘ld t h e n Zlil c: s Oi dll e x e c u t i v i: Lo i’ i, r; 7J ~2 i; 2 I) , j 2 .L:: oi ;, ::i’i:l 1 ;; I . The co~ii’i ict-reboluiion process i7 as been .1odifieci to ChOOSC executive $I A" 0 (1 u c t i 0 11s OVkY Lie t a prouucLions . A II executive groduc;ion Ir>ich is so,.;eti7iii; 1 il:e : ( iJ :;I (;:::;1:CijTE *pna,.le <p> -2i.d <II>) -a > (czil fire-protiuctlon <A>) ) is used to f i iq e action productian3. OPS5’S n 0 r ‘.I d 1 co;‘lflict -resolution i:rocess is used to C il C, 0 s U a 1.10 11 2 t ii e executive productioas. iTGr.,;(;l iy , ~:3cfi 2 protiuc:iori is added to OPSI;, it is i10 t possiSle for it to enter t &I e co::1’1 ict a 2 ‘i ii:i;cdiatcl;l, 2s t h C p r 0 C; u c t i 0 n - ;.: c1 t c 17 i LIZ n e 'cr.70 r I; i s L5l 0 ci i r‘ i e d 5 j- ehan,cs to 17 0 r k i ii . 3 i.1 e :.I 0 I' Lr , 110 t to I~roductioil 1.2 2 A 0 r ;’ . I!oxevtir, our vdrsioii h ti :; be c il a 1 c c r c :i t 0 zliorr a ner; procuction to be iil$tar ltiated iIlli,lediately. 3. 1 1ILf;il tAlis capability, tile tasi;s lic’r c ztruci;ured i n t 11 c? r”oiiowiri,j ;.ianner ill order l;o take advalltaie of the 2cccss t0 the confl ic t set . Go&i trees ia \JOr'.<i~i~ i.lcL1ory contr^ii; trro tjges of aodes: tl;osc rrrlicli represeni; abstract goals, W ii i C 11 &r-c- treated i 11 2 pur;:ly s;r;ltactic fashion; a rid Cl 0 d c s \rdich express the goa: of i’iriii, 2 pkrticuldr groductioil t0 2 f f e (J t t i1 e do ,lai.i. ‘I’:1 c various ;oal t;l@cS arc reprrti OOf.X,~l;~~ ii] ‘i i-, 2 TI :,.I by OPS5 zlel~l(:nts ol” tile l'oil0wi.n; for,.i: (COAL <type> <id> <son> <brother> <f.‘atlier>) (GOAL-rAEA <type > <value> <id>) ~31 c r e t 11 c Goal t :’ iJ (: cali be: .::;‘.;Cu’rC, II~PZAT, SSO(uence), A:JD, or 02. Th c t;rpC-vdlue pairs spc.cii"; Lilfor-12tiO;l :7.,icd the particuiir t;r;~c of’ ,oal needs, SLlCil 2s t 11 c t c r 1.1 i 11 ;I t i 0 n ir:for;lation f 0 r ;: E P ;: A ‘1 ;;OillS, or which iAstaatiatiou to fire for 2 1,: 2 c IJ T ;; goals. T h e 3 0 r1 , brother, a n (1 father vaiues arc intc;er.r; \J,:ic:l 2Oitlt tb t il e ~~021s which .zre ‘oe 1 oer , 2dj2CCllt CO, 2nd above a given L;oal node iii the tree. The action productions .I a ii i p u 1 ;1 t e t Ii 6 clo;lain, a11ci cannot access the ~;oal trEe. 1n t:~c l.iodi.i’ied I)PS5, an actiorl pro.-!uctio:1 i -: ” no t ailo:Jed to fire 52 itself, but instead :,lust be activated ;O;r the presence i il tile ;oal tree of ilil 3XZ:CU'TZ ,;oal VliiCh requests tile firiil;. 
The action productions are a complete set of low-level domain manipulations. The domain-dependent productions and the control productions that manipulate the goal tree are implemented as goal productions and meta productions, respectively. Meta and goal productions may access the goal tree or the domain information, and also the conflict set. Thus, goals can be set depending on which actions are fireable and which goals seem attainable or unattainable.

Domain-independent rules that contain information relevant to the structuring of the goal tree are implemented as executive productions. This information may be high-level, along the lines of Wilensky [11], or merely housekeeping information. An example of the former is detecting goal overlap, which could be represented in OPS5 as:

    (p x25
       (EXECUTE ^pname <p> ^ownid <id1>)
       (EXECUTE ^pname <p> ^ownid {<> <id1>})
    -->
       (make GOAL ^type bring-into-cs ^pname <p>))

This production detects the presence of two different instances of the EXECUTE goal, which are both attempting to execute production <p>, but are different by virtue of their differing 'ownid' fields. Instantiation information is also included in EXECUTE goals, but is not shown, for simplicity and space. An example of a housekeeping executive production is:

    (p x4
       (OR ^status active ^ownid <x>)
       (<<EXECUTE GOAL AND OR>> ^status done ^father <x>)
    -->
       (modify 1 ^status done))

Iteration in the goal tree is handled by REPEAT nodes. A REPEAT node requires the goal immediately below it to be repeatedly satisfied until some condition is met. This is implemented by deleting the subtree under the goal under the REPEAT, and rebuilding it as long as the condition is not met. The deletion of the subtree is done by:

    (p x10
       (delete-this ^id <id1>)
       (GOAL ^ownid <id1> ^son <s> ^brother <b>)
    -->
       (remove 2)
       (make delete-this ^id <s>)
       (make delete-this ^id <b>))

    (p x11
       (delete-this ^id nil)
    -->
       (remove 1))

A REPEAT node is thus a node which specifies that a particular sequence of actions, or a particular goal, should be repeatedly satisfied until a given condition is met. Conditions are tested by sensing productions: a sensing production has a null right-hand side, and serves merely to sense conditions. If the trainer states a condition for which no sensing production exists, the system asks the trainer to define the property by creating a new sensing production.
These ;oal firoduccions 14 e r e 20 t seneratcd by PJ~EI; ai.ld G::II~YI, but added directly 3ji the trainer. The si;rate,gy specified b:; t h e trainer ‘*ICI. s Si;.iply t0 fi:i(; a piecit with an cdGe a21 a t c 11 i n <; one already iI? the puzzle, t h e I1 pi ace tnis piece i 11 tile puzzle. This 3 d v li c c war; transPor,,!cd into a ;oal tree ii1 scvcrnl S;‘;CPS. First 9 t ne traiiler stated that this <;oal could ‘0 c a c c o .A p 1 i 3 h e d b ;r first fii?diniL; tke piece k7 i t b t h c :n. a t c h i n ; ed;e, a i-l d then placirl; it iilt0 the puZZle. The systc.,: constructed a ;:oal production t0 create a 32si node; a SEC) node specifics 2 sequence of subgoals necessary to solve t 11 2 3 a r e i-1 t 2 0 a 1 . 1:e;:t , t :1e trainer defined what i,r j: s 1.1 e a ii t !, .: ed,e. d fiilciifiz a piece with a matchin, Th c statexent of this def iilition ‘i,rz s that. I* t Le r e i.i U S t be a piece i 11 t 11 0 puzzle 111th a certain edge value, and 2 piece i n t ;I 2 h e a p of >icces not in the p u z L .le aritl~ the S3i.le edge vaiue.” This defiriition H a s turned into a 11 ;;cti.on p r o d u c t i o n t II a ‘c w o u 1 d !!;i a t c h s u c h a piece. This action production had no right-hand side , a I1 i: thus served as a I1 s e 11 s i 11 i, Es production. This type of proiluction is u 3 c ci b jr tile syste.,; to ei:lb 0 d :i ciefini tions aad conditioils. Then the secoilCl subgoal Of [JUttiilg the piece in tiie guzzle x123 clarified by the ‘ trainer, Xi,0 decizrcd that this consisted of a spccif ic sequence of actions chaser: fr'O:,l t iJe Set Or" action productions. :I i t ki I; ?J i c! u i f~ f 0 r A 2 t i 0 n , the s> stet,: proceeded to follorr the ogti,:al path to solution of the puzzle. Other strate;ies, such as pUttii1.i all the pieces of one color in i;tie puz:z ic at a ti:ie, could also be ii.,plei.leIit~d. Ilith ttiesa ruie3, the systeki is able to solve n c il trainil?l;. 1 . 2. 3. G. 9. IO. 11. 12 . pu,3eies xi. thout additional F 0 r \J y , Charles L, “OPS3 Useri, 9 i’z i1ual l’ 1 , Report . . CiIU-CS-81-135, Czrliel~ie-::ellon University, July 19Sl. u G e 0 r c c 1’ f c0&01 , il 9, " A Fra:,lc;lork for iti Production S;rsteI,ls”, Iroc. IJCAI-6, 1979. Kolial ski , Xobe I9 rtA, . 9 i (; 0 ‘r i t ii _ 1 = Logic c Control s1 I CA Gil, July 1979. Laligl e-f J , Pat, P,rhdshair, Gary L, a 11 d S i L.ro n , Yerbert A, '3ACO;:.5: T Ii e Discovery of Corlservation La:~s, 9 Proc. IJC;.I-6, 1979. iiitchell, TOA i:, Ufzjoff, Paul ‘;:, Hudel, E e r 1-J 2 r d , anti 3cner j i, R a n a n f I9 L c a r n i n c, P r 0 t, 1 e ii - .2 0 1 V i II b IIeuristics ThroL,h Practice, Is Proc. IJCAI-6, 1975. 1: e 0 0 1-l , Rene, ‘sUsinL c. ;;atcher to i&l al: c L I1 Z x p e r t Coflsul tation SySteLi 2ehavc: Iil~elli;ently, 9f Proc. AAAJ Confereiicc, 1930. Stolfo I Salvatore J, and Iiarrison, ;;aico1,,: c, “AUtO.,:a-LiC Discovery of :ici;ristics for :Ionde ter,linistic Pro~~raLis9R " i , Proc. IJCAI-5, 1379. Stolf'o, Salvatore J, and Harrison, .:alcol:.. C, ‘9Auto.latic Discovery 0 f Seuristics for :JoIldeter,.liliistic P r 0 ;r a ,.I 3 I9 (full ver3ion), Techtiica2. 2eport 7, Courant Znstitute, 1979. Stolr’o, Salvztore J, rpAuto:.:atic Discover-jr Of Beuristics for 1Jon-deterl,:inistic Pro;ra,a; fro..; Sd~.lple Zxecution Tracesi P;iD Thesis, IJei! York University, 1679. Yhitehill, Stephen E , l’Self-Correctin, Generalization,” Proc.AAAI Coafcrence, 1920. Xilensky, Robert, 99iIeta-Plannin;: R e 1-l r e 3 c i? t i t? 
; a n d U s i ;i ; ‘r:nowledAe about P 1 a il 11 i 11 2 i n Proble,3:-Solving and “atural II Language Understandin;, I9 Cognitive Science, 5, 1961. Tlinston, Patrick II 9 “Learnin; Structural Descriptions 1” r OiJ ;=:;a,,ip1 es ” , i li ” T h c Psycholoc;jr Of Co,.;pu ter Visioii9’, edi ted b;; iliilsto;l, ;lcGratr-!Iill, 1975. 26
LEARNING BY RE-EXPRESSING CONCEPTS FOR EFFICIENT RECOGNITION *

Richard M. Keller
Department of Computer Science
Rutgers University
New Brunswick, NJ 08903

Abstract

Much attention in the field of machine learning has been directed at the problem of inferring concept descriptions from examples. But in many learning situations, we are initially presented with a fully-formed concept description, and our goal is instead to re-express that description with some particular task in mind. In this paper, we specifically consider the task of recognizing concept instances efficiently. We describe how concepts that are accurate, though computationally inefficient for use in recognizing instances, can be re-expressed in an efficient form through a process we call concept operationalization. Various techniques for concept operationalization are illustrated in the context of the LEX learning system.

I Introduction and Motivation

Historically, machine learning research has focused on the task of inferring concept descriptions based on a set of examples, and it is only recently that other types of learning have begun to come under investigation [2]. In this paper, we focus on the process of concept reformulation rather than concept acquisition. We assume that a learning system acquires concepts in some unspecified manner, whether by inductive or deductive means or by being told. For a variety of reasons, it may be necessary to re-express an acquired concept in different terms. In particular, the concept may be expressed in a manner that is computationally inefficient for the purpose of recognizing examples. In such cases, the learning system is faced with the task of reformulating the concept so that recognition can occur more swiftly. We call this task concept operationalization, in the spirit of recent, related work by Mostow [5] and Hayes-Roth [3].

Consider the problem of using a concept description to efficiently recognize examples of arch structures. Winston's pioneering concept learning system [11] succeeded both in formulating an arch concept description and in subsequently using that description to recognize arch instances. An arch instance was given in terms of various visual and structural features of its component parts (e.g. shape, orientation, and relative placement). The program inductively inferred a structural description of the arch concept, similar to the one shown below, based on a set of training examples:

Structural Arch Concept
An arch is a structure which:
(i) is composed of 3 objects, 2 of which must be bricks;
(ii) the bricks must be standing adjacent yet not touching;
(iii) the other object must be lying horizontally, supported by the bricks.

Using this description, arch recognition was quite efficient. The program simply had to match the structural features of a prospective arch instance against the structural description.

Now imagine that instead of giving his system a set of training examples to be used in concept formation, Winston short-circuited the process and initially provided his system with a complete, although non-structural, description of the arch concept. For example, he might have provided a functional description of an arch:

Functional Arch Concept
An arch is a structure which:
(i) spans an opening, and
(ii) supports weight from above, or to either side of the opening.

Although the functional description is just as valid as the structural one, recognition is no longer a simple matter. It is not clear how to match the structural features of an instance against a functional definition without some intermediary processing. In particular, either (i) the instance must be altered to include functional features as well as structural features a priori, or (ii) the functional features must be computationally derived each time a new structural instance is processed, or (iii) the functional definition must be re-expressed permanently in structural terms.

Of these options, (iii) represents the most practical long-term solution to the recognition problem. In the context of the arch example, the structural re-expression of the functional definition involves the use of physics knowledge, as well as other domain-independent knowledge, to relate form and function.** Once the arch description has been re-expressed in a manner suitable for efficient recognition, we will consider it to be an operational concept description.

In the balance of this paper, the task of concept operationalization is more precisely defined. Section II describes how the notion of concept operationalization initially arose in the context of our recent experiences with the LEX learning system [5]. Section III follows with a more formal specification of the concept operationalization task. Various techniques for dealing with this task are then introduced and illustrated in Section IV. Section V concludes with some comments about related research and some issues that must be addressed prior to a full-scale implementation of the proposed techniques.

*Work supported by NIH Grant #RR-64309, and a Rutgers University Graduate Fellowship.
**Coincidentally, since the initial writing of this paper it has come to my attention that Winston is pursuing the relationship between form and function, using techniques related to those described here [12].

II Concept Operationalization and Problem Solving

To explore further the notion of concept operationalization, we base our discussion on the LEX learning system.
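The gap between the two arch descriptions above can be seen in a few lines of code. The sketch below is our illustration, not Winston's representation; the feature names and instance encoding are assumptions. The structural recognizer is a direct match over encoded features, whereas the functional one references properties ("spans", "supports weight") that no structural instance encodes directly.

    # Sketch: why the structural arch concept is operational while the
    # functional one is not. Instances carry only structural features,
    # under an assumed encoding used purely for illustration.

    def structural_arch(instance):
        """Directly matchable against encoded structural features."""
        parts = instance["parts"]
        bricks = [p for p in parts if p["shape"] == "brick"]
        others = [p for p in parts if p["shape"] != "brick"]
        return (len(parts) == 3 and len(bricks) == 2 and len(others) == 1
                and instance["bricks_adjacent_not_touching"]
                and others[0]["orientation"] == "lying"
                and instance["top_supported_by_bricks"])

    def functional_arch(instance):
        """Not operational: these features are absent from the instance,
        so they would first have to be derived, e.g. from physics."""
        return instance["spans_opening"] and instance["supports_weight"]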
It is not clear how to match the structural features of an instance against a functional definition without some intermediary processing. In particular, either (i) the instance must be altered a priori to include functional features as well as structural features, or (ii) the functional features must be computationally derived each time a new structural instance is processed, or (iii) the functional definition must be re-expressed permanently in structural terms. Of these options, (iii) represents the most practical long-term solution to the recognition problem. In the context of the arch example, the structural re-expression of the functional definition involves the use of physics knowledge, as well as other domain-independent knowledge, to relate form and function**. Once the arch description has been re-expressed in a manner suitable for efficient recognition, we will consider it to be an operational concept description.

**Coincidentally, since the initial writing of this paper it has come to my attention that Winston is pursuing the relationship between form and function using techniques related to those described here [12].

In the balance of this paper, the task of concept operationalization is more precisely defined. Section II describes how the notion of concept operationalization initially arose in the context of our recent experiences with the LEX learning system [5]. Section III follows with a more formal specification of the concept operationalization task. Various techniques for dealing with this task are then introduced and illustrated in Section IV. Section V concludes with some comments about related research and some issues that must be addressed prior to a full-scale implementation of the proposed techniques.

II Concept Operationalization and Problem Solving

To explore further the notion of concept operationalization, we base our discussion on the LEX learning system. While the framework proposed in this paper has not been incorporated into LEX, it arises out of our recent experience with this system, and our attempt to generalize upon its methods.

LEX is a program which learns to improve its problem solving performance in integral calculus. Problems are solved by the system using a least-cost-first forward state space search method. The starting state in the search contains a well-formed numeric expression with an integral sign. LEX's task is to solve the integral by applying a sequence of operators that will transform the starting state into one containing no integral sign. A set of approximately 50 calculus and arithmetic simplification operators is initially given to the system. Each operator specifies (i) an operation to be performed, and (ii) a set of numeric expressions to which the operator may legally be applied. Here are three sample operators:

OP1: ∫sin(x)dx → -cos(x)
OP2: ∫k·f(x)dx → k·∫f(x)dx
OP3 (Integration by Parts): ∫f1(x)·f2'(x)dx → f1(x)·f2(x) - ∫f2(x)·f1'(x)dx

where k: constant, f(x): function, and f'(x): derivative of f(x).

As LEX solves problems, it learns to prune its search tree by refusing to apply operators in those situations where experience has shown an application to be "nonproductive", although legal. For example, it is often legal to apply integration by parts (whenever the integral contains a product), but it is much less frequent that the application will lead to a solution.
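The paper does not show how LEX encodes its operators internally; the following is a minimal sketch, under an invented nested-tuple expression format, of the two-part operator structure just described, namely an action paired with a legality test. The functions legal_op2 and apply_op2 and the tuple encoding are ours, not LEX's.

```python
# Expressions are nested tuples, e.g.
# ("int", ("*", ("const", 3), ("f", "x"))) stands for  ∫ 3·f(x) dx.

def legal_op2(expr):
    # OP2 applies to ∫ k·f(x) dx: an integral whose body is a product
    # with a constant left factor.
    return (expr[0] == "int" and expr[1][0] == "*"
            and expr[1][1][0] == "const")

def apply_op2(expr):
    # ∫ k·f(x) dx  ->  k · ∫ f(x) dx
    k, fx = expr[1][1], expr[1][2]
    return ("*", k, ("int", fx))

OPERATORS = [("OP2", legal_op2, apply_op2)]   # OP1, OP3, ... would follow

state = ("int", ("*", ("const", 3), ("f", "x")))
for name, legal, apply_ in OPERATORS:
    if legal(state):
        print(name, "->", apply_(state))
# prints: OP2 -> ('*', ('const', 3), ('int', ('f', 'x')))
```

Note that legality alone is what such a test captures; whether the legal application is also productive is precisely what remains to be learned.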
The criterion to use in deciding whether an operator application is to be considered productive or nonproductive is given to the system initially as the Minimum Cost Criterion shown below:

Minimum Cost Criterion: Any operator application that extends the search tree in the direction of the minimum cost** solution is considered a productive operator application.

LEX's learning task is to acquire the ability to efficiently distinguish between productive and nonproductive operator applications. By definition, this ability should improve the system's problem solving behavior. Note that with prior knowledge of the Minimum Cost Criterion, LEX begins by knowing in principle how to distinguish between productive and nonproductive operator applications: simply expand the search tree until a minimum cost solution is found, and then classify the operator applications accordingly. However, this method is grossly self-defeating, since a process that simply deliberates about which operator to apply next cannot be granted the luxury of conducting an exhaustive search for the minimum cost solution!

Although direct use of the Minimum Cost Criterion to recognize productive operator applications is prohibitive, the Criterion is used by LEX in other significant ways. The most recent version of LEX*** [6] employs both a procedural and a declarative representation of the Criterion in the processing of instances. Initially, the procedural Criterion (called the CRITIC) classifies an individual training instance (i.e. an operator application) as positive or negative (productive or nonproductive). If the instance is positive, the declarative Criterion is then used to analyze which features of the instance were specifically relevant to the CRITIC's positive classificatory decision. Once the relevant features have been identified, LEX generalizes the positive instance by discarding irrelevant features. The generalized, rather than the original, positive instance is then fed to LEX's inductive learning mechanism****. This mechanism then constructs an efficient method for recognizing positive instances based on syntactic patterns in the instances.

III The Concept Operationalization Task

A recognizer for a concept is expressed in some recognizer language (RL). Let us describe a subset of the RL that is of particular interest. The efficient recognizer language (ERL) contains only those RL terms that describe either (i) features encoded directly in instances, or (ii) features efficiently computable from encoded features. Any RL term that references other types of features is part of the IRL (inefficient recognizer language). Finally, we will define a recognizer expressed solely in terms of the ERL to be an operational recognizer*****. Notice that for the arch example, the RL is a language containing both functional and structural terms, while the ERL is restricted to structural terms and the IRL is restricted to functional terms. Thus, the structural arch recognizer is operational, while the functional arch recognizer is not. We are now in a position to define our task.

Concept Operationalization Task
Given:
1. Recognizer language RL,
2. Efficient recognizer language ERL, where ERL ⊂ RL,
3. Inefficient recognizer language IRL, where IRL = RL - ERL,
4. Recognizer R, expressed in RL, containing IRL terms.
Find: An operational recognizer, ER, expressed in the ERL, which recognizes the same instances as R.

**Cost is measured by CPU time expended to reach the goal state.

***All components of this version have been implemented on several examples, but have not been completely interfaced and tested.

****Since the generalized instance represents a set of positive instances, the LEX analysis technique has the effect of squeezing a number of training instances out of a single one.
LEX does not currently generalize negative instances in this manner, although the corresponding analysis is feasible in principle.

*****In formulating this problem, we make the simplifying assumption that a recognizer is either operational or nonoperational. The operationality of a recognizer is more properly considered to be a matter of degree.

Informally, the task here is to move a recognizer into the ERL so that it can be used for efficient recognition. Since the set of positive instances recognized by R is never explicitly given, a major difficulty with this task lies in proving that a candidate ERL recognizer will identify the same set of positive instances as R. Our approach to the task addresses this problem by applying a series of transformations to R that have well-defined effects on the set of instances recognized by R. Another problem with this task concerns what to do when there exists no ERL recognizer that identifies all of the positive instances. There are primarily two options available in this case: (i) settle for an ERL recognizer that identifies a subset of the positive instances, or (ii) expand the ERL until an appropriate recognizer is included in the ERL. The second option involves finding an appropriate IRL term to incorporate into the ERL, and then developing an efficient method of computing the feature described by the IRL term. This approach has been pursued by Utgoff [10].

We are now in a position to illustrate how we view LEX's learning task in terms of operationalization. Figure III-1 presents LEX's non-operational Minimum Cost Criterion, phrased in terms of a recognizer in predicate calculus. The POS-INST-0 predicate recognizes only productive operator application instances. The instance data structures are pairs of the form (OP,STATE), where OP is to be applied to the integral calculus expression contained in STATE. In order for (OP,STATE) to be a positive instance, POS-INST-0 specifies that STATE must contain a non-solved integral expression, and that the application of OP to STATE must produce a new state that lies on the path to a solved expression. Furthermore, no other path should lead to a less costly solution. Note that POS-INST-0 contains references to features (e.g. solvability and cost) which are neither directly encoded nor easily derivable from the instance data structure. For example, to determine whether STATE is SOLVABLE, it is necessary to complete an exhaustive search of the subtree beneath STATE. Furthermore, to compute the cost of STATE requires a complete record of all operators applied to reach STATE from the root of the search tree. Due to these inefficiencies, POS-INST-0 is non-operational, and terms such as SOLVABLE and MORE-COSTLY-STATE are part of LEX's IRL.
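The operational/non-operational distinction is, at bottom, a syntactic test over the terms a recognizer mentions. The following sketch is our own illustration, not LEX machinery: a recognizer is a nested and/or/not tree over predicate names, and it is operational exactly when every predicate it mentions lies in the ERL.

```python
# A sketch of the ERL/IRL split; the term sets mirror the predicates
# discussed in the text (GOAL and APPLICABLE count as IRL until they are
# definitionally expanded into MATCH, as Section IV illustrates).

ERL_TERMS = {"MATCH"}                                 # cheap: grammar match
IRL_TERMS = {"SOLVABLE", "MORE-COSTLY-STATE", "GOAL", "APPLICABLE"}

def predicates(expr):
    """Yield every predicate name in a nested ('and'/'or'/'not', ...) tree."""
    op, *args = expr
    if op in ("and", "or", "not"):
        for a in args:
            yield from predicates(a)
    else:
        yield op

def operational(recognizer):
    return all(p in ERL_TERMS for p in predicates(recognizer))

pos_inst_0 = ("and", ("not", ("GOAL",)), ("APPLICABLE",), ("SOLVABLE",))
print(operational(pos_inst_0))                                # False
print(operational(("and", ("MATCH",), ("not", ("MATCH",)))))  # True
```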
Figure III-1: IRL Recognizer for productive operator applications

POS-INST-0(op,state) ↔
  ¬GOAL(state)
  ∧ APPLICABLE(op,state)
  ∧ SOLVABLE(Successor-state(op,state))
  ∧ [(∀otherop ∈ operators | otherop ≠ op)
        ¬APPLICABLE(otherop,state)
      ∨ ¬SOLVABLE(Successor-state(otherop,state))
      ∨ MORE-COSTLY-STATE(Goal-state-reachable(otherop,state),
                          Goal-state-reachable(op,state))]

SOLVABLE(state) ↔
  GOAL(state)
  ∨ ∃op [APPLICABLE(op,state) ∧ SOLVABLE(Successor-state(op,state))]

Meaning of PREDICATES (uppercase) and Functions (capitalized):
- POS-INST-0(op,state): the application of op to state is productive
- SOLVABLE(state): there is a path leading from state to a goal state
- APPLICABLE(op,state): legal to apply op to state
- GOAL(state): state is a goal state
- MORE-COSTLY-STATE(state1,state2): cost to state1 exceeds cost to state2
- Successor-state(op,state): returns state resulting from application of op to state
- Goal-state-reachable(op,state): returns goal state at the end of the path starting with the application of op to state

On the other hand, consider the following example recognizer, which is expressed in LEX's ERL:

POS-INST-A(OP3,state) ← MATCH(∫trig(x)·poly(x)dx, Expression(state))

ERL terms have been restricted to apply solely to features of integral calculus expressions. If specific features are present in an expression, then the features are said to MATCH the expression. A feature MATCH is efficiently computable from the instance data structure with the aid of a grammar for calculus expressions [10]. For example, if we want to evaluate POS-INST-A(OP3, ∫sin(x)·3x²dx), we invoke MATCH to determine whether sin(x) is a trigonometric function and 3x² is a polynomial function. In the next section we will illustrate how POS-INST-0 can be transformed into a more efficient ERL recognizer.

IV Operationalization Techniques

Each operationalization technique discussed in this section specifies a transformation that can be applied to a given recognizer in order to produce a new, and hopefully more efficient, recognizer. The set of techniques described is intended to be general, although not comprehensive. Consult [8] for Mostow's thorough treatment of a broader spectrum of operationalization techniques.

A. General description of techniques

In Table IV-1 we describe and characterize a number of operationalization techniques: definitional expansion, enumeration, redundancy elimination, constraint propagation, disjunct selection, instantiation, conjunct addition, and conjunct deletion. Each technique consists of a single replacement rule that can be used to transform a predicate calculus subexpression found in a recognizer. For example, we can use Definitional Expansion-1 to transform Foo(x)∧Baz(x) into Bar(x)∧Baz(x) if we know that Foo(x) ↔ Bar(x). Starting with a non-operational recognizer, the replacement rules are sequentially applied with the goal of transforming IRL terms into ERL terms. This process will be illustrated in the next subsection.

Table IV-1: Operationalization Techniques

Technique                   Replacement Rule
Concept-preserving transforms:
  Definitional Expansion-1  Replace A with B if A ↔ B
  Enumeration               Replace (∀x) f(x) with f(A) ∧ f(B) ∧ ...
  Redundancy Elimination    Replace (A ∧ (B ∧ A)) with (B ∧ A)
  Constraint Propagation    Replace c(X, f(Y)) with c(Z, Y), where Z = f⁻¹(X)
Concept-specializing transforms:
  Definitional Expansion-2  Replace A with B if B → A
  Conjunct Addition         Replace (A ∧ B) with (A ∧ B ∧ C)
  Disjunct Selection        Replace (A ∨ B ∨ C) with B
  Instantiation             Replace ∃x f(x) with f(A)
Concept-generalizing transforms:
  Conjunct Deletion         Replace (A ∧ B ∧ C) with (A ∧ C)
  Definitional Expansion-3  Replace A with B if A → B

It is useful to view the operationalization process in terms of a heuristic search through a space of concepts, where states in the space are recognizers and operators are operationalizing transforms. The starting state represents the initial IRL recognizer. The goal states contain ERL recognizers that identify the same positive instances as the initial IRL recognizer. (Failing this, the goal states are those that identify the largest number of positive instances.) There are various pieces of knowledge that can be used to guide this search. Broadly, these include:

- Knowledge about the effect of a transform on the set of instances recognized as positive. Any transform which modifies this set should be avoided. Ex: If only concept-preserving transforms are used during concept operationalization, the set of instances recognized by the final ERL recognizer is guaranteed to be identical to the set recognized by the initial IRL recognizer. On the other hand, concept-specializing transforms reduce the set recognized as positive, while concept-generalizing transforms enlarge the set so that some negative instances will be falsely included as positive. Concept-generalizing transforms are therefore the most dangerous type to apply during concept operationalization.

- Definitional knowledge about which IRL terms have ERL expansions. This knowledge can be used in means-ends guidance. Ex: A definition that expresses GOAL in terms of MATCH translates between the IRL and the ERL. Re-expressing parts of the original IRL recognizer in terms of GOAL is therefore a useful subtask.

- Examples of transformation sequences that have resulted in useful ERL recognizers in the past. These can be used to focus the search. Ex: To provide guidance in the selection and expansion of clauses in an IRL recognizer, it may be possible to use a previously-formulated transformation sequence as a kind of macro-operator. In this way, the construction of a new sequence could be guided by previous experience.

- Domain-specific knowledge that can be used to prevent the over-specialization of a recognizer. Ex: Any recognizer containing the following instantiated definitional expansion of SOLVABLE (defined in Figure III-1) is legal, but identifies no instances at all:

  GOAL(Successor-state(OP3, Successor-state(OP1, state)))
We know that the use of this expansion causes over-specialization because our knowledge about integration operators informs us that OP3 cannot actually be applied to any state produced by OP1.

- Knowledge about the goals and constraints of the learning system. This type of knowledge can serve to justify the use of a concept-altering transform. Ex: If the predicate Red(x) is not in our ERL, it may be necessary to perform Conjunct Deletion to remove it from the expression Red(x)∧Round(x)∧Heavy(x). To justify the use of this concept-generalizing transform, it may be helpful to know that (i) the goal of the system in question involves (for example) learning how to make efficient use of mechanical equipment, and that (ii) color does not generally affect mechanical properties.

B. Operationalization techniques applied to LEX

To more fully illustrate the techniques introduced in Section IV.A, we would like to illustrate operationalization in the context of a specific hand-generated example. In particular, consider the task of operationalizing LEX's IRL recognizer for productive operator applications (POS-INST-0 in Figure III-1). One possible transformation sequence is depicted in Figure IV-2.

Figure IV-2: Operationalization of POS-INST-0

POS-INST-0 (from Figure III-1)
  |  Via Disjunct Selection on ¬APPLICABLE
  v
POS-INST-1(op,state) ←
  ¬GOAL(state)
  ∧ APPLICABLE(op,state)
  ∧ SOLVABLE(Successor-state(op,state))
  ∧ [(∀otherop ∈ operators | otherop ≠ op) ¬APPLICABLE(otherop,state)]
  |  Via Definitional Expansion-1 on SOLVABLE
  v
POS-INST-2(op,state) ←
  ¬GOAL(state)
  ∧ APPLICABLE(op,state)
  ∧ [GOAL(Successor-state(op,state))
     ∨ ∃opx (APPLICABLE(opx, Successor-state(op,state))
             ∧ SOLVABLE(Successor-state(opx, Successor-state(op,state))))]
  ∧ [(∀otherop ∈ operators | otherop ≠ op) ¬APPLICABLE(otherop,state)]
  |  Via Disjunct Selection on GOAL
  v
POS-INST-3(op,state) ←
  ¬GOAL(state)
  ∧ APPLICABLE(op,state)
  ∧ GOAL(Successor-state(op,state))
  ∧ [(∀otherop ∈ operators | otherop ≠ op) ¬APPLICABLE(otherop,state)]
  |  Via Instantiation of op, Enumeration of ¬APPLICABLE
  v
POS-INST-4(OP1,state) ←
  ¬GOAL(state)
  ∧ APPLICABLE(OP1,state)
  ∧ GOAL(Successor-state(OP1,state))
  ∧ ¬APPLICABLE(OP2,state)
  ∧ ¬APPLICABLE(OP3,state)
  ...
  ∧ ¬APPLICABLE(OPN,state)
  |  Via Definitional Expansion-1 on GOAL and APPLICABLE (in terms of MATCH)
  v
POS-INST-5(OP1,state) ←
  ¬MATCH(expr-with-no-∫, state)
  ∧ MATCH(∫sin(x)dx, state)
  ∧ MATCH(expr-with-no-∫, Successor-state(OP1,state))
  ∧ ¬MATCH(∫k·f(x)dx, state)
  ∧ ¬MATCH(∫f1(x)·f2'(x)dx, state)
  ...
  ∧ ¬MATCH(Preconditions(OPN), state)
  |  Via Constraint Propagation
  |    [MATCH(expr-with-no-∫, Successor-state(OP1,state))
  |       ==> MATCH(f(x) + ∫sin(x)dx, state)]
  |  and Redundancy Elimination
  v
POS-INST-6(OP1,state) ← MATCH(f(x) + ∫sin(x)dx, state)

To motivate this operationalization sequence, consider applying means-ends analysis to the task. In order to arrive at an ERL recognizer, it is necessary to apply a technique that translates IRL predicates into ERL predicates. The only such technique available is Definitional Expansion. Among those predicate definitions that translate between the IRL and the ERL are:

APPLICABLE(op,state) ↔ MATCH(Preconditions(op), Expression(state)), and
GOAL(state) ↔ MATCH(term-with-no-∫, Expression(state)).

The transformation sequence shown can be viewed as an attempt to re-express POS-INST-0 solely in terms of APPLICABLE and GOAL. This has been accomplished in POS-INST-4. From this point, it is easy to move into the ERL.
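Viewed as rewrites over expression trees, the entries of Table IV-1 are mechanical. The sketch below, again using our invented nested-tuple encoding rather than anything from LEX, implements two of them: one concept-specializing (Disjunct Selection) and one concept-generalizing (Conjunct Deletion).

```python
# A sketch of two Table IV-1 replacement rules as tree rewrites.

def disjunct_selection(expr, keep):
    """Concept-specializing: replace (A v B v C) with one chosen disjunct."""
    assert expr[0] == "or"
    return expr[1 + keep]

def conjunct_deletion(expr, drop):
    """Concept-generalizing: replace (A ^ B ^ C) with the conjunction minus one."""
    assert expr[0] == "and"
    args = [a for i, a in enumerate(expr[1:]) if i != drop]
    return args[0] if len(args) == 1 else ("and", *args)

r = ("and", ("MATCH", "int(sin(x))dx"),
            ("or", ("GOAL",), ("SOLVABLE",)))
r = ("and", r[1], disjunct_selection(r[2], keep=0))   # keep the GOAL disjunct
print(r)                    # ('and', ('MATCH', 'int(sin(x))dx'), ('GOAL',))
print(conjunct_deletion(r, drop=1))                   # ('MATCH', 'int(sin(x))dx')
```

Each rewrite either preserves, shrinks, or enlarges the recognized set in the predictable way noted in Table IV-1, which is what makes search through this space of recognizers controllable.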
The final step of the sequence produces POS-INST-6, which recognizes the application of OP1 to STATE as a productive operator application whenever ∫sin(x)dx appears within the numeric expression contained in STATE.

While LEX does not actually represent the operationalizing transforms of Table IV-1 explicitly, there is a close relationship between the operationalization sequence in Figure IV-2 and its counterpart in LEX. In particular, the process (explained in Section II) that LEX goes through in analyzing a single positive instance can be viewed as the application of a sequence of operationalizing transforms. This sequence changes POS-INST-0 into a very specific ERL recognizer that identifies only the single instance. However, the transformation sequence can then be used as a template for the construction of a new sequence leading to a more general recognizer. By using the equivalent of a template, the LEX implementation eliminates much of the search process inherent in operationalization. Selection and expansion of clauses is carried out in accordance with the template, and Constraint Propagation and Redundancy Elimination are automatically invoked after IRL predicates have been translated into the ERL. The use of concept-generalizing transforms is avoided altogether. These pre-compiled decisions have made the LEX operationalization task manageable, and have permitted us to avoid some difficult control issues that arise in the full-blown concept operationalization scenario described in Section IV.A.

V Conclusions

The operationalization techniques described in this paper are not particularly novel; similar methods have been applied to automated design and synthesis tasks (see, for example, work in automatic programming [4, 1]). What is different about our approach, along with Mostow's [9], is the explicit application of these techniques in the context of learning and problem solving [7, 6]. We are beginning to acknowledge that design activities are intimately related to learning abilities, and that the ability to use one's knowledge appropriately to achieve a particular goal (i.e. to design a solution to a problem) is a fundamental learning skill.

It is clear that much work remains to be done in the area of controlling the search for solutions to the concept operationalization task. In particular, we need to understand how various sources of knowledge can be used not only to guide, but also to justify, the operationalization process. Justification involves the defense of operationalization decisions based on goals, constraints and preferences operating within the concept learning environment. Based on the nature of recent research in machine learning, more attention to such environmental factors is central to progress in this field.

VI Acknowledgments

I would like to thank Tom Mitchell for his significant contribution to the ideas expressed in this paper, and for his help and encouragement. Many of my ideas have also been clarified by discussions with Smadar Kedar-Cabelli. I thank Paul Utgoff for his assistance in implementation and for several careful readings of earlier drafts of this paper. John Bresina, Donna Nagel, Don Smith, and Peter Spool also supplied useful comments.

References

[1] Barstow, D. R., Automatic Construction of Algorithms and Data Structures Using a Knowledge Base of Programming Rules, Ph.D. dissertation, Stanford University, November 1977.
[2] Carbonell, J. G., Michalski, R. S. and Mitchell, T. M., "An Overview of Machine Learning," Machine Learning, Michalski, R. S., Carbonell, J. G. and Mitchell, T. M. (Eds.), Tioga, 1983.
[3] Hayes-Roth, F., Klahr, P., Burge, J. and Mostow, D. J., "Machine Methods for Acquiring, Learning, and Applying Knowledge," Technical Report R-6241, The RAND Corporation, 1978.
[4] Manna, Z. and Waldinger, R., "Knowledge and Reasoning in Program Synthesis," Studies in Automatic Programming Logic, Manna, Z. and Waldinger, R. (Eds.), North-Holland, 1977.
[5] Mitchell, T. M., Utgoff, P. E. and Banerji, R. B., "Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics," Machine Learning, Michalski, R. S., Carbonell, J. G. and Mitchell, T. M. (Eds.), Tioga, 1983.
[6] Mitchell, T., "Learning and Problem Solving," Proceedings of IJCAI-83, Karlsruhe, Germany, August 1983.
[7] Mitchell, T. M. and Keller, R. M., "Goal Directed Learning," Proceedings of the Second International Machine Learning Workshop, Urbana, Illinois, June 1983.
[8] Mostow, D. J., Mechanical Transformation of Task Heuristics into Operational Procedures, Ph.D. dissertation, Carnegie-Mellon University, 1981.
[9] Mostow, D. J., "Machine Transformation of Advice into a Heuristic Search Procedure," Machine Learning, Michalski, R. S., Carbonell, J. G. and Mitchell, T. M. (Eds.), Tioga Press, Palo Alto, 1983.
[10] Utgoff, P. E. and Mitchell, T. M., "Acquisition of Appropriate Bias for Inductive Concept Learning," Proceedings of the Second National Conference on Artificial Intelligence, Pittsburgh, August 1982.
[11] Winston, P. H., "Learning Structural Descriptions from Examples," The Psychology of Computer Vision, Winston, P. H. (Ed.), McGraw-Hill, New York, 1975, ch. 5.
[12] Winston, P. H., Binford, T. O., Katz, B. and Lowry, M., "Learning Physical Descriptions from Functional Definitions, Examples, and Precedents," Third National Conference on Artificial Intelligence, Washington, D.C., 1983.
Episodic Learning

Dennis Kibler
Bruce Porter
Information and Computer Science Department
University of California at Irvine
Irvine, California

Abstract

A system is described which learns to compose sequences of operators into episodes for problem solving. The system incrementally learns when and why operators are applied. Episodes are segmented so that they are generalizable and reusable. The idea of augmenting the instance language with higher level concepts is introduced. The technique of perturbation is described for discovering the essential features of a rule with minimal teacher guidance. The approach is applied to the domain of solving simultaneous linear equations.

1. Introduction

With the aid of a teacher, junior high school students can learn to solve simultaneous linear equations. Operators that are applied in solving these problems include multiplying an equation by a constant and combining like terms. The students are already familiar with how these operators are applied. Moreover, the teacher assumes that the students understand basic concepts about numbers, such as a number being positive, negative, or non-zero.

Our system, nicknamed PET2, incrementally (defined in [7]) induces correct rules from the training instances presented.1 The rules are correct in the sense that at any point in the learning process:
- the knowledge is consistent with all past training instances.
- sequences of rules (episodes) are guaranteed to simplify the problem state if they apply.

Learning rules for applying operators involves two stages of learning:
- Stage 1 learning involves understanding when each available operator should be applied. The concern here is with learning the enabling conditions for individual operators, without knowledge of the other operators in the solution path to provide context.
- Stage 2 learning involves understanding why each operator is applied, with emphasis on the sequencing of operators. We refer to this as episodic learning. Episodic segmentation is the grouping of operators to form an episode. Episodes are discrete, reusable components for plan generation and each simplifies the problem state.

The main features of our approach to episodic learning are:
- segmentation of operator sequences into meaningful, reusable episodes.
- augmentation of the instance language to include higher-order concepts not present in the training instance itself.
- perturbation of a training instance to create new instances.

1This research was supported by the Naval Ocean System Center under contract NO6123-81-C-1165.

2Please refer to [4] for a complete description of our approach, including a PDL description of the learning algorithm and multiple examples of its use. PET is implemented in Prolog on a DEC 2020. Available upon request.

2. Related Work

Related work in stage 1 learning includes Neves's [10] system, which learned to solve one equation in one unknown from textbook traces. The system learned both the context (preconditions) of an operator as well as which operator was applied, although the operator had to be known to the system. His generalization language was simpler than ours in that a constant could only be generalized to a variable. The program LEX [8] uses version spaces to describe the current hypothesis space as well as concept trees to direct or bias the generalizations. As it is not the main point of our work, we keep only the minimal (maximally specific) generalization [11] of the examples.

MACROPS [2] is an example of stage 2, or episodic, learning.
This system remembers robot plans that have been generated so that a plan can be reused without re-generation. The plans are stored in triangle tables which record the order of application of operators in the plan and how their pre-conditions are satisfied. The plans are generalized to be applicable to other instances (as are episodes). While effective in learning plans, MACROPS has difficulty applying its acquired knowledge [1]. The central problem is that the operators in a MACROPS plan are not segmented into meaningful sequences. Any sequence of operators can be extracted from the triangle table and reused as a macro operator. A sequence of length N defines N(N-1)/2 macros. However, few of these sequences are useful. MACROPS offers no assistance in selecting the useful sequences from a plan. If sequences are not extracted from the triangle table, then the entire plan must be considered an episode. This results in a large collection of opaque, single-purpose macro operators. Branching within an episode is made impossible. In either case, combinatorial explosion makes planning with the macros impractical.3

3It should be recognized that MACROPS was designed to control a physical robot, not a simulation. For this reason, the designers thought it important to permit the planner to skip ahead in a plan if the situation permits, or to repeat a step in a plan if the operation failed due to physical difficulties.

3. Operators for Solving Linear Equations

The operators applicable to solving simultaneous linear equations are described in the following table:

Operator         Semantics
combinex(Eq)     Combine x-terms in Eq.
combiney(Eq)     Combine y-terms in Eq.
combinec(Eq)     Combine constant terms in Eq.
deletezero(Eq)   Delete term with 0 coefficient or 0 constant from Eq.
sub(Eq1,Eq2)     Replace Eq2 by the result of subtracting Eq1 from Eq2.
add(Eq1,Eq2)     Replace Eq2 by the result of adding Eq1 and Eq2.
mult(Eq,N)       Replace Eq by the result of multiplying Eq by N.

4. Description Languages

4.1. Instance Language

The instance language serves as an "internal form" for training instances. We adopt a relational description of each equation, so the training instance:

a: 2x-5y=-1
b: 3x+4y=10

is stored as:

{term(a,2*x), term(a,-5*y), term(a,1),
 term(b,3*x), term(b,4*y), term(b,-10)}

where a,b are equation labels and x,y are variables in the instance language.

4.2. Generalization Language

Following Mitchell [8] and Michalski [6] we have concept trees for integers, equation labels, and variables (figure 4-1). Basically we are using the typed variables of Michalski [6].

Figure 4-1: Concept trees

integer                          label      variable
  non-zero                        a  b       x  y
    positive: 1, 2, 3, ...
    negative: -1, -2, ...
  zero: 0

We permit generalizations by (1) deleting conditions, (2) replacing constants by (typed) variables, and (3) climbing-tree generalization. Disjunctive generalization is allowed by adding additional rules. This covers all the generalization rules discussed by Michalski [6] except for closed interval generalization.

4.3. Rule Language

Knowledge is encoded in rules which suggest operators to apply. Rules are of the form:

<score> -- <bag of terms expressed in generalization language> => <operator>

(The score of a rule is described in section 6.2.) PET does not learn "negative" rules to prune the search tree, as in [3].
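PET itself is written in Prolog; the following Python sketch merely illustrates the relational instance encoding just described, with a hypothetical to_terms helper and a tuple format of our own choosing (the constant is moved to the left-hand side, so a: 2x-5y=-1 contributes a constant term of 1).

```python
# A sketch of the instance language as a set of relational terms.

def to_terms(label, x, y, c):
    """Encode  x·X + y·Y = c  with everything moved to the LHS."""
    return {(label, x, "x"), (label, y, "y"), (label, -c, "const")}

instance = to_terms("a", 2, -5, -1) | to_terms("b", 3, 4, 10)
print(sorted(instance))
# [('a', -5, 'y'), ('a', 1, 'const'), ('a', 2, 'x'),
#  ('b', -10, 'const'), ('b', 3, 'x'), ('b', 4, 'y')]
```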
5. Perturbation

Perturbation is a technique for stage 1 learning which enables a learning system to discover the essential features of a rule with minimal teacher involvement. A perturbation of a training instance is created by:
- deleting a feature of the instance to determine whether its presence is essential.
- if a feature is essential, modifying it slightly to determine if it can be generalized.

Perturbation operators, which are added to the concept tree used for generalization, make these minor modifications. For example, given the problem

(a) 2x+3y=7
(b) 2x+3x-5y=5,

the advice to combinex(b), and an empty rule base, PET first describes the rule as:

{term(a,2*x), term(a,3*y), term(a,-7),
 term(b,2*x), term(b,3*x), term(b,-5*y), term(b,-5)} => combinex(b).

Now PET perturbs the instance by modifying each of the coefficients individually. This is done by zeroing, incrementing, and decrementing each coefficient; a sketch of this procedure follows.
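As a minimal sketch of the zero/increment/decrement scheme over the relational encoding above (the perturbations function and tuple format are ours; PET's re-test of the advised operator on each variant is stubbed out):

```python
# Each coefficient is zeroed, incremented, and decremented in turn,
# generating near-examples and near-misses with no extra teacher input.

def perturbations(terms):
    terms = sorted(terms)
    for i, (label, coeff, sym) in enumerate(terms):
        for new in (0, coeff + 1, coeff - 1):
            variant = list(terms)
            variant[i] = (label, new, sym)
            # drop terms whose coefficient was zeroed, e.g. 0*x
            yield [t for t in variant if t[1] != 0]

instance = {("a", 2, "x"), ("a", 3, "y"), ("a", -7, "const"),
            ("b", 2, "x"), ("b", 3, "x"), ("b", -5, "y"), ("b", -5, "const")}
print(sum(1 for _ in perturbations(instance)))   # 21 variants: 7 terms x 3
```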
By applying the operator, the preconditions of an existing rule are satisfied. The rule being formed for the operator is then loosely linked with the rule that the operator enables. If more than one rule is enabled, then multiple branches through the episode are allowed. PET adds a rule to the rulebase when the purpose of the rule's action (the "why" component of the operator) is understood. The first operators for which rules can be learned are those which simplify the problem state, such as combining like terms. Any operators applied before combine cannot be understood and PET must "bear with" the teacher. After rules are formed for the combine operators, subtract can be learned. For instance, sub(a,b) applied to: a: 2x+3y=5 b: 2x+=1 yields: a: 2x+3y=5 b: 2x-2x -ly-3y=l-5 Now PET can learn sub(a,b) for reason (2) above: a rule which is already understood (for a combine operator) is enabled by the subtraction. An episode can be formed connecting the rules for sub(a,b) and combine(b). 6.2. Scoring Operators A simple scoring scheme connects rules into episodes and resolves conflicts when more than one rule is enabled. A natural scheme is to score each rule by its position in an episode. The rules for the ccmbine operators are given a score of 0. The rule for subtract, which enables a ccnnbine operator, is given a score of 1 (O+l). Intuitivelyr the score is the length of the episode before something good happens (i.e. the equations get simplified). When selecting a rule, PET selects the one with the lawest score among those enabled. Ties are resolved arbitrarily. 4 "Understand" is used here to mean "know-how" encoded in production rules. We do not mean to suggest a deep model of understanding which might include causality and analogy. 193 7. Augmentation A description in the instance language is basically a translation of a training instance. This description is in an instance language more appropriate to computation than the surface language used to input the instance. In complex domains, more knowledge needs to be represented than is captured in a literal translation of a training instance into the instance language. In the domain of algebra problems, augmentation serves to relate terms in the instance language. For example, a relevant relation between coefficients is productof(N,M,P) (the product of N and M is P). Augmentation of the instance language is necessary when the terms or values necessary for an operation (the RHS of a rule) are not present in the pre-conditions for the operation (the LHS of the rule). For example, consider the training instance: a: 6x-15y=-3 b: 3x+4y=lO with the teacher advice to apply mult(b,2). The problem is that the rule formed by PET will have a 2 on the RHS (for the operation) but no 2 on the LHS. In this case, we say that the rule is not predictive of the operator. An augmentation of the instance language is needed to relate the 2 on the PHS with some term on the LHS. In this case, the additional knowledge needed is the 3-ary predicate productof, specifically productof(2,3,6). Now the rule to cover the training instance can be formed (after perturbation and augmentation): {term(a,G*x),term(b,3*x), productof(2,3,6)) => mult(a,2) Concepts in the augmentation language form a second-order search space (figure 7-1) for generalizing an operation. The space consists of a (partial) list of concepts that a student might rely on for understanding relations between numbers. 
When a predictive rule cannot be found in the first-order search space then PET tries to form a rule using the augmentation as well. Concepts are pulled from the list and added to a developing rule. If the concept makes the rule predictive, then it is retained. Otherwise, it is removed and another concept is tried. If no predictive rule can be found then PET ignores the training instance. sumof(L,M,N) (sum of L and M is N) productof(L,M,N) (product of L and M is N) squareof(~,~) (square of M is N) Figure 7-l: augmentation search space Vere [12] has also addressed the problem of learning in the presence of "background information." For example, learning a general rule for a straight in a poker hand requires knowledge of the next number in sequence. This is considered background to the knowledge in the poker domain. Vere describes an 'association chain" which links together each term in a rule. If a term in the rule is not linked in the chain (analogous to our test for predictiveness), then more background information must be "pulled in" until it is associated. Augmentation is similar to selecting background knowledge. One problem with both approaches is determining how much background knowledge to incorporate. Incorporating too little kncwledge, which results in an over-generalized rule, can be detected by an association chain violation or' in PET, by a non-predictive rule. However, detecting when too much knowledge has been pulled in is difficult. In this case, the rule formed will be over-specialized. We overcome this problem (to a large extent) by perturbation. Vere relies solely on forming a disjunction of rules (each overly specialized) for the correct generalization. Vere allows only one concept in the background knowledge. This further simplifies the task of knowing haw much knowledge to pull in. However, as the complexity of problem domains increase, more background knowledge must be brought to bear. Our augmentation addresses some of the problems of managing this knowledge. 8. Examples of System Performance This section discusses highlights from PET's episodic learning for problem solving in the domain of linear equations. 8.1. Example l-Learning Combine The rulebase is initially empty and, as PET learns, rules are added, generalized, and supplanted. PET requests advice whenever the current rules do not apply to the problem state. The teacher presents the training instance: a: 2x+3y=5 b: 2x+4y=6 with the advice sub(a,b). PET applies the operator which yields: a: 2x+3y=5 b: 2x-2x+4y-3y=6-5 PET must understand why an operator is useful before a rule is formed. The operator failed to simplify the equations (in fact the number of terms in the equations went from six to nine) and did not enable any other rules (since the rulebase is empty). PET cannot form a rule and waits for something understandable to happen. The teacher now suggests that combinex(b) be applied, yielding: a: 2x+3y=5 b: Ox+4y-3y=6-5 The number of terms is reduced, so PET hypothesizes a rule. Perturbation tests each term in the equations to determine which are essential and which can be generalized. PET forms the rule: kern'hpos (N) *xl , term(b,neg(M)*x)) => combinex(b) which means: 194 given a problem state, whenever equation b contains an x-term with a positive coefficient and an x-term with a negative coefficient, then combine the two terms. PET is unable to apply current knowledge (i.e. 
the rule for combinex(b)) to the current problem state so the teacher suggests combiney(b) which yields: a: 2x+3y=5 b: Ox+ly=6-5 Stage 1 learning produces the rule: {term b,pos (N) 9) I term(b,neg(M)*y)) => c&iney(b) This rule cannot be generalized rulelist and is simply added. with the current Learning rules for the operators ccrmbinec(b) and deletezero are similar and will be assumed to be completed. Stage 2 learning of the combine operators involves relating them to episodes, or sequences of rules. Since combine simplifies a problem state immediately, it is given a score of zero. The current rulelist (with scores) is: 0 -- {term(b,pos(N)*x), term(b,neg(M)*x)} => combinex(b) O- {term bps (NJ 9) I term(b,neg(M)*y)) => combiney(b) 0 -- bmb,posW 1 I term(b,neg(M)) => combinec(b) 0 -- (term(b,O*x)) => deletezero 0 -- (term(b,O)) => deletezero With further training instances for the combine operators, PET forms the rules: 0--{term(egn(L),int(N)*x), term(egn(L),int(M)*x))=>ccPnbinex(egn(L)) O-hmn(eqn(L) ,int(N) %I, term(egn(L)lint(M)%)]=>ccmbiney(egn(L)) 0-(term(eqn(L),int(N)), term(egn(L),int(M)) => combinec(egn(L)) 0-(term(egn(L),O*var(X))} => deletezero(egn(L)) 0--(term(egn(L),O)) => deletezero(egn(L)) 8.2. Example a--Learning Subtract Now that the ccmbine operators are partially learned, PET can begin to learn subtract. Since our episodes are loosely packaged, there is not a problem with further generalizing the rules for the combine operators once subtract is affiliated with them. Learning for each operator can proceed independently. Naw the training instance above which PET had to ignore can be understood. The instance with teacher advice sub(a,b) is: 'deletezero (b) could also be suggested, but we continue with a combine operator for continuity. a: 2x+3y=5 b: 2x+4y=6 The operator enables the partially-learned rule for combinex(b), so PET employs perturbation to form the rule: {term(a,2*x),term(b,2*x), term(b,pos(N)*'y) 1 => sub(a,b) The rule is assigned a score of 1, loosely linking it with the rules for combine and deletezero. Presented with induces the rule: further training instances, PET (term(egn(Ll),nonzero(N)*var(X)), term(eqn(L2),nonzero(M)*var(X)), term(egn(L2),nonzero(O)*var(Y))) => =Jweqmu ,eqnW) 1 8.3. Example 3-Learning Multiply The teacher presents PET with an example of multiply with the training instance: a: 3x+4y=7 b: 6x+2y=8 No rule in the kncwledge base applies, so PET requests advice. mult(a,2) is suggested which yields: a: 6x+8y=14 b: 6x+2y=8 Now sub(a,b) applies so the rule for mult(a,2) can be learned. Mult(a,2) is given a score of 2 (one more than the score of the rule enabled). After perturbation PET forms the rule: {term(a,3*x),term(b,6*X)r term(b,pos(N)%)) => mult(a,2) At this point PET realizes that it has over-generalized since the rule is non-predictive (the 2 on the RHS does not occur on the LHS). PET augments the instance description and forms the candidate rule: (term(a,3*x),term(b,6*x),term(b,pos(N)%), productof(2,3,6)} => mult(a,2) After additional examples, PET forms the correct rule: 2 -- (term(a,pos(K)*x),term(b,pos(L)*x) I termbpos 04 9) r productof(pos(N) ,pos(K) ,pos(L) 1 => mult(a,pos(N)) which supplants rulebase. the more specific rule in the 8.4. Example 4-Learning "Cross Multiply" The teacher presents the training instance: Since awly a: 2xt6y=8 b: 3x+4y=7 no rule is enabled, mult(a,3) yields: the teacher advice to a: 6x+18y=24 b: 3x+ 4y=7 195 Since mult(b,2) is enabled, mult(a,3) can be learned. 
8. Examples of System Performance

This section discusses highlights from PET's episodic learning for problem solving in the domain of linear equations.

8.1. Example 1: Learning Combine

The rulebase is initially empty and, as PET learns, rules are added, generalized, and supplanted. PET requests advice whenever the current rules do not apply to the problem state. The teacher presents the training instance:

a: 2x+3y=5
b: 2x+4y=6

with the advice sub(a,b). PET applies the operator, which yields:

a: 2x+3y=5
b: 2x-2x+4y-3y=6-5

PET must understand why an operator is useful before a rule is formed. The operator failed to simplify the equations (in fact the number of terms in the equations went from six to nine) and did not enable any other rules (since the rulebase is empty). PET cannot form a rule and waits for something understandable to happen. The teacher now suggests that combinex(b) be applied, yielding:

a: 2x+3y=5
b: 0x+4y-3y=6-5

The number of terms is reduced, so PET hypothesizes a rule. Perturbation tests each term in the equations to determine which are essential and which can be generalized. PET forms the rule:

{term(b,pos(N)*x), term(b,neg(M)*x)} => combinex(b)

which means: given a problem state, whenever equation b contains an x-term with a positive coefficient and an x-term with a negative coefficient, then combine the two terms. PET is unable to apply current knowledge (i.e. the rule for combinex(b)) to the current problem state, so the teacher suggests combiney(b)5, which yields:

a: 2x+3y=5
b: 0x+1y=6-5

Stage 1 learning produces the rule:

{term(b,pos(N)*y), term(b,neg(M)*y)} => combiney(b)

This rule cannot be generalized with the current rulelist and is simply added. Learning rules for the operators combinec(b) and deletezero is similar and will be assumed to be completed.

5deletezero(b) could also be suggested, but we continue with a combine operator for continuity.

Stage 2 learning of the combine operators involves relating them to episodes, or sequences of rules. Since combine simplifies a problem state immediately, it is given a score of zero. The current rulelist (with scores) is:

0 -- {term(b,pos(N)*x), term(b,neg(M)*x)} => combinex(b)
0 -- {term(b,pos(N)*y), term(b,neg(M)*y)} => combiney(b)
0 -- {term(b,pos(N)), term(b,neg(M))} => combinec(b)
0 -- {term(b,0*x)} => deletezero
0 -- {term(b,0)} => deletezero

With further training instances for the combine operators, PET forms the rules:

0 -- {term(eqn(L),int(N)*x), term(eqn(L),int(M)*x)} => combinex(eqn(L))
0 -- {term(eqn(L),int(N)*y), term(eqn(L),int(M)*y)} => combiney(eqn(L))
0 -- {term(eqn(L),int(N)), term(eqn(L),int(M))} => combinec(eqn(L))
0 -- {term(eqn(L),0*var(X))} => deletezero(eqn(L))
0 -- {term(eqn(L),0)} => deletezero(eqn(L))

8.2. Example 2: Learning Subtract

Now that the combine operators are partially learned, PET can begin to learn subtract. Since our episodes are loosely packaged, there is not a problem with further generalizing the rules for the combine operators once subtract is affiliated with them. Learning for each operator can proceed independently. Now the training instance above which PET had to ignore can be understood. The instance, with teacher advice sub(a,b), is:

a: 2x+3y=5
b: 2x+4y=6

The operator enables the partially-learned rule for combinex(b), so PET employs perturbation to form the rule:

{term(a,2*x), term(b,2*x), term(b,pos(N)*y)} => sub(a,b)

The rule is assigned a score of 1, loosely linking it with the rules for combine and deletezero. Presented with further training instances, PET induces the rule:

{term(eqn(L1),nonzero(N)*var(X)),
 term(eqn(L2),nonzero(M)*var(X)),
 term(eqn(L2),nonzero(O)*var(Y))} => sub(eqn(L1),eqn(L2))

8.3. Example 3: Learning Multiply

The teacher presents PET with an example of multiply with the training instance:

a: 3x+4y=7
b: 6x+2y=8

No rule in the knowledge base applies, so PET requests advice. mult(a,2) is suggested, which yields:

a: 6x+8y=14
b: 6x+2y=8

Now sub(a,b) applies, so the rule for mult(a,2) can be learned. mult(a,2) is given a score of 2 (one more than the score of the rule enabled). After perturbation PET forms the rule:

{term(a,3*x), term(b,6*x), term(b,pos(N)*y)} => mult(a,2)

At this point PET realizes that it has over-generalized, since the rule is non-predictive (the 2 on the RHS does not occur on the LHS). PET augments the instance description and forms the candidate rule:

{term(a,3*x), term(b,6*x), term(b,pos(N)*y), productof(2,3,6)} => mult(a,2)

After additional examples, PET forms the correct rule:

2 -- {term(a,pos(K)*x), term(b,pos(L)*x), term(b,pos(M)*y),
      productof(pos(N),pos(K),pos(L))} => mult(a,pos(N))

which supplants the more specific rule in the rulebase.

8.4. Example 4: Learning "Cross Multiply"

The teacher presents the training instance:

a: 2x+6y=8
b: 3x+4y=7

Since no rule is enabled, the teacher advice to apply mult(a,3) yields:

a: 6x+18y=24
b: 3x+4y=7

Since mult(b,2) is enabled, mult(a,3) can be learned. After perturbation, PET acquires the rule:

{term(a,2*x), term(b,3*x), term(b,pos(N)*y)} => mult(a,3)

This rule is given a score of 3, since it enables a rule with score 2. The rule will be generalized (with subsequent training instances) to:

3 -- {term(eqn(I),nonzero(N)*var(X)),
      term(eqn(J),nonzero(M)*var(X)),
      term(eqn(J),nonzero(L)*var(Y))} => mult(eqn(I),nonzero(M))

9. Limitations and Extensions

As with most learning programs, we require that the concept to be learned be representable in our generalization language. In addition, PET has to be supplied with some coarse notion of when an operator has been effective in simplifying the current state. Furthermore, we assume that the teacher gives only appropriate advice and there is no "noise."

Extensions that we are considering are:
- Learning from negative instances as well as positive ones.
- Improving the use of augmentation by introducing structured concepts. These would permit climbing-tree generalizations for this second-order knowledge. Another improvement would be allowing multiple concepts to be pulled into a rule from the augmentation search space. This requires a requisite change in the test for predictiveness.
- Applying the theory to learning operators in other domains. Integration problems have been attempted [8]. We would like to try our approach in the calculus problem domain.

10. Conclusions

A system, PET, has been described which learns sequences of rules, or episodes, for problem solving. The learning is incremental and thorough. The system learns when and why operators are applied. Although PET starts with an extremely general and coarse notion of why an operator should be applied, its representation becomes increasingly fine and complete as it forms rules from examples. PET detects when rules are non-predictive and augments the generalization language with higher level concepts. Due to the power of perturbation, our system can learn episodes with minimal teacher interaction. The episodes are segmented into discrete, re-usable segments, each accomplishing a recognizable simplification of the problem state. The approach is shown effective in the domain of solving simultaneous linear equations. We suspect the technique will also work for solving problems in symbolic integration and differential equations.
REFERENCES

1. Carbonell, J.G. "Learning by Analogy: Formulating and Generalizing Plans from Past Experience." In Michalski, R.S., Carbonell, J.G., and Mitchell, T.M. (Eds.), Machine Learning, Tioga Publishing, 1983.
2. Fikes, R.E. and Nilsson, N.J. "STRIPS: A new approach to the application of theorem proving to problem solving." Artificial Intelligence 2 (1971), 189-208.
3. Kibler, D.F. and Morris, P.H. "Don't be Stupid." IJCAI (1981), 345-347.
4. Kibler, D.F. and Porter, B.W. "Episodic Learning." Technical Report 194, University of California, Irvine, 1983.
5. Kibler, D.F. and Porter, B.W. "Perturbation: A Means for Guiding Generalization." IJCAI (1983).
6. Michalski, R.S. and Dietterich, T.G. "Learning and Generalization of Characteristic Descriptions: Evaluation Criteria and Comparative Review of Selected Methods." IJCAI (1979), 223-231.
7. Michalski, R.S., Carbonell, J.G., and Mitchell, T.M. (Eds.), Machine Learning. Tioga Publishing, 1983.
8. Mitchell, T.M., Utgoff, P.E., Nudel, B., and Banerji, R. "Learning Problem-Solving Heuristics Through Practice." IJCAI (1981), 127-134.
9. Mitchell, T.M., Utgoff, P.E., Nudel, B., and Banerji, R. "Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics." In Michalski, R.S., Carbonell, J.G., and Mitchell, T.M. (Eds.), Machine Learning, Tioga Publishing, 1983.
10. Neves, D.M. "A computer program that learns algebraic procedures by examining examples and working problems in a textbook." CSCSI (1978), 191-195.
11. Vere, S.A. "Induction of concepts in the predicate calculus." IJCAI 4 (1975), 281-287.
12. Vere, S.A. "Induction of Relational Productions in the Presence of Background Information." IJCAI 5 (1977), 349-355.
13. Winston, P.H. "Learning structural descriptions from examples." In Winston, P.H. (Ed.), The Psychology of Computer Vision, McGraw-Hill, 1975.
OPERATOR DECOMPOSABILITY: A NEW TYPE OF PROBLEM STRUCTURE

Richard E. Korf
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, Pa. 15213

1. Author's current address: Department of Computer Science, Columbia University, New York, N.Y. 10027.

This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, and monitored by the Air Force Avionics Laboratory under Contract F33615-81-K-1539. The views and conclusions in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.

Abstract

This paper describes a structural property of problems that allows an efficient strategy for solving a large number of problem instances to be based on a small amount of knowledge. Specifically, the property of operator decomposability is shown to be a sufficient condition for the effective application of the Macro Problem Solver, a method that represents its knowledge of a problem by a small number of operator sequences. Roughly, operator decomposability exists in a problem to the extent that the effect of an operator on each component of a state can be expressed as a function of only a subset of the components, independent of the remaining state components.

1. Introduction

Fundamentally, learning is concerned with situations where we are interested in solving many instances of a problem, rather than just one instance. In that case, it may be advantageous to first learn a general strategy for solving any instance of the problem, and then apply it to each problem instance. This allows the computational cost of the learning stage to be amortized over all the problem instances to be solved. Such an approach will only be useful if there is some structure to the collection of problem instances such that the fixed cost of learning a single strategy plus the marginal cost of applying it to each problem instance is less than the cost of solving each instance from scratch. This paper presents one such structural property, that of operator decomposability. Operator decomposability is a sufficient condition for the success of macro problem solving, a learning and problem solving method first described in [2].

2. Macro Problem Solving

We begin with a brief description and example of macro problem solving taken from [2]. The reader familiar with that work may skip this section. Macro problem solving is partly based on the work of Sims [4] and Banerji [1]. Complete details of the problem solving and learning programs can be found in [3].

The Macro Problem Solver is a program that can efficiently solve a number of problems, including the Eight Puzzle, Rubik's Cube, and the Towers of Hanoi, without any search. For simplicity, we will consider the Eight Puzzle as an example. The problem solver starts with a simple set of ordered subgoals. In this case, each subgoal will be to move a particular tile to its goal position, including the blank "tile". The operators to be used are not the primitive operators of the problem space but rather sequences of primitive operators called macro-operators, or macros for short. Each macro has the property that it achieves one of the subgoals of the problem without disturbing any subgoals that have been previously achieved. Intermediate states occurring within the application of a macro may violate prior subgoals, but by the end of the macro all such subgoals will have been restored, and the next subgoal achieved as well.

The macros are learned automatically by a macro learning program. They are organized into a two-dimensional matrix called a macro table. Table 2-1 shows a macro table for the Eight Puzzle, corresponding to the goal state shown in Figure 2-1. A primitive move is represented by the first letter of Right, Left, Up, or Down. This is unambiguous since only one tile, excluding the blank, can be moved in each direction in a given state. Each column contains the macros necessary to move one tile to its correct position from any possible initial position, without disturbing previously positioned tiles. The order of the columns gives the solution order, or the sequence in which the tiles are to be positioned. The rows of the table correspond to the different possible starting positions for the next tile to be placed. Figure 2-1 also gives the names of the different tile positions.

Figure 2-1: Eight Puzzle Goal State

1 2 3
8   4
7 6 5

The algorithm used by the Macro Problem Solver is as follows: First, the position of the blank is determined, that position is used as a row index into the first column of the macro table, and the macro at that location is applied. This moves the blank to its goal position, which is the center in this case. Next, the number 1 tile is located, its position is used as a row index into the second column, and the corresponding macro is applied. This moves the 1 tile to its goal position and also returns the blank to the center. The macros in the third column move the 2 tile into its goal position while leaving the blank and 1 tiles in their proper places. Similarly, the 3, 4, 5, and 6 tiles are positioned by the macros in their corresponding columns. At this point, tiles 7 and 8 must be in their goal positions or else the puzzle cannot be solved.
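The control loop just described involves no search, only table lookup. The following is a minimal sketch of that loop under an invented encoding; the one-entry toy_table and toy_apply stand-ins are fabricated for illustration and are not a fragment of Table 2-1.

```python
# A sketch of the Macro Problem Solver's lookup-and-apply loop.
# macro_table[i] maps the current position of the i-th variable in the
# solution order to the macro (a string of R/L/U/D moves) that sends it
# home while preserving all earlier subgoals.

def solve(state, solution_order, macro_table, apply_move):
    """state maps tile -> position; returns the concatenated solution."""
    plan = []
    for i, tile in enumerate(solution_order):
        macro = macro_table[i].get(state[tile], "")   # "" if already placed
        for move in macro:
            state = apply_move(state, move)
        plan.append(macro)
    return "".join(plan)

# Fabricated illustration: a one-tile "puzzle" whose only macro is "U".
toy_table = [{2: "U"}]
def toy_apply(state, move):
    return {0: state[0] - 1} if move == "U" else state

print(solve({0: 2}, [0], toy_table, toy_apply))   # "U"
```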
Intcrmcdiate states occurring within the application of a macro may violate prior subgoals, but by the end of the macro all such subgoals will have been restored. and the next subgoal achicvcd as well. The macros arc lcarncd automatically by a macro learning program. ‘l‘hcy arc organized into a two-dimensional matrix cnllcd a macro /able. ‘I’ablc 2-1 shows a macro table for the Eight PuAc, corresponding to the goal state shown in Figure 2-1. A primiti\,c mo~c is rcprcscntcd by the first letter of Right, I,cft, Up. or Down. This is unambiguous since only one tilt. excluding the blank, can bc moved in cnch direction in a given state. F’ach column contains the macros ncccssarq to move one tilt to its correct position from any possible initial position without disturbing previously positioned tilts. ‘I’hc order of the columns give5 the solution order or the scqucncc in which the tiles arc to be positioned. ‘I’hc rows of the table correspond to the different possible starting positions for the next tilt to bc placed. Figure 2-1 also gives the names of the diffcrcnt tile posi:ions. 12 3 8 4 7 6 5 Iiigurc 2-1: I’ight PuAc Goal State The algorithm used by the Macro Problem Solver is as follows: First, the position of the blank is dctcrmincd, that position is used as a row index ;nto the first column of the macro tAlc, and the macro at that location is applied. This moves the blank to its goal position. which is the ccntcr in this case. Next. the number 1 tilt is located, its position is used as a row index into the second column. and the corresponding macro is applied. This moves tllc 1 rile to its goal position and also returns the blank to the ccntcr. ‘l’hc macros in the third column move the 2 tilt into its goal position while Ica~ing the blank and 1 tiles in their proper places. Similarly, the 3, 4, 5, and 6 tilts XC positioned by the macros in their corresponding columns. At this point. tiles 7 and 8 must bc in their go~1 positions or clsc the pu/~lc cannot bc sol\ cd. ‘l‘hc lower From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. triangular iornl of oh\: t:lbic is due to rhc f,~cr th,~t ;F, tilts .IIY moved to their 2:>,11 pocition\. !h:rc arc fcwcr pAtions th,lt the rcm,lining tiles can occ~lpy. In tiidtlition to tlls ITight l’~u/irlc, this technique h,ls been applied to tllc 1 iftecn l’u/Ac. Ru+ik’s Cube. the ‘I‘owcrs c:f llan!)i problem. and the ‘l’hink-A-l)c;t Inachinc. ‘I‘hc key fcatulc that m:ikcs this method uscfi~l iz that only ;: \m.ll! number of macros ‘II’C‘ nccdcd in order to cfficicntly ~01~~ -, . i a large numhcr of problem instances. In the cast of the Fight l~uz~lc, all 18 1.410 problem instances can be sc:lvcd without an) starch using onl} 35 n)acl’os. Similarly, for Rubik’s Cube, al] 4~10’~ problem instances can hc solved without search using only 238 macros. l’hc rcmaindcr of’ this paper prcscnts the struc.tural property of these problems that makes this savings possible. We hcgin with an abstract rcprcscniation oftllc. cx,~nil~lc problems. 3. State-Vector Representation A htatc of a probicln is qccificd by givirlg the Villllcs of a Vector of SLitC V,lri,lhlcs. t:or Cxamplc. the :;tatC V;lri;lbl~~s for [hc I:ighl t’u/./]c are tltc nine diffcrcnt tilt\ of’thc pu//lc, includin~~ b the blank. ;rnd [hc values ;IrL‘ the positions occupied by cnch tilt in ;i particular st;~tc. 
For Rubik's Cube, the variables are the different individual movable pieces, called cubies, and the values encode both the positions of the cubies and their orientations. In the case of the Towers of Hanoi, the variables are the disks, and the values are the pegs that the disks are on.

For each problem, a single goal state is specified by assigning particular values to the state variables, called their goal values. Furthermore, we include in the problem space only those states which are solvable in the sense that there exists a path from each state to the goal state. A problem instance then becomes a combination of a problem and a particular initial state.

The operators of the problem space are functions which map state vectors into state vectors. Formally, all operators are total functions in that they apply to all states. We adopt the convention that if a state does not satisfy the preconditions of an operator, then the effect of that operator on that state is to leave it unchanged.

4. Macro Table Definition

When we examine the macro table for the Eight Puzzle (Table 2-1), we notice that the first column contains eight entries. There is one macro for each possible position that the blank could occupy (other than its goal position) in the initial state, or one macro for each possible value of the first state variable. Thus, the choice of what macro to apply first depends only on the value of the first state variable. Another way of looking at this is that for a given value of the first state variable, the same macro will map it to its target value regardless of the values of the remaining state variables. In general, this property would not hold for an arbitrary problem. In fact, in the worst case, one would need a different macro in the first column for each different initial state of the problem.

In the second column as well, we only need one macro for each possible value of the second state variable (the position of the 1 tile). Again, this is due to the fact that its application is independent of the values of all succeeding variables in the solution order. Similarly, for the remaining columns of the table, the macros depend only on the previous state variables in the solution order and are independent of the succeeding variables.

More formally, let S be the set of all states in the problem space, let S_i be the set of all states in which the first i-1 state variables equal their goal values, and let S_ij be the subset of S_i in which the i-th state variable has value j. Then, we can define a macro table as follows:

Definition 1: A macro table is a set of macros m_ij such that for every i and j for which S_ij is non-empty, m_ij maps each element of S_ij to an element of S_{i+1}.

Operator decomposability is the property that allows macro tables to exist. For pedagogical reasons, we first present a special case of operator decomposability called total decomposability.

5. Total Decomposability

Given that each state is a vector of the form (s_1, s_2, ..., s_n), we define total decomposability as follows:

Definition 2: A vector function F is totally decomposable if there exists a scalar function f such that for all s in S,
F(s) = F(s_1, s_2, ..., s_n) = (f(s_1), f(s_2), ..., f(s_n)).
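As a minimal sketch of Definition 2: a vector operator built by applying a single scalar function componentwise is totally decomposable by construction. The 3-cycle below is an invented stand-in for a Rubik's-Cube-style operator that moves and reorients each cubie independently of the others.

```python
def componentwise(f):
    """Lift a scalar function f over values into a vector operator."""
    return lambda state: tuple(f(v) for v in state)

cycle = {0: 1, 1: 2, 2: 0}          # scalar action on a single component
op = componentwise(lambda v: cycle.get(v, v))

print(op((0, 1, 2, 7)))             # (1, 2, 0, 7)
```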
This property can be illustrated by the operators of Rubik's Cube. Recall that the state variables are the individual cubies and the values encode their positions and orientations. Each operator will affect some subset of the cubies or state variables, and leave the remaining state variables unchanged. However, the resulting position and orientation of each cubie as a result of any operator is solely a function of that cubie's position and orientation before the operator was applied, and independent of the positions and orientations of the other cubies.

Since it can easily be shown that the composition of two totally decomposable functions is also totally decomposable, we state the following lemma without proof:

Lemma 3: A macro is totally decomposable if each of the operators in it is totally decomposable.

Next, we define total decomposability as a property of a problem space as opposed to just a function.

Definition 4: A problem is totally decomposable if each of its primitive operators is totally decomposable.

Finally, we present the main result of this section, that total decomposability is a sufficient condition for the existence of a macro table.

Theorem 5: If a problem is totally decomposable, then there exists a macro table for the problem.

We will omit the formal details of the proof [3] and present only its outline. The basic argument is that for each pair i,j such that S_ij is not empty, there must be some macro which maps some element of S_ij to the goal state, since there is a path from every state to the goal. Since the goal state is an element of S_(i+1) for any i, this macro maps an element of S_ij to an element of S_(i+1). Then, the total decomposability property is used to show that this macro maps any element of S_ij to an element of S_(i+1), and hence qualifies as m_ij.

6. Serial Decomposability

The small number of macros in the macro table is due to the fact that the effect of a macro on a state variable is independent of the succeeding variables in the solution order. However, the effect of a macro on a state variable need not be independent of the preceding variables in the solution order, since these values are known when the macro is applied. This suggests that a weaker and more general form of operator decomposability would still admit a macro table. This is the case with the Eight Puzzle, the Think-a-Dot problem, and the Towers of Hanoi problem.

Recall that in the Eight Puzzle, the state variables correspond to the different tiles, including the blank. Each of the four operators (Up, Down, Left, and Right) affects exactly two state variables, the tile it moves and the blank. While the effects on each of these two tiles are totally decomposable, the preconditions of the operators are not. Note that while there are no preconditions on any operators for Rubik's Cube, i.e., all operators are always applicable, the Eight Puzzle operators must satisfy the precondition that the blank be adjacent to the tile to be moved, and in the direction it is to be moved. Thus,
whether or not an operator is applicable to a particular tile variable depends on whether the blank variable has the correct value. In order for an operator to be totally decomposable, the decomposition must hold for both the preconditions and the postconditions of the operator.

The obvious solution to this problem is to pick the blank tile to be first in the solution order. Then, in all succeeding stages the position of the blank will be known, and hence the dependence on this variable will not affect the macro table. The net result of this weaker form of operator decomposability is that it places a constraint on the possible solution orders. The constraint is that the state variables must be ordered such that the preconditions and the effects of each operator on each state variable depend only on that variable and preceding state variables in the solution order. If such an ordering exists, we say that the operators exhibit serial decomposability. In the case of the Eight Puzzle, the constraint is simply that the blank must occur first in the solution order.

What follows is a more formal treatment of serial decomposability. The presentation exactly parallels that of total decomposability. Given that a solution order is a permutation of the state variables, and that the numbering of the state variables coincides with the solution order, we define serial decomposability as follows:

Definition 6: A vector function F is serially decomposable with respect to a solution order if there exists a set of functions f_i such that for all s in S, F(s) = F(s1, s2, ..., sn) = (f1(s1), f2(s1, s2), ..., fn(s1, s2, ..., sn)).

As in the case of total decomposability, if all the operators in a macro are serially decomposable, then the macro is serially decomposable. We define a problem to be serially decomposable if there exists some solution order such that all the primitive operators are serially decomposable with respect to this solution order. Finally, we arrive at the main result of this paper:

Theorem 7: If a problem is serially decomposable, there exists a macro table for the problem.

The proof is entirely analogous to that for the total decomposability theorem. Note that total decomposability is merely a special case of serial decomposability.

While the Eight Puzzle is the simplest example of a serially decomposable problem, the Think-a-Dot problem exhibits a much richer form of serial decomposability that results in a more complex constraint on the solution order. Think-a-Dot is a toy which involves dropping marbles through gated channels and observing the effects on the gates. Figure 6-1 is a schematic diagram of the device. There are three input channels at the top, labelled A, B, and C, into which marbles can be dropped. When a marble is dropped in, it falls through a set of channels governed by eight numbered gates. Each gate has two states, Left and Right. When a marble encounters a gate, it goes left or right depending on the current state of the gate and then flips the gate to the opposite state. A state of the machine is specified by giving the states of each of the gates. The problem is to get from any arbitrary initial state to some goal state, such as all gates pointing Left.

Figure 6-1: Think-a-Dot Machine (input channels A, B, and C at the top feed the eight gates, which are arranged in three rows: 0, 1, 2 across the top, 3 and 4 in the middle, and 5, 6, 7 across the bottom)

The state variables of the problem are the individual gates, and the values are Right and Left. The primitive operators are A, B, and C, corresponding to dropping a marble in each of the input channels.
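Definition 6 also admits a mechanical check on problem spaces small enough to enumerate: the ith component of an operator's output may depend only on the first i components of its input. The following sketch is an illustration with an assumed encoding, not the paper's formalism:

    # True iff op(s)[i] depends only on s[0..i] for every i.
    from itertools import product

    def serially_decomposable(op, domains):
        states = list(product(*domains))
        for i in range(len(domains)):
            seen = {}                 # prefix s[0..i] -> observed op(s)[i]
            for s in states:
                out_i = op(s)[i]
                if seen.setdefault(s[:i + 1], out_i) != out_i:
                    return False      # same prefix, different output: no f_i
        return True

    # A carry-chain operator: each bit flips only if all earlier bits are 1,
    # so later variables depend on earlier ones but not vice versa.
    def increment(s):
        out, carry = [], 1
        for bit in s:
            out.append(bit ^ carry)
            carry &= bit
        return tuple(out)

    print(serially_decomposable(increment, [(0, 1)] * 3))   # -> True

The carry-chain operator is serially but not totally decomposable, much as a Think-a-Dot operator's effect on a gate depends on the gates above it.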
Table 6-1 shows a macro table for the Think-a-Dot problem where the goal state is all gates pointing Left. Note that there are only two possible values for each state variable, one of which is the goal value, and hence only one macro in each column. The last gate in the macro table is gate 6, since once the first seven gates are set, the state of the last gate is determined, due to a situation similar to that of the Eight Puzzle.

              GATES
           0  1  2  3   4   5     6
    Right  A  B  C  AA  CC  AAAA  CCCC
    Left

Table 6-1: Macro Table for Think-a-Dot Machine

Roughly, the effect of an operator on a particular gate can depend on the values of the gates above it. More precisely, the effect of an operator on a particular gate depends only on the values of all of its "ancestors", or those gates from which there exists a directed path to the given gate. Thus, the constraint on the solution order is that the ancestors of any gate must occur prior to that gate in the order. The serial decomposability structure of this problem is directly exhibited by the directed graph structure of the machine, and is based on the effects of the operators rather than their preconditions, since there are no preconditions on the Think-a-Dot operators.

An extreme case of serial decomposability occurs in the Towers of Hanoi problem. Note that in this context the problem is to move all the disks to the goal peg from any legal initial state. Table 6-2 shows a macro table for the three-disk Towers of Hanoi problem, where the goal peg is peg C. The state variables are the disks, numbered 1 through 3 in increasing order of size. The values are the different pegs the disks could be on, labelled A, B, and C. There are six primitive moves in the problem space, one corresponding to each possible ordered pair of source peg and destination peg. Since only the top disk on a peg can be moved, this is an unambiguous representation of the operators. The complete set is thus {AB, AC, BA, BC, CA, CB}.

           DISKS
           1     2           3
    PEG A  AC    CB AC BC    CA CB AB AC BA BC AC
    PEG B  BC    CA BC AC    CB CA BA BC AB AC BC

Table 6-2: Macro table for Three Disk Towers of Hanoi Problem

The applicability of each of the operators to each of the disks depends upon the positions of all the smaller disks. In particular, no smaller disk may be on the same peg as the disk to be moved, nor may a smaller disk be on the destination peg. This totally constrains the solution order to be from smallest disk to largest disk. We describe this as a boundary case since it exhibits the maximum amount of dependence possible without violating serial decomposability.

Operator decomposability in a problem is not only a function of the problem, but depends on the particular formulation of the problem in terms of state variables as well. For example, under a dual representation of the Eight Puzzle, where state variables correspond to positions and values correspond to tiles, the operators are not decomposable. The reason is that there is no ordering of the positions such that the effect of each of the operators on each of the positions can be expressed as a function of only the previous positions in the order.
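Table 6-2 can be checked mechanically. The sketch below uses an assumed encoding (a state is a tuple giving each disk's peg, smallest disk first; this is not the paper's code) to apply the table's macros in solution order and confirm that every one of the 27 legal initial states reaches the goal:

    # Verify the three-disk macro table of Table 6-2 by simulation.
    from itertools import product

    MACROS = {(0, 'A'): 'AC',       (0, 'B'): 'BC',
              (1, 'A'): 'CB AC BC', (1, 'B'): 'CA BC AC',
              (2, 'A'): 'CA CB AB AC BA BC AC',
              (2, 'B'): 'CB CA BA BC AB AC BC'}

    def apply_move(state, move):
        src, dst = move
        disk = min(i for i, p in enumerate(state) if p == src)       # top of src
        assert all(state[i] not in (src, dst) for i in range(disk))  # legality
        return state[:disk] + (dst,) + state[disk + 1:]

    for start in product('ABC', repeat=3):
        state = start
        for disk in range(3):                        # smallest disk first
            for move in MACROS.get((disk, state[disk]), '').split():
                state = apply_move(state, move)
        assert state == ('C', 'C', 'C')
    print('all 27 initial states solved')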
7. Conclusions

Operator decomposability is a newly discovered property of problem spaces that holds for a number of example problems. It is a sufficient condition for the application of macro problem solving and learning, since it allows an efficient solution for a large number of problem instances to be based on a small amount of knowledge, or a small number of macros.

Acknowledgements

I would like to acknowledge many helpful discussions concerning this research with Herbert Simon, Allen Newell, Ranan Banerji, and Jon Bentley. In addition, Walter van Roggen provided helpful criticism on a draft of this paper.

References

1. Banerji, Ranan B. GPS and the psychology of the Rubik cubist: A study in reasoning about actions. In Artificial and Human Intelligence, A. Elithorn and R. Banerji, Eds., North-Holland, Amsterdam, 1983.

2. Korf, R.E. A program that learns to solve Rubik's Cube. Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, Pa., August, 1982, pp. 164-167.

3. Korf, R.E. Learning to Solve Problems by Searching for Macro-Operators. Ph.D. Th., Department of Computer Science, Carnegie-Mellon University, June 1983.

4. Sims, Charles C. Computational methods in the study of permutation groups. In Computational Problems in Abstract Algebra, John Leech, Ed., Pergamon Press, New York, 1970, pp. 169-183.
1983
58
253
Why AM and Eurisko Appear to Work

Douglas B. Lenat
Heuristic Programming Project
Stanford University
Stanford, Ca.

John Seely Brown
Cognitive and Instructional Sciences
Xerox PARC
Palo Alto, Ca.

ABSTRACT

Seven years ago, the AM program was constructed as an experiment in learning by discovery. Its source of power was a large body of heuristics, rules which guided it toward fruitful topics of investigation, toward profitable experiments to perform, toward plausible hypotheses and definitions. Other heuristics evaluated those discoveries for utility and "interestingness", and they were added to AM's vocabulary of concepts. AM's ultimate limitation apparently was due to its inability to discover new, powerful, domain-specific heuristics for the various new fields it uncovered. At that time, it seemed straight-forward to simply add Heuretics (the study of heuristics) as one more field in which to let AM explore, observe, define, and develop. That task -- learning new heuristics by discovery -- turned out to be much more difficult than was realized initially, and we have just now achieved some successes at it. Along the way, it became clearer why AM had succeeded in the first place, and why it was so difficult to use the same paradigm to discover new heuristics. This paper discusses those recent insights. They spawn questions about "where the meaning really resides" in the concepts discovered by AM. This leads to an appreciation of the crucial and unique role of representation in theory formation, a role involving the relationship between Form and Content.

What AM Really Did

In essence, AM was an automatic programming system, whose primitive actions produced modifications to pieces of Lisp code, predicates which represented the characteristic functions of various math concepts. For instance, AM had a frame that represented the concept LIST-EQUAL, a predicate that checked any two Lisp list structures to see whether or not they were equal (printed out the same way). That frame had several slots:

NAME:      LIST-EQUAL
IS-A:      (PREDICATE FUNCTION OP BINARY-PREDICATE BINARY-FUNCTION BINARY-OP ANYTHING)
GEN'L:     (SET-EQUAL BAG-EQUAL OSET-EQUAL STRUC-EQUAL)
SPEC:      (LIST-OF-EQ-ENTRIES LIST-OF-ATOMS-EQUAL)
FAST-ALG:  ((LAMBDA (x y) (EQUAL x y)) EQ)
RECUR-ALG: ((LAMBDA (x y)
              (COND ((OR (ATOM x) (ATOM y)) (EQ x y))
                    (T (AND (LIST-EQUAL (CAR x) (CAR y))
                            (LIST-EQUAL (CDR x) (CDR y)))))))
DOMAIN:    (LIST LIST)
RANGE:     (T-OR-NIL)
WORTH:     720

Of central importance is the RECUR-ALG slot, which contains a recursive algorithm for computing LIST-EQUAL of two input lists x and y. That algorithm recurs along both the CAR and CDR directions of the list structure, until it finds the leaves (the atoms), at which point it checks that each leaf in x is identically equal to the corresponding node in y. If any recursive call on LIST-EQUAL signals NIL, the entire result is NIL; otherwise the result is T.

During one AM task, it sought examples of LIST-EQUAL in action, and a heuristic accommodated by picking random pairs of examples of LIST, plugging them in for x and y, and running the algorithm. Needless to say, very few of those executions returned T (about 2%, as there were about 50 examples of LIST at the time). Another heuristic noted that this was extremely low (though nonzero), so it might be worth defining new predicates by slightly generalizing LIST-EQUAL; that is, copy its algorithm and weaken it so that it returns T more often.
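That empirical step is easy to re-create. The sketch below is a Python illustration with invented stand-ins for AM's examples of LIST (it is not AM's Lisp): it estimates how often a predicate returns true on randomly chosen pairs of known examples, and flags rare predicates as candidates for generalization.

    # Estimate a predicate's empirical "hit rate" on known examples.
    import random

    def hit_rate(pred, examples, trials=1000, seed=0):
        rng = random.Random(seed)
        return sum(pred(rng.choice(examples), rng.choice(examples))
                   for _ in range(trials)) / trials

    list_equal = lambda x, y: x == y        # stands in for LIST-EQUAL
    lists = [[i] for i in range(50)]        # ~50 distinct examples of LIST
    rate = hit_rate(list_equal, lists)      # about 1/50, i.e. roughly 2%
    if 0 < rate < 0.05:
        print('rarely true (%.1f%%): try generalizing' % (100 * rate))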
When that task was chosen from the agenda, another heuristic said that one way to generalize a definition with two conjoined recursive calls was simply to eliminate one of them entirely, or to replace the AND by an OR. In one run (in June, 1976) AM then defined these three new predicates:

L-E-1: (LAMBDA (x y)
         (COND ((OR (ATOM x) (ATOM y)) (EQ x y))
               (T (L-E-1 (CDR x) (CDR y)))))

L-E-2: (LAMBDA (x y)
         (COND ((OR (ATOM x) (ATOM y)) (EQ x y))
               (T (L-E-2 (CAR x) (CAR y)))))

L-E-3: (LAMBDA (x y)
         (COND ((OR (ATOM x) (ATOM y)) (EQ x y))
               (T (OR (L-E-3 (CAR x) (CAR y))
                      (L-E-3 (CDR x) (CDR y))))))

The first of these, L-E-1, has had the recursion in the CAR direction removed. All it checks for now is that, when elements are stripped off each list, the two lists become null at exactly the same time. That is, L-E-1 is now the predicate we might call Same-Length.

The second of these, L-E-2, has had the CDR recursion removed. When run on two lists of atoms, it checks that the first elements of each list are equal. When run on arbitrary lists, it checks that they have the same number of leading left parentheses, and then that the atom that then appears in each is the same.

The third of these is more difficult to characterize in words. It is of course more general than both L-E-1 and L-E-2; if x and y are equal in length then L-E-3 would return T, as it would if they had the same first element, etc. This disjunction propagates to all levels of the list structure, so that L-E-3 would return true for x = (A (B C D) E F) and y = (Q (B)) or even y = (Q (W X Y)). Perhaps this predicate is most concisely described by its Lisp definition.

A few points are important to make from this example. First, note that AM does not make changes at random; it is driven by empirical findings (such as the rarity of LIST-EQUAL returning T) to suggest specific directions in which to change particular concepts (such as deciding to generalize LIST-EQUAL). However, once having reached this eminently reasonable goal, it then reverts to a more or less syntactic mutation process to achieve it (changing AND to OR, eliminating a conjunct from an AND, etc.). See [Green et al., 74] for background on this style of code synthesis and modification.

Second, note that all three derived predicates are at least a priori plausible and interesting and valuable. They are not trivial (such as always returning T, or always returning what LIST-EQUAL returns), and even the strangest of them (L-E-3) is genuinely worth exploring for a minute.

Third, note that one of the three (L-E-2) is familiar and useful (same leading element), and another one (L-E-1) is familiar and of the utmost significance (same length). AM quickly derived from L-E-1 a function we would call LENGTH and a set of canonical lists of each possible length ( ( ), (T), (T T), (T T T), (T T T T), etc.; i.e., a set isomorphic to the natural numbers). By restricting list operations (such as APPEND) to these canonical lists, AM derived the common arithmetic functions (in this case, addition), and soon began exploring elementary number theory. So these simple mutations sometimes led to dramatic discoveries.

This simple-minded scheme worked almost embarrassingly well. Why was that? Originally, we attributed it to the power of heuristic search (in defining specific goals such as "generalize LIST-EQUAL") and to the density of worthwhile math concepts. Recently, we have come to see that it is, in part, the density of worthwhile math concepts as represented in Lisp that is the crucial factor.
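The behavior claimed for these three mutants is easy to confirm. Here is a Python transcription (an illustration; nested tuples stand in for Lisp lists, () for NIL, and strings for atoms):

    def atom(x):
        return not isinstance(x, tuple) or x == ()

    def le1(x, y):       # CAR recursion removed: Same-Length
        return x == y if atom(x) or atom(y) else le1(x[1:], y[1:])

    def le2(x, y):       # CDR recursion removed: same leading element
        return x == y if atom(x) or atom(y) else le2(x[0], y[0])

    def le3(x, y):       # AND replaced by OR
        if atom(x) or atom(y):
            return x == y
        return le3(x[0], y[0]) or le3(x[1:], y[1:])

    x = ('A', ('B', 'C', 'D'), 'E', 'F')
    print(le1(('A', 'B'), ('C', 'D')))   # True: equal lengths
    print(le3(x, ('Q', ('B',))))         # True, as claimed above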
The Significance of AM's Representation of Math Concepts

It was only because of the intimate relationship between Lisp and Mathematics that the mutation operators (loop unwinding, recursion elimination, composition, argument elimination, function substitution, etc.) turned out to yield a high "hit rate" of viable, useful new math concepts when applied to previously-known, useful math concepts -- concepts represented as Lisp functions. But no such deep relationship existed between Lisp and Heuretics, and when the basic automatic programming (mutation) operators were applied to viable, useful heuristics, they almost always produced useless (often worse than useless) new heuristic rules.

To rephrase that: a math concept C was represented in AM by its characteristic function, which in turn was represented as a piece of Lisp code stored on the Algorithms slot of the frame labelled "C". This would typically take about 4-8 lines to write down, of which only 1-3 lines were the "meat" of the function. Syntactic mutation of such tiny Lisp programs led to meaningful, related Lisp programs, which in turn were often the characteristic functions for some meaningful, related math concepts. But taking a two-page program (as many of the AM heuristics were coded) and making a small syntactic mutation is doomed to almost always giving garbage as the result. It's akin to causing a point mutation in an organism's DNA (by bombarding it with radiation, say): in the case of a very simple microorganism, there is a reasonable chance of producing a viable, altered mutant. In the case of a higher animal, however, such point mutations are almost universally deleterious.

We pay careful attention to making our representations fine-grained enough to capture all the nuances of the concepts they stand for (at least, all the properties we can think of), but we rarely worry about making those representations too flexible, too fine-grained. But that is a real problem: such a "too-fine-grained" representation creates syntactic distinctions that don't reflect semantic distinctions -- distinctions that are meaningful in the domain. For instance, in coding a piece of knowledge for MYCIN, in which an iteration was to be performed, it was once necessary to use several rules to achieve the desired effect. The physicians (both the experts and the end-users) could not make head or tail of such rules individually, since the doctors didn't break their knowledge down below the level at which iteration was a primitive. As another example, in representing a VLSI design heuristic H as a two-page Lisp program, enormous structure and detail were added -- details that are meaningless as far as capturing its meaning as a piece of VLSI knowledge (e.g., lots of named local variables being bound and updated; many operations which were conceptually an indivisible primitive part of H were coded as several lines of Lisp which contained dozens of distinguishable (and hence mutable) function calls; etc.). Those details were meaningful (and necessary) to H's implementation on a particular architecture.

Of course, we can never directly mutate the meaning of a concept; we can only mutate the structural form of that concept as embedded in some representation scheme. Thus, there is never any guarantee that we aren't just mutating some "implementation detail" that is a consequence of the representation, rather than some genuine part of the concept's intensionality. But there are even more serious representation issues.
In terms of the syntax of a given language, it is straightforward to define a collection of mutators that produce minimal generalizations of a given Lisp function by systematic modifications to its implementation structure (e.g., removing a conjunct, replacing AND by OR, finding a NOT and specializing its argument, etc.). Structural generalizations produced in this way can be guaranteed to generalize the extension of a function, and that necessarily produces a generalization of its intension, its meaning. Therein lies the lure of the AM and Eurisko paradigm. We now understand that that lure conceals a dangerous barb: minimal generalizations defined over a function's structural encoding need not bear much relationship to minimal intensional generalizations, especially if these functions are computational objects as opposed to mathematical entities.

Better Representations

Since 1976, one of us has attempted to get EURISKO (the descendant of AM; see [Lenat 82, 83a, b]) to learn new heuristics the same way it learns new math concepts. For five years, that effort achieved mediocre results. Gradually, the way we represented heuristics changed, from two opaque lumps of Lisp code (a one-page long IF part and a one-page long THEN part) into a new language in which the statement of heuristics is more natural: it appears more spread out (dozens of slots replacing the IF and THEN), but the length of the values in each IF and THEN is quite small, and the total size of all those values put together is still much smaller (often an order of magnitude) than the original two-page lumps were.

It is not merely the shortening of the code that is important here, but rather the fact that this new vocabulary of slots provides a functional decomposition of the original two-page program. A single mutation in the new representation now "macro expands" into many coordinated small mutations at the Lisp code level; conversely, most meaningless small changes at the Lisp level can't even be expressed in terms of changes to the higher-order language. This is akin to the way biological evolution makes use of the gene as a meaningful functional unit, and gets great mileage from rearranging and copy-and-editing it.

A heuristic in EURISKO is now -- like a math concept always was in AM -- a collection of about twenty or more slots, each filled with a line or two worth of code (or often just an atom or two). By employing this new language, the old property that AM satisfied fortuitously is once again satisfied: the primitive syntactic mutation operators usually now produce meaningful semantic variants of what they operate on. Partly by design and partly by evolution, a language has been constructed in which heuristics are represented naturally, just as Church and McCarthy made the lambda calculus and Lisp a language in which math characteristic functions could be represented naturally. Just as the Lisp<-->Math "match" helped AM to work, to discover math concepts, the new "match" helps Eurisko to discover heuristics.

In getting Eurisko to work in domains other than mathematics, we have also been forced to develop a rich set of slots for each domain (so that any one value for a slot of a concept will be small) and provide a frame that contains information about that slot (so it can be used meaningfully by the program). This combination of small size, meaningful functional decomposition, plus explicitly stored information about each type of slot, enables the AM-Eurisko scheme to function adequately.
It has already done so for domains such as the design of three-dimensional VLSI chips, the design of fleets for a futuristic naval wargame, and for Interlisp programming. We believe that such a natural representation should be sought by anyone building an expert system for domain X: if what is being built is intended to form new theories about X, then it is a necessity, not a luxury. That is, it is necessary to find a way of representing X's concepts as a structure whose pieces are each relatively small and unstructured. In many cases, an existing representation will suffice, but if the "leaves" are large, simple methods will not suffice to transform and combine them into new, meaningful "leaves". This is the primary retrospective lesson we have gleaned from our study of AM. We have applied it to getting Eurisko to discover heuristics, and are beginning to get Eurisko to discover such new languages, to automatically modify its vocabulary of slots. To date, there are three cases in which Eurisko has successfully and fruitfully split a slot into more specialized subslots. One of those cases was in the domain of designing three-dimensional VLSI circuits, where the Terminals slot was automatically split into InputTerminals, OutputTerminals, and SetsOfWhichExactlyOneElementMustBeAnOutputTerminal.

The central argument here is the following:

(1) "Theories" deal with the meaning, the content of a body of concepts, whereas "theory formation" is of necessity limited to working on form, on the structures that represent those concepts in some scheme.

(2) This makes the mapping between form and content quite important to the success of a theory formation effort (be it by humans or machines).

(3) Thus it's important to find a representation in which the form<-->content mapping is as natural (i.e., efficient) as possible, a representation that mimics (analogically) the conceptual underpinnings of the task domain being theorized about. This is akin to Brian Smith's recognition of the desire to achieve a categorical alignment between the syntax and semantics of a computational language.

(4) Exploring "theory formation" therefore frames -- and forces us to study -- the mapping between form and content.

(5) This is especially true for those of us in AI who wish to build theory formation programs, because that mapping is vital to the ultimate successful performance of our programs.

Where does the meaning reside?

We speak of our programs knowing something, e.g., AM's knowing about the List-Equal concept. But in what sense does AM know it? Although this question may seem a bit adolescent, we believe that in the realm of theory formation (and learning systems), answers to this question are crucial, for otherwise what does it mean to say that the system has "discovered" a new concept? In fact, many of the controversies over AM stem from confusions about this one issue -- admittedly, confusions in our own understanding of this issue as well as others'.

In AM and Eurisko, a concept C is simultaneously and somewhat redundantly represented in two fundamentally different ways. The first way is via its characteristic function (as stored on the Algorithms and Domain/Range slots of the frame for C). This provides a meaning relative to the way it is interpreted, but since there is a single unchanging EVAL, this provides a unique interpretation of C. The second way a concept is specified is more declaratively, via slots that contain constraints on the meaning: Generalizations, Examples, ISA.
For instance, if we specify that D is a Generalization of C (i.e., D is an entry on C's Generalizations slot), then by the semantics of "Generalizations" all entries on C's Examples slot ought to cause D's Algorithm to return T. Such constraints squeeze the set of possible meanings of C, but rarely to a single point. That is, multiple interpretations based just on these underdetermined constraints are still possible.

Notice that each scheme has its own unique advantage. The characteristic function provides a complete and succinct characterization that can both be executed efficiently and operated on. The descriptive information about the concept, although not providing a "characterization", instead provides the grist to guide control of the mutators, as well as jogging the imagination of human users of the program by forcing them to do the disambiguation themselves! Both of these uses capitalize on the ambiguities. We will return to this point in a moment, but first let us consider how meaning resides in the characteristic function of a concept.

It is beyond the scope of this paper to detail how meaning per se resides in a procedural encoding of a characteristic function. But two comments are in order. First, it is obvious that the meaning of a characteristic function is always relative to the interpreter (theory) for the given language in which the function is expressed. In this case, the interpreter can be succinctly specified by the EVAL of the given Lisp system. But the meaning also resides, in part, in the "meaning" of the data structures (i.e., what they are meant to denote in the "world") that act as arguments to that algorithm. For example, the math concept List-Equal takes as its arguments two lists. That concept is represented by a Lisp predicate, which takes as its two arguments two structures that both are lists and (trivially) represent lists. That predicate (the LAMBDA expression given earlier for List-Equal) assumes that its arguments will never need "dots" to represent them (i.e., that at all levels the CDR of any subexpression is either NIL or nonatomic), it assumes that there is no circular list structure in the arguments, etc.

This representation, too, proved well-suited for leading quickly to a definition of natural numbers (just by doing a substitution of T for anything in a Lisp list), and that unary representation was critical to AM's discovering arithmetic and elementary number theory. If somehow a place-value scheme for representing numbers had developed, then the simple route AM followed to discover arithmetic (simply applying set-theoretic functions to "numbers" and seeing what happened) would not have worked at all. It's fine to ask what happens when you apply BagUnion to three and two, so long as they're represented as (T T T) and (T T): the result is (T T T T T), i.e., the number five in our unary representation. Try applying BagUnion to 3 and 2 (or to any two Lisp atoms) and you'd get NIL, which is no help at all. Using bags of T's for numbers is tapping into the same source of power as Gelernter [1963] did; namely, the power of having an analogic representation, one in which there is a closeness between the data structures employed and the abstract concepts they represent -- again, an issue of the relationship between form and function. Thus, to some extent, even when discussing the meaning of a concept as portrayed in its characteristic function, there is some aspect of that meaning that we must attribute to it,
namely that aspect that has to do with how we wish to interpret the data structures it operates on. That is, although the system in principle contains a complete characterization of what the operators of the language mean (the system has embedded within itself a representation of EVAL -- a representation that is, in principle, modifiable by the system itself), the system nevertheless contains no theory as to what the data structures denote. Rather, we (the human observers) attribute meaning to those structures. AM (and any AI program) is merely a model, and by watching it we place a particular interpretation on that model, though many alternatives may exist.

The representation of a concept by a Lisp encoding of its characteristic function may very well admit only one interpretation (given a fixed EVAL, a fixed set of data structures for arguments, etc.). But most human observers looked not at that function but rather at the underconstrained declarative information stored on slots with names like Domain/Range, HowCreated, Generalizations, ISA, Examples, and so on. We find it provocative that the most useful heuristics in Eurisko -- the ones which provide the best control guidance -- have triggering conditions which are also based only on these same underconstraining slots.

Going over the history of AM, we realize that in a more fundamental way we -- the human observers -- play another crucial role in attributing "meaning" to a discovery in AM. How is that? As is clear from the fact that Eurisko has often sparked insights and discoveries, the clearest sense of meaning may be said to reside in the way its output jogs our (or other observers') memory, the way it forces us to attribute some meaning to what it claims is a discovery. Two examples, drawn from Donald Knuth's experiences in looking over traces of AM's behavior, will illustrate the two kinds of "filling in" that is done by human beings: (i) see AM's definition of highly composite numbers, plus its claim that they are interesting, and (for a very different reason than the program's) notice that they are interesting; (ii) see a definition of partitioning sets (an operation that was never judged to be interesting by AM after it defined and studied it), recognize that it is the definition of a familiar, worthwhile concept, and credit the program with rediscovering it. While most of AM's discoveries were judged interesting or not interesting in accord with human judgements, and for similar reasons, errors of these two types did occur occasionally, and indeed errors of the first type have proven to be a major source of synergy in using Eurisko. To put this cynically: the more a working scientist bares his control knowledge (audit trail) to his colleagues and students, the more accurately they can interpret the meaning of his statements and discoveries, but the less likely they will be to come up (via being forced to work to find an interpretation) with different, and perhaps more interesting, interpretations.

Conclusion

We have taken a retrospective look at the kind of activity AM carried out. Although we generally described it as "exploring in the space of math concepts", what it actually was doing from moment to moment was "syntactically mutating small Lisp programs". Rather than disdaining it for that reason, we saw that that was its salvation, its chief source of power, the reason it had such a high hit rate: AM was exploiting the natural tie between Lisp and mathematics.
We have seen the dependence of AM's performance upon its representation of math concepts' characteristic functions in Lisp, and in turn their dependence upon the Lisp representation of their arguments, and in both cases their dependence upon the semantics of Lisp, and in all those cases the dependence upon the observer's frame of reference. The largely fortuitous "impedance match" between all four of these, in AM, enabled it to proceed with great speed for a while, until it moved into a less well balanced state.

One of the most crucial requirements for a learning system, especially one that is to learn by discovery, is that of an adequate representation. The paradigm for machine learning to date has been limited to learning new expressions in some more or less well defined language (even though, as in AM's case, the vocabulary may increase over time, and, as in Eurisko's case, even the grammar might expand occasionally). If the language or representation employed is not well matched to the domain objects and operators, the heuristics that do exist will be long and awkwardly stated, and the discovery of new ones in that representation may be nearly impossible. As an example, consider that Eurisko began with a small vocabulary of slots for describing heuristics (If, Then), and over the last several years it has been necessary (in order to obtain reasonable performance) to evolve two orders of magnitude more kinds of slots that heuristics could have, many of them domain-dependent, many of them proposed by Eurisko itself. Another example is simply the amount of effort we must expend to add a new domain to Eurisko's repertoire, much of that effort involving choosing and adjusting a set of new domain-specific slots.

The chief bottleneck in building large AI programs, such as expert systems, is recognized as being knowledge acquisition. There are two major problems to tackle: (i) building tools to facilitate the man-machine interface, and (ii) finding ways to dynamically devise an appropriate representation. Much work has focused on the former of these, but our experience with AM and Eurisko indicates that the latter is just as serious a contributor to the bottleneck, especially in building theory formation systems. Thus, our current research is to get Eurisko to automatically extend its vocabulary of slots, to maintain the naturalness of its representation as new (sub)domains are uncovered and explored. This paper has raised the alarm; another longer one [Lenat 83b] discusses the approach we're following and progress to date.

Acknowledgements

EURISKO is written in -- and relies upon -- RLL [Lenat & Greiner 80] and Interlisp-D. We wish to thank XEROX PARC's CIS and Stanford University's HPP for providing superb environments (intellectual, physical, and computational) in which to work. Financial support is provided by ONR, ARPA, and XEROX. We thank Saul Amarel and Danny Bobrow for useful comments on this work.

References

DeKleer, J., and J. S. Brown, "Foundations of Envisioning", Proc. AAAI-82, NCAI, Carnegie-Mellon U., Pittsburgh, Pa., 1982.

Feigenbaum, Edward A., "The Art of Artificial Intelligence", Proc. Fifth IJCAI, August, 1977, MIT, Cambridge, Mass., p. 1014.

Gelernter, H., "Realization of a Geometry Theorem Proving Machine", in (Feigenbaum and Feldman, eds.) Computers and Thought, McGraw-Hill, N.Y., 1963, pp. 134-152.
Green, Cordell, Richard Waldinger, David Barstow, Robert Elschlager, Douglas Lenat, Brian McCune, David Shaw, and Louis Steinberg, Progress Report on Program Understanding Systems, AIM-240, STAN-CS-74-444, AI Lab, Stanford, Ca., August, 1974.

Hayes-Roth, Frederick, Donald Waterman, and Douglas Lenat (eds.), Building Expert Systems, Addison-Wesley, 1983.

Lenat, Douglas B., "On Automated Scientific Theory Formation: A Case Study Using the AM Program," in (Jean Hayes, Donald Michie, and L. I. Mikulich, eds.) Machine Intelligence 9, New York: Halstead Press, a division of John Wiley & Sons, 1979, pp. 251-283.

Lenat, Douglas B., and Russell D. Greiner, "RLL: A Representation Language Language," Proc. of the First Annual NCAI, Stanford, August, 1980.

Lenat, Douglas B., "The Nature of Heuristics", J. Artificial Intelligence, 19, 2, October, 1982.

Lenat, Douglas B., "The Nature of Heuristics II", J. Artificial Intelligence 21, March, 1983a.

Lenat, Douglas B., "The Nature of Heuristics III", J. Artificial Intelligence 21, March, 1983b.

Polya, G., How to Solve It, Princeton University Press, 1945.

Smith, Brian, "Reflection and Semantics in a Procedural Language", M.I.T. Laboratory for Computer Science Technical Report TR-272, Cambridge, Ma., 1982.
1983
59
254
USING STRUCTURAL AND FUNCTIONAL INFORMATION IN DIAGNOSTIC DESIGN

Walter Hamscher
The Artificial Intelligence Laboratory
Massachusetts Institute of Technology

Abstract

We wish to design a diagnostic for a device from knowledge of its structure and function. The diagnostic should achieve both coverage of the faults that can occur in the device, and should strive to achieve specificity in its diagnosis when it detects a fault. A system is described that uses a simple model of hardware structure and function, representing the device in terms of its internal primitive functions and connections. The system designs a diagnostic in three steps. First, an extension of path sensitization is used to design a test for each of the connections in the device. Next, the resulting tests are improved by increasing their specificity. Finally the tests are ordered so that each relies on the fewest possible connections. We describe an implementation of the first of these steps and show an example of the results for a simple device.

Introduction

Figure 1 -- 4x2 Multiplexer

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research on hardware troubleshooting is provided in part by the Digital Equipment Corporation.

A technician can plan a diagnostic for this device (the 4x2 multiplexer of figure 1) by knowing only that the address lines select one of the data inputs and route its data to the output. That much knowledge tells him that he should not test the data inputs until verifying the addressing lines, and that he can test the output independently of any single input by iterating over several values of the address. He would organize the diagnostic into phases: (1) Test whether the output of the multiplexer can transmit data correctly. (2) Test whether each data input can be addressed. (3) Test whether data can be correctly transmitted by each data input.

The plan shows attention to both coverage and resolution. It achieves coverage of faults by testing the address, data inputs, and data output. It achieves resolution by testing one function at a time and by having each test rely only on functions that have been previously tested. This is the competence that our system tries to capture. To accomplish this it relies on a simple model of hardware structure and function to represent the device and uses some general assumptions and principles of diagnostic design. With this foundation it is able to design a series of tests with coverage under the given fault model, and achieves resolution with tests that are specific, robust, and ordered so as to rely on previously tested components.

We describe this simple model of hardware along with a general fault model. The information path model is intended to capture the ability of humans to plan diagnostics without knowing very much about the hardware implementation. The representation should be adequate to let us determine what tests need to be run and what dependencies exist among those tests. Using this model we develop a vocabulary of diagnosis that rests on the notion that every test has a set of conditions which should be minimized in order to achieve resolution. We then describe a system that uses these principles. The system designs a diagnostic in three steps. First, it designs many small tests, one to detect each fault that might occur. Second, it tries to improve these small tests so that each relies on fewer parts. Third, it aggregates the tests and orders them.

We describe a program that implements the first phase in this system. The program treats inquiry design as a search problem in the space of possible inquiries for a given device. We describe some of the knowledge that the program uses to reduce the size of the search space.
We describe a program that implements the first phase in this system. The program treats inquiry design as a search problem in the space of possible inquiries for a given device. We describe some of the knowledge that the program uses to reduce the size of the search space. From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. Previous Work in Test Generation Gate Until recently, most efforts in automated test generation have focused on gate-level representations of combinatorial circuits, and have concentrated on achieving coverage of faults rather than resolution. The methodology typically employed is path sensifization. Path sensitization relies on two basic concepts: a fault must be sensitized and the result must be propagated to an output. A fault is sensitized when the effect of the fault is visible. In the digital domain, if a signal is stuck at zero (sa-0) and we try to force it to 1, then the fault is sensitized. A result is propagated to an output of a device by choosing inputs of the device so that the presence of the fault can be determined by looking at the output. The best-known algorithm for path sensitization is the D-algorithm ([l] and [2]). Experience with path sensitization indicates that (a) it is most successful when gate-level descriptions are used, although this is computationally expensive; (b) when more abstract functional descriptions are used, as in [3], a lack of correspondence between those functional descriptions and their hardware implementations has a negative effect on both the coverage and resolution of the resulting test sequences. Achieving coverage and resolution depends on choosing an appropriate level of abstraction and viewing the diagnostic as a collection of primitive tests that can be ordered in such a way as to increase their resolution. The key points in our selection of a level of abstraction are the notions that information paths connect functional devices, and that information flows between these devices along the paths. We use this model to abstract away from the digital irnplementation details as much as possible, and yet retain the ability to map the designed test sequence back onto the real device when the time comes. Figure 3 -- Multiplexer An explanation of how the 4x2 multiplexer works shows how it can be described in terms of a functional devices: “the address lines select a data input, that data input goes to the output; the unselected inputs have no effect on the output.” To represent these functions, we have chosen three primitive functional devices: the Gate, the Junction, and the Selector.* These are shown in figure 2. The paths that connect these functional primitives transmit sets of values. Examples of value Sets are the two-element sets {h i ,l o}, and the set D = {dO,dl...dn-I}, where n = 2rk, and k is the width of the path in bits. Paths are annotated with several forms of information. The most important is design information about the intended interaction of the devices. These intentions are represented by matching path input values to path output values. For example, the multiplexe: is designed so that when a gate’s control is 1 o its data input will not affect the multiplexer output. We use the value X-g to represent the value transmitted by the output of a gate when its control is 1 o. and the value X-j on an input of a junction to represent that the output is insensitive to that input. The design inforrnation is that X-g and X-j are equivalent. 
Henceforth we simply use the value X to represent that value.

The Gate has a control input with the values hi and lo. When the control input is hi, the data input is transmitted to its output. When the control input is lo, the output is insensitive to the data input. The Junction has several inputs and a single output; it merges several information paths. A Selector has an address input that determines which of its outputs will have the value hi, while the other outputs will have the value lo. The multiplexer that we build from these primitives is shown in figure 3.

Figure 2 -- Primitives (the Gate, with DataIn and DataOut; the Selector; and the Junction)

Our fault model refers only to path behavior: a fault is always a fault in the transmission of values from one end of a path to the other. Restricting faults to appear only on paths maintains a useful level of generality that encompasses a wide range of physical faults, including stuck-at and bridging faults. Our model assumes nonintermittency and unidirectional information flow on paths.³ Faults in devices are not yet considered; the addition of such faults will increase the size of the problem, but we anticipate that it will not significantly change the design process. At this stage we feel that the path fault approximation is good enough and general enough that it is still useful.

3. These are common assumptions to make when doing diagnosis, although sometimes violated [4].

We can now interpret the coverage and resolution criteria described earlier in the light of the information path model. A fundamental concept is that of the inquiry. An inquiry is a question of the form, "Does path X transmit the value Y correctly?" The pair (X,Y) is the focus of the inquiry. Each inquiry relies on some subset of paths within the device. Its reliance on those paths is expressed as a set of conditions on those paths. If the inquiry fails, we can conclude that one of the conditions was violated, but we won't know which one. Thus, the larger the set of conditions, the less specific the inquiry.

Each inquiry has a single focus, and consists of a set of input values to the device, a comparison to be performed on an output, and some conditions. One of the conditions that the inquiry is obviously testing -- because it was originally postulated that way -- is that the focus is OK. Other conditions that an inquiry might include are that all the immediate predecessor paths must be OK. For example, an inquiry about whether the output of a gate can transmit the value d2 consists of the input value d2 on the data input and hi on the control input, a test to see whether the output is d2, and the conditions that the data input, control input, and output must all be OK. This simple inquiry is shown below.

Inquiry I-1 : [ Out ? d2 ]
Values: (DataIn = d2) (Ctl = hi) (Out ? d2)
Conditions: (ok DataIn) (ok Ctl) (ok Out)
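To make the inquiry machinery concrete, here is a small Python sketch (an assumed encoding, not the paper's Franz Lisp implementation) that runs Inquiry I-1 against a behavioral model of the Gate:

    # The Gate primitive and Inquiry I-1 as data.
    def gate(data_in, ctl):
        return data_in if ctl == 'hi' else 'X'   # lo: output insensitive

    I1 = {'focus': ('Out', 'd2'),
          'values': {'DataIn': 'd2', 'Ctl': 'hi'},
          'compare': ('Out', 'd2'),
          'conditions': ['(ok DataIn)', '(ok Ctl)', '(ok Out)']}

    def run_inquiry(inq, device):
        out = device(inq['values']['DataIn'], inq['values']['Ctl'])
        if out == inq['compare'][1]:
            return 'exonerates', inq['focus']    # Out transmitted d2
        return 'implicates', inq['conditions']   # some condition violated

    print(run_inquiry(I1, gate))   # ('exonerates', ('Out', 'd2')) if fault-free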
Hence a c>r?,gno;iic consists of an inquiry for every value on every path to see that the path can transmit the value faithfully; this is hc:;r coverage is achieved. To provide resolution, each inquiry shou!d implicate as few paths as possible when it fails. Overview of the Diagnostic Generation Pxxedure Designing a diagnostic is done irl three s::c?ps. First we create inquiries for every value on every path in the device. From this step we get a set of inquiries, cash with its own set of conditions. Second, we analyze and combine the inquiries to reduce their conditions. This is done using the single point of failure assumption, hereafter SPFA. In our case, the SPFA is an assumption that only a single path is faulty. Third, we collect the reduced inquiries into tests and order ihe tests in such a way as to take advantage of prior test results. The resulting ordered inquiries can then be transMed into the actual test patterns using implementation information. During the Inquiry Design Phase we use an approach similar to path sensitization, but apply it to our information-path model of the device. There are currently 39 rules that propagate values and conditions throughout a device. This phase will be treated in more depth momentarily. 4. 7 here are 2tk inquiries per path in the restrictive fault model there are only O(k). the general case, but under a more The Inquiry Improvement Phase transforms each inquiry to reduce its conditions. The SPFA can be used to do this in several ways. One of the techniques used to reduce conditions is collaboration: two inquiries about the same focus can be combined into a compound inquiry having a reduced set of conditions. We can do this under the SPFA because only conditions that appear in both could be responsible for both inquiries failing. For example, if we have two inquiries for A, one with the conditions (ok A) and (ok B), the other with the conditions (ok A) and (ok C ), we can make another inquiry with only the condition ( ok A). This phase also collects all the inquiries sharing a focus path to create a test for that path. For example, the inquiry that asks whether a gate control input can transmit hi and the inquiry that asks whether it can transmit 1 o comprise a test for that path. A test consists of the set of inquiries and a set of conditions that is the union of the inquiries’ conditions; this union represents all the paths on which the test relies. The Test Ordering Phase further improves resolution by ordering the tests so that each has the minimal set of conditions. Tests’ conditions can be reduced by ordering because any paths that have already been tested need not appear in later tests’ conditions. Ordering of tests in this phase is done pairwise, making use of the principle that “tests that could implicate fewer paths should be tested first.” For example, if test T-l has the conditions (ok A) and (ok B) and test T-2 has the condition (ok A), test T-2 should be done first. The Inquiry Design Phase Recall that path sensitization works by sensitizing a fault and propagating the result. Henceforth, we will say that backward propagation of values sensitizes a fault, and forward propagation makes a result visible. To design inquiries, we propagate path values and path conditions. The local propagations are described by rules. There are four kinds of rules, capturing four different kinds of knowledge. Behavior rules describe the input/output behavior of devices. 
Sensitization rules assign values to the inputs of a device in two situations: (a) when an output of the device must be forced to some value; (b) when an output of the device must be made sensitive to one input. Goal rules guide the direction that the sensitization rules propagate. Condifion rules add paths to the conditions of the inquiry wherever a fault might cause the same effect as violating an existing condition. The rules are associated with devices and propagate across single devices. To design an inquiry for a given focus path and value, we assign the path to have the goal of being sensitized and its result propagated, its value to be the focus value, and its condition to be OK. The rules then propagate goals, values, and conditions outward to the edges of the device. For some rules choices are available, and we iterate through these. If at any point we reach a contradictory assignment of values, we conclude that the choices we made were incompatible and that we should go on to the next alternative. Goal rules tell which direction the sensitization rules will propagate, but do not assign values to the paths. Each path value must be either accomplished, meaning that backward propagation must occur from it, or observed, meaning that forward propagation must occur. Backward propagation is guided by rules that can be expressed as, “if we wish to accomplish any value on the output, we need to accomplish some values on the inputs.“ Forward propagation is guided by rules that are expressed as, “if we wish to observe some input of a device, then we need to accomplish some values on its other inputs and observe an output. Sensitization rules assign values to inputs of a device in both forward and backward propagations. GS-1, shown in figure 4, is an example of a forward propagation. To observe any value on the data input of a Gate, we must assign hi to the control input, because if we assigned a lo, the output would always be insensitive to the data input. GS-3, shown in figure 5, is an example of backward propagation. If we want to accomplish some value di, we need to accomplish that same value on the data input, and accomplish hi on the control input. obs Figure 4 -- Sensitization Rule GS- 1 Figure 5 -- Sensitization Rule GS-2 Behavior rules describe the behavior of the device by propagating values. For example, a behavior rule for the gate is that when the control input is hi, the output gets assigned the value of the data input. Condition rules propagate conditions after the goals and values have been assigned. GC-1, in figure 6, is an example of a backward condition propagation. To test the data output of a gate to see whether it transmits any value di that is not X, the control input would be hi and data input di. Since faults on either the control or data inputs would violate OK on !he output, OK propagates to both the input paths. Condition rules add to the conditions of the inquiry wherever a fault might cause the same effect as violating an already existing condition. act hi Figure 6 -- Condition Rule GC-1 Inquiries are designed using these rules. Consider designing an inquiry to see whether the output of gate Gl of the multiplexer transmits the value d2. We start by assigning to the path the goals act (accomplish) and obs (observe), the value d2, and the condition OK. Rules fire to propagate goals, conditions, and values throughout the device; the final assignments are shown in figure 7.5 The resulting inquiry is shown below. Inquiry I-25 : [ DO-l ? 
Implementation of the Inquiry Design Phase

An implementation of the inquiry design phase has been written in Franz Lisp on a VAX-11/780 running Unix. Devices built from the primitives of the information-path model are represented in a language described in [4]. The rules described above are used to derive the consequences of goal, value, and condition assignments to paths. Because some rules require choices to be made, the program designs all the inquiries for a focus using exhaustive search. This search can be limited by taking advantage of nonexclusive sets of choices, and by making choices that result in locally minimal conditions.

Because of the small size of the multiplexer problem, all the search trees are of depth 1, there is never more than a single choice point active at one time, and the greatest number of inquiries for a single focus is three.[6] In general the size of the search tree has an upper bound of O((n*m)^h), where h is the length of the longest sequence of paths from an input to an output, n is the number of devices in each stage, and m is the number of possible choices for each device.

6. The output path can be tested with the address input assigned d1, d2, or d3.

When the choices to be made are exclusive, the program iterates through the possible assignments. This results in a depth-first search. In cases where the choices are not exclusive, the program avoids the iteration -- and thereby some search -- by using a more efficient mechanism that initially chooses all the alternatives at the choice point. Later assignments may then simply rule out some of those alternatives without requiring backing up to the choice point. Search may also be avoided by making choices that are likely to yield better inquiries. Since we prefer inquiries with fewer conditions, we may search the tree in such a way that only those assignments are tried that keep local conditions to a minimum.

Future Directions

All the phases of the diagnostic design procedure have been implemented and tested on a number of examples. But there is much more to the problem of designing diagnostics, as suggested both by the limitations of the inquiry design methodology and by the multiplexer problem described earlier.

There are several important limitations to the methodology of our current system. The rules can only propagate specific values, when at times sets of values would be more appropriate. The system also needs nonlocal information about relationships between values on related paths. In a junction, for example, we may need to know that to obtain a di at the output, exactly one of the inputs must be di while the other inputs have X. Unfortunately such assertions about the behavior of the junction cannot be represented by local assignments to individual paths. One answer to the latter problem is to redefine what things are local: related paths can be grouped as a collection of paths. Now any rules that propagate assertions about collections of paths are in fact local.
If the device is described in a structural hierarchy, we might use this hierarchy as the basis of these collections; unfortunately these might not be the appropriate groupings for solving the problem. The test designer should derive the same conclusions as if it had an appropriate hierarchy available, thereby "discovering" the appropriate global information. Building the global information into the structure description seems like the wrong approach.

More important than these shortcomings in the propagation machinery, however, the system must also be broadened. First, it is clear that the vocabulary of devices is extremely limited. It must be extended to include computational devices such as adders and shifters, as well as devices with state. Second, while path faults are a good place to start on the problem, the possibility of faults in devices must clearly be considered. We anticipate that we will deal with this by the standard approach of using hierarchic descriptions, and by a less traditional approach involving the use of a gradation of condition strengths that will allow us to express minimal device functionalities.

Acknowledgements

For many helpful discussions, thanks go to all the members of the Hardware Troubleshooting group at the MIT AI Lab, especially Randy Davis, Howard Shrobe, Mark Shirley, Harold Haig, Steve Polit, and Dan Carnese.

References

[1] Breuer, M., and Friedman, A., Diagnosis and Reliable Design of Digital Systems, Computer Science Press, 1976.
[2] Roth, John Paul, Computer Logic, Testing, and Verification, Computer Science Press, 1980, Chapter 3.
[3] Lai, Kwok-Woon, Functional Testing of Digital Systems, Ph.D. thesis, Carnegie-Mellon University, Department of Computer Science, December 1981, Technical Report CMU-CS-81-148.
[4] Davis, R., Shrobe, H., Hamscher, W., Wieckert, K., Shirley, M., and Polit, S., "Diagnosis Based on Description of Structure and Function," Proc. AAAI-82, pp. 137-142.
[5] Hamscher, W., and Davis, R., Using Structural and Functional Information in Diagnostic Design, MIT AI Memo 707, June 1983.
A Problem-Solver for Making Advice Operational

Jack Mostow
USC Information Sciences Institute
4676 Admiralty Way, Marina del Rey, CA 90291

Abstract*

One problem with taking advice arises when the advice is expressed in terms of data or actions unavailable to the advice-taker. For example, in the card game Hearts, the advice "don't lead a suit in which some opponent has no cards left" is non-operational because players cannot see their opponents' cards. Operationalization is the process of converting such advice into an executable (perhaps heuristic) procedure. This paper describes an interactive system, called BAR, that operationalizes advice by applying a series of program transformations. By applying different transformation sequences, BAR can operationalize the same advice in very different ways. BAR uses means-ends analysis and planning in an abstraction space. Rather than using a hand-coded difference table, BAR analyzes the transformations to identify transformation sequences that might help solve a given problem. Thus new transformations can be added without modifying the problem-solver itself. Although user intervention is required to select among alternative plans, BAR reduces the number of alternatives by 10^3 compared to an earlier operationalizer.

* This research was supported by DARPA Contract MDA-903-81-C-0335. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government. I am grateful to my dissertation committee (Rick Hayes-Roth, Allen Newell, Jaime Carbonell, and Bob Balzer) for manifold contributions to this research, to my ISI colleagues for their intellectual influence, and to Don Cohen and Bill Swartout for improving this paper.

1. Introduction

Many tasks that are onerous to program seem much easier to specify in terms of a body of advice. Examples include air traffic control ("keep planes three miles apart"), factory scheduling ("minimize re-tooling"), document preparation ("place a figure on the page that mentions it"), and computer dating ("no incestuous matches"). The idea of the advice-taking machine has been around for some time [McCarthy 68]: in this paradigm, the machine accepts advice for how to perform a task and converts it into an effective procedure. [Hayes-Roth 81] explores this paradigm in some depth, showing how advice provided by an expert tutor could be refined by experience.

Several hard problems must be solved to achieve the ambitious long-term goal of a general advice-taker. One problem is to translate advice expressed imprecisely in natural language into a precise machine representation of its meaning. Another problem is to combine different, possibly conflicting pieces of advice. This paper focusses on a third problem: taking advice that is non-operational, i.e., expressed in terms of data or actions unavailable to the machine, and transforming it into a procedure executable using only the available operations. This process is called operationalization [Mostow 81] [Dietterich et al 82].
Operationalization cannot be studied in a vacuum: one must look at how advice is operationalized in the context of a particular task, ideally one that is easy to model but retains the essential properties of advice-based tasks, i.e., that no simple, economical algorithm is known (this rules out tasks like sorting), but good performance can be attained by following known advice (this rules out tasks like earthquake prediction). A task like air traffic control has both properties but is inconvenient to model. This paper uses the card game Hearts as its example domain, and is based on two programs, called FOO and BAR, that accept Hearts advice, encoded in a suitable internal representation, and operationalize it by applying a series of program transformations.

FOO [Mostow 81] was used to operationalize 13 pieces of advice for Hearts and a music composition task, including:[2]

- "Avoid taking a trick with points" -- non-operational because a Hearts player cannot simply refuse to take points
- "Flush out the Queen of spades" -- one player cannot choose the card played by another
- "Don't lead a high card in a suit where some opponent is void [has no cards left]" -- a player may not peek at opponents' cards

By applying different sequences of transformations, the same piece of advice was operationalized in different ways. Thus "avoid taking points" was operationalized both as "play a low card" and as a heuristic search procedure that enumerates possible sequences of play for a trick to determine whether playing a given card might lead to taking points [Mostow 82]. "Flush the Queen" was operationalized as a plan to keep leading spades until whoever has the Queen is forced to play it. The problem of deciding whether some opponent is void in a given suit was operationalized in two ways. One way is to check if someone failed to follow suit earlier when that suit was led. Another is to estimate the probability that someone is void based on the number of cards played in the suit.

2. These pieces of advice were operationalized independently of each other, but in fact they interact. For example, the reason for flushing out the Queen of spades is to make someone else take it. Leading the King or Ace of spades might help flush out the Queen at the cost of taking it, which would violate the advice to avoid taking points.

The examples handled by FOO averaged 60 steps in length. At each step, the decision of which transformation rule to apply, and which part of the advice to apply it to, were made by hand. Since FOO had 230 general transformation rules, the branching factor was on the order of 10^3 to 10^4. FOO's successor, BAR, helps automate the selection of rule sequences, reducing the branching factor to between 1 and 10 at each step. This 10^3 improvement is achieved by a combination of means-ends analysis and abstract planning. BAR currently has 60 rules and handles about half of the 13 examples done using FOO.

2. A means-ends analysis approach

Both FOO and BAR incorporate a problem space [Newell 79] model of operationalization, whose states are expressions in a LISP-like language. For example, the expression (Void p0 s0) represents the condition "player p0 is void in suit s0." The operators in this space are general transformation rules. BAR formalizes the goal structure left implicit in the transformation sequences generated using FOO. Goals in the operationalization space (as opposed to goals in the task domain itself) are expressed in a pattern language containing pattern variables, embedded tests, segments, Kleene star, and some other constructs. A problem in this space consists of finding a sequence of transformation rules that rewrites a given expression to match a given pattern.
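A small sketch may help fix the representation. The fragment below is ours (Python, not BAR's Lisp internals): it encodes expressions as nested tuples and decides the test @IsEvaluable from markings on symbols, anticipating the evaluability fact stated in the next section; the particular markings shown are illustrative assumptions.

    COMPUTABLE = {"In-suit", "Lower", "Low"}   # predicates a player can apply
    OBSERVABLE = {"My-card", "Suit-led"}       # data visible to the player
    CONSTANTS  = {"p0", "s0"}

    def is_evaluable(expr):
        """@IsEvaluable: an atom is evaluable if observable or constant;
        a computable function of evaluable arguments is evaluable."""
        if isinstance(expr, str):
            return expr in OBSERVABLE or expr in CONSTANTS
        head, *args = expr
        return head in COMPUTABLE and all(is_evaluable(a) for a in args)

    print(is_evaluable(("Low", "My-card")))        # True: operational advice
    print(is_evaluable(("Void", "p0", "s0")))      # False: Void is not computable
    print(is_evaluable(("Lower", "My-card",
                        ("Card-played-by", "?q"))))  # False: opponent's card unseen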
This is accomplished by means-ends analysis, succinctly described in [Kant & Newell 82] as the continual comparison of the current state with the desired state (or its description); the result of the comparison (a difference or an opportunity) is used to select the next operator (to reduce the difference or exploit the opportunity). Differences between an expression and a pattern include mismatches between corresponding symbols, argument transposition, and so forth; for a detailed description of BAR's pattern language and the kinds of differences, see [Mostow 83].

Two kinds of top-level operationalization problems are represented in the pattern language by the tests @IsEvaluable and @IsAchievable. The first represents the goal of figuring out how to evaluate a given expression in terms of observable data. The second represents the goal of finding executable actions to achieve a given condition. Subgoals arise when a selected transformation rule does not apply to the current expression; such a subgoal is represented by a pattern language description of the rule's left-hand side.[3] Many means-ends problem-solvers have been built since GPS [Newell 60]; the next two sections describe the novel aspects of BAR.

3. BAR's rules are encoded as procedures, but BAR reasons about them using a more convenient pattern language representation. The pattern language is powerful enough to capture most if not all of each rule.

3. Separation of knowledge

In BAR, different kinds of knowledge are factored apart and represented separately. Task domain knowledge is encoded as concept definitions and features. For example, the predicate Void(player, suit) is defined as (Not (Exists card (Cards-in-hand player) (In-suit card suit))), and In-suit is marked IsPredicate and IsComputable to represent the fact that a Hearts player can test a given card to tell if it belongs to a given suit. BAR currently has 38 of FOO's 112 concept definitions. Program transformations are encoded as rules that do not mention the task domain and may in fact be useful in more than one domain. BAR's 60 rules include unfolding and folding definitions [Darlington 76], approximating a predicate P probabilistically as (High (Pr P)) ("P is likely"), and using a combinatorial formula to compute the probability that two subsets randomly chosen from a given universe will be disjoint. Knowledge about what is operational is encoded as general facts, such as "A computable function of evaluable arguments is evaluable," represented in BAR as (@IsComputable @IsEvaluable*) -matches-> @IsEvaluable. Finally, knowledge about means-ends problem-solving is encoded procedurally. Thanks to this factoring of knowledge, domain knowledge and program transformations can be added to BAR without modifying the problem-solving procedure itself.

Unlike GPS, BAR uses no built-in difference table that indexes directly from differences to operators. Instead, BAR analyzes its transformations to determine what kinds of differences they might help reduce. Some of BAR's knowledge about analyzing transformations is illustrated below.

"If the left side of a transformation rule contains an argument absent from the right side, the rule deletes the sub-expression corresponding to that argument." For example, one such rule approximates a binary ordering as the corresponding unary predicate; this rule is described in the pattern language as (@IsOrdering ?x ?y) -> (@IsPredicate ?x).
BAR deduces that applying this rule to an expression has the effect of deleting the sub-expression bound to the pattern variable ?y. This knowledge is used in operationalizing the advice "avoid taking points" as "play a low card": to evaluate the expression (Lower (My-card) (Card-played-by ?q)) ("my card is lower than the card to be played by player ?q"), BAR decides to eliminate the unevaluable second argument by approximating the expression as (Low (My-card)).

"If the right side of a rule occurs as a sub-pattern of the left side, the rule extracts the sub-expression corresponding to the sub-pattern." Such a rule is useful when the current expression does not match the current goal but one of its sub-expressions does. One such rule is described in the pattern language as (@IsQuantifier ?x ?S [?P FreeOf ?x]) -> ?P. This rule eliminates a quantifier when the quantified predicate is independent of (FreeOf) the quantified variable.

"If both sides of a rule have the same top-most symbol, the rule restructures the expression to which it's applied." Such rules are useful when the arguments of the current expression cannot be rewritten to match the corresponding arguments of the current goal. One such rule transposes the arguments of a symmetric relation: (@IsSymmetric ?x ?y) -> (@IsSymmetric ?y ?x).

"If the right side of a rule is a constant, the rule evaluates the expression to which it's applied." One such rule recognizes that a conjunction containing a false term is false: (And ... False ...) -> False. BAR knows that evaluation is one kind of simplification, and applies simplification rules whenever it can.

4. Abstract plans

BAR faces a combinatorially explosive problem space, involving transformation sequences dozens of steps long, with several rules applicable at each step. To cut down the combinatorics, BAR searches in a simpler abstracted space for plans to reduce differences. This restricts the set of transformations considered to those that lead to a solution in the abstracted version of the problem space. A transformation rule (L ...) -> (R ...) or a fact (L ...) -matches-> (R ...) in the original space is abstracted as L -> R in the simpler space. To rewrite an expression (f ...) to match a pattern (g ...), BAR searches in the abstracted space for paths of the form f -rule-> ... -rule-> g. Such paths are only a few steps long, and are found by depth-first search. Each such path constitutes an abstract plan for rewriting (f ...) to match (g ...).

In the course of applying such a plan, BAR must solve subproblems that arise when a proposed rule does not apply to the current expression. It does so by recursively invoking itself with the left side of the rule as the pattern to match. If the CAR (top-level function) of the expression matches the CAR of the pattern, BAR recursively transforms the arguments of the expression to match the arguments of the pattern; if this fails, it tries to restructure the expression, e.g., by transposing arguments, or looks for alternative plans to get to the goal. BAR's path-finder avoids returning more than one plan with a given first step, so a successful route to the goal may follow a plan part-way and then switch to a new plan in mid-stream. When BAR finds only one plan, it applies it automatically, but when it finds more than one, it scores them based on such factors as the number of steps in the plan and how many of them require subgoaling. It then asks the user to select among them. Alternatively BAR can try them in order of decreasing score, but at present this tends to cause a runaway search.
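The abstracted search is simple enough to sketch. The fragment below is our Python, not BAR's code; the rule table is an illustrative fragment rather than BAR's actual 60 rules, and it omits BAR's restriction to one plan per first step. It finds all depth-first paths from the CAR of the current expression to the goal symbol.

    ABSTRACT_RULES = [
        ("Void", "Not"),            # unfold the definition of Void
        ("Not", "Disjoint"),        # fold into Disjoint
        ("Disjoint", "@IsEvaluable"),
        ("Void", "@IsEvaluable"),   # e.g. via an evaluability fact
    ]

    def abstract_plans(start, goal, limit=4):
        """Return all rule paths start -> ... -> goal up to a length limit."""
        def dfs(sym, path):
            if sym == goal:
                yield path
            elif len(path) < limit:
                for l, r in ABSTRACT_RULES:
                    if l == sym:
                        yield from dfs(r, path + [(l, r)])
        return list(dfs(start, []))

    for plan in abstract_plans("Void", "@IsEvaluable"):
        print(" -> ".join([plan[0][0]] + [r for _, r in plan]))
    # Void -> Not -> Disjoint -> @IsEvaluable   (cf. plan P1 below)
    # Void -> @IsEvaluable

Plans found this way correspond to the plan listings in the example of section 5.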
Of course, an abstract plan may fail; this occurs when the current expression cannot be transformed to match the next step in the plan. To reduce the incidence of plan failure, BAR uses a simple form of learning by experience: it records how often it tries to match the left side of each rule, and how often it succeeds. One result discovered by this simple tuning mechanism is that it is easier to make an expression runtime-evaluable, i.e., to match the pattern @IsEvaluable, than to actually evaluate it, i.e., to match @IsConstant. This knowledge influences the order in which the abstracted space is searched, in the hope that of all the plans that start the same way, BAR will find the one most likely to succeed.

The combination of abstract planning and recursive descent means that BAR plans in a hierarchy of abstraction spaces [Sacerdoti 74] corresponding to successive nesting depths of sub-expressions.

5. A short example

This section shows how BAR operationalizes the condition "player p0 is void in suit s0." [Mostow 83] presents a longer example exhibiting complexities omitted here. The initial problem is

> Goal: Rewrite (Void p0 s0) to match @IsEvaluable

BAR finds 6 abstract plans to get from Void to @IsEvaluable, and rates them:

Plan P1 (rated 226):  Void -unfold-> Not -fold-> Disjoint -matches-> @IsEvaluable
Plan P2 (rated -91):  Void -Fact4-> @IsEvaluable -Rule227-> Implied-by
Plan P3 (rated -167): Void -Rule234-> WasDuring -Rule227-> Implied-by -Fact4-> @IsEvaluable
Plan P4 (rated -108): Void -Rule193-> -Rule173-> True -matches-> @IsEvaluable
Plan P5 (rated -127): Void -Rule405-> High -Fact4-> @IsEvaluable
Plan P6 (rated -240): Void -Rule329-> @IsQuantifier -Rule155-> @IsPredicate -Rule227-> Implied-by -Fact4-> @IsEvaluable

In this example, the user selects plan P5. In more detail, plan P5 is:

1! Approximate the predicate probabilistically using Rule405: ?P -> (High (Pr ?P)).
2? Make the result evaluable.

Plan P5 is rated low because Rule405 is marked as an approximating method. (Plan P1 leads to the same result by a different route; plans P2 and P3 lead to alternative solutions described later; P4 and P6 are dead ends.) The "!" denotes the fact that the transformation producing step 1 can be applied immediately:

(Void p0 s0) -> (High (Pr (Void p0 s0)))

The "?" denotes the fact that step 2 requires some subgoaling. The problem is that although the predicate High is marked IsComputable, there is no general definition for Pr. This problem leads to the subgoal

> Goal: Rewrite (Pr (Void p0 s0)) to match @IsEvaluable

BAR finds one plan to get from Pr to @IsEvaluable and tries it without asking:

2.1? Use the formula for the probability that two randomly chosen subsets of ?U will be disjoint:
(Pr (Disjoint (SetOf x ?U ?P) (SetOf y ?U ?Q))) -> (Pr-disjoint-formula (# (SetOf x ?U ?P)) (# (SetOf y ?U ?Q)) (# ?U))
2.2? Make the arguments of the formula evaluable.

To apply step 2.1, BAR must first reformulate Void in terms of Disjoint:

> Goal: Rewrite (Void p0 s0) to match (Disjoint (SetOf ?x ?U ?P) (SetOf ?y ?U ?Q))

BAR finds 5 plans to get from Void to Disjoint; the user accepts the top-ranked one:

2.1.1! Unfold the definition of Void: (Void p0 s0) -> (Not (Exists c (Cards-in-hand p0) (In-suit c s0)))
2.1.2? Fold into an instance of Disjoint(S1,S2) = (Not (Exists x S1 (In x S2)))

To do step 2.1.2, BAR must reformulate In-suit in terms of set membership:

> Goal: Rewrite (In-suit c s0) to match (In x S2)

BAR finds 4 plans to get from In-suit to In, and the user accepts the top-ranked one:

2.1.2.1! (P c) -> (In c (SetOf y S (P y))), where c has type S

This "jittering" rule [Fickas 80] rewrites an arbitrary predicate P on an object c in terms of set membership: c satisfies P if c belongs to the set of things that satisfy P. Here this rule produces the transformation

(In-suit c s0) -> (In c (SetOf y (Cards) (In-suit y s0)))

This enables the previous plan to be completed:

2.1.2! Fold into an instance of Disjoint:
(Not (Exists c (Cards-in-hand p0) (In c (SetOf y (Cards) (In-suit y s0))))) -> (Disjoint (Cards-in-hand p0) (SetOf y (Cards) (In-suit y s0)))

To match the left side of the disjoint-subsets rule, the first argument must be rewritten:

> Goal: Rewrite (Cards-in-hand p0) to match (SetOf x ?U ?P)

BAR finds one plan to get from Cards-in-hand to SetOf and applies it:

2.1.3! Unfold the definition: (Cards-in-hand p0) -> (SetOf x (Cards) (Has p0 x))

The disjoint-subsets rule can now be applied:

2.1! Use the probability formula for disjoint subsets:
(High (Pr (Disjoint (SetOf x (Cards) (Has p0 x)) (SetOf y (Cards) (In-suit y s0))))) -> (High (Pr-disjoint-formula (# (SetOf x (Cards) (Has p0 x))) (# (SetOf y (Cards) (In-suit y s0))) (# (Cards))))

The resulting expression is evaluable except for the predicate Has, which is not computable. BAR finds 3 plans for evaluating the first argument of the formula; the user accepts the top-ranked one, and BAR carries it out:

2.2.1! Fold into an instance of a computable function: (# (SetOf x (Cards) (Has p0 x))) -> (Number-cards-in-hand p0)

This completes the original plan:

2! The expression is now evaluable:
(High (Pr-disjoint-formula (Number-cards-in-hand p0) (# (SetOf y (Cards) (In-suit y s0))) (# (Cards))))

BAR halts, having achieved its original goal of operationalizing the condition "player p0 is void in suit s0" by reformulating it as there being a high probability that player p0's hand is disjoint from suit s0, based on the size of the hand. A more accurate estimate was derived in FOO by restricting the universe to the set of unplayed cards, but this refinement is omitted here for brevity.
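The combinatorics behind Pr-disjoint-formula are standard, and a sketch may make the final expression concrete: the probability that two randomly chosen subsets of sizes a and b from an n-element universe are disjoint is C(n-a, b)/C(n, b). The Python below is ours; the 0.8 threshold for High, and the use of the full 52-card deck (rather than FOO's refinement to unplayed cards), are assumptions.

    from math import comb

    def pr_disjoint(a, b, n):
        # P(two random subsets of sizes a and b of an n-set are disjoint)
        return comb(n - a, b) / comb(n, b)

    def probably_void(hand_size, suit_size=13, deck_size=52, high=0.8):
        # Operationalized (Void p0 s0): High(Pr-disjoint-formula ...)
        return pr_disjoint(hand_size, suit_size, deck_size) >= high

    print(round(pr_disjoint(2, 13, 52), 3))   # 0.559 for a two-card hand
    print(probably_void(2))                   # False at this threshold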
BAR's generality is illustrated by the fact that it can operationalize (Void p0 s0) in very different ways by following different plans. Plan P2 exploits the assumption that player p0 plays legally, and leads to the inference that a player who fails to follow suit must be void. The main subproblem consists of inferring that the condition (Void p0 s0) is implied by the axiom (Legal p (Card-played-by p)) when (Suit-led) = s0 and (Not (In-suit (Card-played-by p0) s0)). Plan P3 is based on remembering past events, and leads to the inference that a player who was void earlier in the round must still be void now. The key subproblem is to prove that a player who is void remains void. This is done by showing that becoming unvoid would require obtaining a card, which cannot occur during the course of play.

These solutions can be combined to produce the solution "a player who failed to follow suit earlier is definitely void; otherwise, if few cards are left, the player is likely to be void." The order of composition is based on the fact that plan P2 leads to a correct semi-decision procedure whose usefulness is extended by P3, while P1 produces a total but approximate decision procedure. BAR lacks an explicit understanding of these factors, and does not combine solutions.

6. Improvements to the problem-solver

The construction of a general advice-taker remains a long-term goal posing some difficult problems not addressed here, notably integrating multiple pieces of advice. In addition, BAR inherits many limitations of FOO, such as a far-from-complete set of transformations. However, experience with BAR has exposed several more specific problems worth mentioning here.

First, BAR uses a very simple all-or-none model of operationality: either an expression can be evaluated or it cannot. This model ignores such factors as the cost of evaluation, the fact that some expressions can be evaluated sometimes but not always, and the semantic relationship between the original advice and its (perhaps heuristically) operationalized form.

Since BAR does not model the relative quality of alternative solutions, it cannot explicitly model the notion of refining a crude solution into an improved one. The refinement paradigm was evident in more than one example generated using FOO, especially in the synthesis of a heuristic search procedure by applying optimizing transformations to an initial generate-and-test search [Mostow 82]. A simple way to incorporate this into the means-ends analysis paradigm is to treat refinement somewhat like simplification: designate certain rules as refinement rules and automatically include them in the set of options whenever they appear applicable to the current expression (at the cost of increasing the branching factor). Of course the decision whether to actually apply such a rule depends on its relative costs and benefits in the case at hand.

To make operationalization totally automatic, BAR's problem-solver must be substantially improved. One problem is that BAR's depth-first search strategy is too sensitive to selecting the wrong plan, which causes exhaustive exploration of that branch of the search tree. A more cautious breadth-first strategy might avoid this pitfall, but implementing it would be complicated by the recursive nature of the problem-solver. The current plan-rating scheme suffers from the horizon effect in that problems pushed down to a lower level are ignored. For example, plan P1 receives an unrealistically high rating because it ignores the problem of evaluating the arguments of Disjoint. The ratings could be improved by identifying obstacles to the evaluation of an expression and estimating the difficulty of eliminating each one. Identifying the obstacles seems straightforward, but it is not clear how to estimate their difficulty without actually solving them.

A fundamental problem with the planner is caused by using CAR as the abstraction function. This works fine on instances of highly specific functions, like Void, but loses too much information when applied to quantifiers like Exists, logical connectives like Not and And, and general functions like In. This results in the generation of silly plans. This problem might be ameliorated by designing a more discriminating abstraction function. Some such improvement is essential to constrain the generation of abstract plans; although the current branching factor of under 10 is a vast improvement over FOO, the length of the transformation sequences (sometimes over 100) implies that total automation will require an average branching factor very close to 1.

7. Conclusion

BAR was implemented to make explicit the goal structure left implicit in FOO, and in this it has largely succeeded.
Although it does not totally automate the selection of which operators to apply, it has reduced the branching factor by a factor of 10^3, without even counting the effect of plan scoring. It has been used to solve about half the examples done with FOO, and has clarified the improvements needed to handle some of the others: a more sophisticated control structure, and a model of operationality that makes explicit the heuristic nature of operationalization. In short, BAR should be viewed as a problem-solving model rather than as a practical automatic tool, and the broad scope of the operationalization problem makes this likely to remain the case for improved versions of BAR in the forseeable future. However, techniques developed in BAR may soon find practical use in the area of program transformation. An especially promising application is the problem of reformulating system components described in the Gist specification language [Feather 83] in terms of the data and operations available to their implementations [Mostow 83]. Even partial automation of this process could significantly enhance human productivity in developing software [Fickas 82].

References

[Darlington 76] J. Darlington and R. M. Burstall, "A system which automatically improves programs," Acta Informatica 6, 1976, 41-60.
[Dietterich et al 82] T. G. Dietterich, B. London, K. Clarkson, and G. Dromey, "Mostow's Operationalizer," in P. R. Cohen and E. A. Feigenbaum (eds.), Handbook of Artificial Intelligence, Volume 3, Stanford Computer Science Department, Stanford, CA, 1982. In the section on Learning and Inductive Inference; available as STAN-CS-82-913/HPP-82-10.
[Feather 83] M. S. Feather, Closed System Specifications, 1983. In preparation.
[Fickas 80] S. Fickas, "Automatic goal-directed program transformation," in AAAI-80, pp. 68-70, American Association for Artificial Intelligence, Stanford University, 1980.
[Fickas 82] S. Fickas, Automating the Transformational Development of Software, Ph.D. thesis, University of California at Irvine, 1982.
[Hayes-Roth 81] F. Hayes-Roth, P. Klahr, and D. J. Mostow, "Advice taking and knowledge refinement: an iterative view of skill acquisition," in J. A. Anderson (ed.), Cognitive Skills and their Acquisition, pp. 231-253, Erlbaum, 1981.
[Kant & Newell 82] E. Kant and A. Newell, Problem solving techniques for the design of algorithms, Carnegie-Mellon University Computer Science Department, Technical Report CMU-CS-82-145, November 1982. To appear in Information Processing and Management.
[McCarthy 68] J. McCarthy, "The advice taker," in M. Minsky (ed.), Semantic Information Processing, pp. 403-410, MIT Press, Cambridge, MA, 1968.
[Mostow 79] D. J. Mostow and F. Hayes-Roth, "Operationalizing heuristics: some AI methods for assisting AI programming," in IJCAI-79, pp. 601-609, Tokyo, Japan, 1979.
[Mostow 81] D. J. Mostow, Mechanical Transformation of Task Heuristics into Operational Procedures, Ph.D. thesis, Carnegie-Mellon University, 1981. Technical Report CMU-CS-81-113.
[Mostow 82] D. J. Mostow, "Learning by being told: Machine transformation of advice into a heuristic search procedure," in J. G. Carbonell, R. S. Michalski, and T. M. Mitchell (eds.), Machine Learning, Palo Alto, CA: Tioga Publishing Company, 1982.
[Mostow 83] J. Mostow, "Operationalizing advice: a problem-solving model," in Proceedings of the International Machine Learning Workshop, University of Illinois, June 1983.
Simon, “Report on a general problem-solving program for a computer,” in Proceedings of the International Conference on Information Processing, pp. 256-264, UNESCO, Paris, 1960. [Newell 791 A. Newell, Reasoning, problem solving and decision processes: the problem space as a fundamental category, Carnegie-Mellon University Computer Science Department, Pittsburgh, PA, Technical Report, June 1979. [Sacerdoti 741 E. D. Sacerdoti, “Planning in a hierarchy of abstraction spaces,” Artificial Intelligence 5, 1974, 115- 135. 283
AN ANALYSIS OF GENETIC-BASED PATTERN TRACKING AND COGNITIVE-BASED COMPONENT TRACKING MODELS OF ADAPTATION

Elaine Pettit and Dr. Kathleen M. Swigger
North Texas State University

ABSTRACT

The objective of this study was a comparison of the effectiveness in adapting to an environment of populations of structures undergoing modification by four different models: 1) Holland's (2) genetic operator model; 2) a cognitive (statistical predictive) model; 3) a random point mutation model; and 4) a control (non-altered) model.

INTRODUCTION

Holland (3) has reviewed the prolonged success of lifeforms in adapting to an environment through evolution. The biological organism is faced with testing a large set of possible genetic expressions in its offspring by means of environmental interaction with a relatively small subset of realized structures (its own genotype). Nonlinearity and epistatic interactions among gene sets complicate the problem of achieving a successful, if not optimal, genetic complement in offspring. Holland has mathematically hypothesized that genetic operators (e.g., crossing-over) exploit the optimization of reproductive fitness (number of offspring) by a means he terms intrinsic parallelism. Intrinsic parallelism is the testing of a large pool of schemata (the set of all partitions and combinations thereof of a prototypical structure, or genome) by means of a much smaller subset of realized structures.

More simply, consider the structure A consisting of a string of six binary digits, (1 0 1 1 0 1). Each binary digit may be considered to be a detector in an off/on state (i.e., comparable to alleles in genetics). Structure A is a member of a set of structures α which includes all possible strings of six binary digits. There exists a superset Ξ which is the set of strings of length six composed of concatenations of {1, 0, #}, where # represents a "don't care" position, i.e., its value as 0 or 1 is irrelevant. For example, let ξ ∈ Ξ be (1 0 # # 0 1). This ξ is termed a "schema," and all possible schemata compose Ξ, the "pool of schemata." The structure A (1 0 1 1 0 1) is an instance of the schemata (1 0 # # 0 1) and (# # # # 0 1), but not of (0 # # 0 # 1).

Now consider a structure Environ (0 0 1 0 0 1) which represents the "state" of an environment. Fitness, or performance, may then be defined as the number of matching elements between structure A and structure Environ with a one-to-one correspondence: in this illustration, the fitness of A would be 4. Schemata represent the contribution to fitness of single detectors (i.e., alleles) as well as of combinations of detectors. A subset of structures from α constitutes a population. It is, by definition, the goal of adaptation to modify these structures in order to optimize the fitness of the population.
Holland has shown that the genetic operators of crossing-over, inversion, and, to a limited extent, mutation are highly successful in 1) testing a large number of possible schemata through modifications on a much smaller number of realized structures, and 2) exploiting local optima on the way to achieving the global optimum without becoming entrapped, as opposed to what occurs in the simple hill-climbing technique of heuristic search. The purpose of this study was to garner empirical evidence from an abstract computer implementation of Holland's model with regard to alternative models. LISP was the language of choice due to the power of its list-processing functions.

ELEMENTS OF MODEL CONSTRUCTION

I. Primary Data Structures

Environment (ENVIRON) - a list of 12 randomly-selected binary digits (example: (1 1 0 0 0 0 1 1 1 0 1 1)).

Matrices (MATRICES) - a list of transition matrices, one for each binary digit in the environment, used to simulate a Markov-chain type stochastic variation in the states of the environment. (Example: ((.5 .3) (.2 .9) (.6 .4) ... (.3 .7)); the pair (.5 .3) represents the transition matrix giving the probabilities of a change of state from 0 and from 1, respectively.)

Populations - for each model, a list of 12 sublists of 12 randomly selected binary digits, each of these sublists representing a structure. (Example: ((1 1 0 0 1 1 1 1 0 0 0 1) (0 0 1 0 0 1 1 1 0 0 1 0) ... (0 1 1 1 1 1 0 0 0 1 0 0)).)

II. Primary Measurements

1) Adaptation - modification of structures to improve performance.
2) Performance (fitness) - the number of one-to-one matches between a structure in a population and the environment, for a given state of the environment.
3) Averaged Population Performance (population fitness) - the average number of matches over the population for a given transition of the environment.
4) Tracking - the change in averaged population performance over a stated number of environment transitions.

III. Description of Algorithms for Models

1) BETA - the population undergoing the genetic operations of crossing-over, inversion, and mutation (adapted from (2)). A code sketch of these operations follows the algorithm descriptions below.

A. General Model Algorithm: Initialize population, environment, and transition matrices. Find the performance for each structure in BETA; call it MU(i). Define the random variable RAND on {1,...,M} by assigning probability MU(i)/Σj MU(j) to each structure i in BETA, where M is the number of structures in the population. Make M trials of RAND, each time storing the structure at position RAND in BETA at successive positions in auxiliary list BPRIME. For each structure in BPRIME apply inversion and mutation, and, if it is a structure in an even-numbered position, cross it with the immediately preceding structure. Set BETA to BPRIME. Apply the transition matrices to the environment. Repeat all steps except initialization for the desired number of transitions.

B. Algorithm for Crossing-Over: Make a trial of RAND on {0,...,L}, where L is the number of positions in each structure. If RAND = 0 or RAND = L, then no crossing-over takes place. Otherwise, take positions 1 through RAND of the hth structure and append positions RAND+1 through L of the (h-1)th structure. Likewise, take positions 1 through RAND of the (h-1)th structure and append positions RAND+1 through L of the hth structure.

(Illustration of mechanism, RAND = 2):
hth structure:       1 0 0 0 0 0 0 1
(h-1)th structure:   1 1 1 1 0 1 0 1
After crossing-over:
hth structure:       1 0 1 1 0 1 0 1
(h-1)th structure:   1 1 0 0 0 0 0 1

C. Algorithm for Inversion: Make 2 trials of RAND on {1,...,L} and designate the outcomes X1 and X2, respectively. Take the segment of the structure from position MIN(X1,X2) through position MAX(X1,X2), reverse it, and reinsert it into the structure.

(Illustration of mechanism, MIN = 3, MAX = 6):
Structure:         1 0 1 0 1 0 1 0
After inversion:   1 0 0 1 0 1 1 0

D. Algorithm for Mutation: Make a trial of RAND on {1,...,L} and designate the outcome X. Make another trial of RAND on the integers 0,1 and designate the outcome CHANGE. Take position X of the structure and change it to CHANGE.

(Illustration of mechanism, X = 3, CHANGE = 1):
Structure:        0 0 0 0 1 0 0
After mutation:   0 0 1 0 1 0 0
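The sketch below renders the BETA algorithms above in Python rather than the authors' LISP; random-number details and the exact interleaving of the operators are illustrative.

    import random

    def fitness(structure, environ):
        return sum(s == e for s, e in zip(structure, environ))

    def reproduce(beta, environ):
        # M trials of RAND, weighted by MU(i) / sum_j MU(j)
        # (assumes at least one structure has nonzero fitness).
        mu = [fitness(s, environ) for s in beta]
        return [list(s) for s in random.choices(beta, weights=mu, k=len(beta))]

    def cross(a, b):
        cut = random.randint(0, len(a))
        if cut in (0, len(a)):               # RAND = 0 or L: no crossing-over
            return a, b
        return a[:cut] + b[cut:], b[:cut] + a[cut:]

    def invert(s):
        i, j = sorted(random.randrange(len(s)) for _ in range(2))
        s[i:j + 1] = reversed(s[i:j + 1])

    def mutate(s):
        s[random.randrange(len(s))] = random.randint(0, 1)

    def beta_generation(beta, environ):
        bprime = reproduce(beta, environ)
        for h, s in enumerate(bprime):
            invert(s)
            mutate(s)
            if h % 2 == 1:                   # even-numbered (1-indexed) position
                bprime[h - 1], bprime[h] = cross(bprime[h - 1], bprime[h])
        return bprime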
2) MEMPOP - the "cognitive" model. This routine kept track of the number of times in each position that the structure failed to match the environment for each transition. From this frequency the routine then calculated the probability that it should "flip" that position, and adjusted each position accordingly.

General Algorithm: Initialize population, environment, and transition matrices. Create MEMORY, a list of 12 elements, one for each position of each structure in MEMPOP, and set each to 0. For each structure in MEMPOP, do the following for the desired number of transitions: 1) find the performance with the current environment; 2) for each position that does not match, add 1 to the sum at the corresponding position in MEMORY; 3) for each position in the structure, make a trial of RAND distributed uniformly on (0,1), and if RAND is less than the sum at the corresponding position in MEMORY divided by the number of transitions, then flip that bit. Find the averaged population performance over all transitions. (A code sketch of this update appears below.)

3) RANDPOP - underwent random point mutation by the algorithm for mutation above, with the number of mutations being generated from POISSON(1). Structures were taken through the transitions separately, as in the algorithm for MEMPOP above.

4) CONTROLPOP - underwent no modification. Structures were taken through the transitions separately, as in the algorithm for MEMPOP above.
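A sketch of the MEMPOP update and the environment transition follows (our Python). We read each transition pair (p0 p1) as the probability of a bit flipping out of state 0 and out of state 1, respectively -- an assumption, but one consistent with ((0 0) ...) yielding the absorbing, never-changing environment of Test 3 below.

    import random

    def step_environment(environ, matrices):
        # Flip each bit b with probability matrices[i][b].
        return [b ^ (random.random() < m[b]) for b, m in zip(environ, matrices)]

    def mempop_run(structure, environ, matrices, transitions):
        memory = [0] * len(structure)        # per-position mismatch counts
        total = 0
        for t in range(1, transitions + 1):
            total += sum(s == e for s, e in zip(structure, environ))
            for i in range(len(structure)):
                if structure[i] != environ[i]:
                    memory[i] += 1
                if random.random() < memory[i] / t:   # flip at observed miss rate
                    structure[i] ^= 1
            environ = step_environment(environ, matrices)
        return total / transitions           # averaged performance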
-q /I 5y~;~0 ;y;yop :;p$ Kruskal-Wallis (Chisqr approximation): GROUP SUM OF SCORES BETA 1254.00 MEMPOP 1751.00 CONTROLPOP 1064.50 RANDPOP 980.50 CHISQR=16.99 PROBXhisqr = -0007 Reject Ho- Run 2: Matrices = ((-4.6) (-3.9) (-5.8) (-7.5) (-2.1) (.3.3) (-4.6) (.5.1) (.4.7) (.4.2) t.9.91 (.3.5)) ANOVA: AMONG MS WITHIN MS 2.62818 0.278441 F VALUE PROB>F 9.44 -0001 Reject Ho. Duncan's MRT: 4 El L:::::Lpop z:;: 5:::;8 Kruskal-Wallis (Chisqr approximation): GROUP SUM OF SCORES BETA 979.00 MEMPOP 1255.50 CONTROLPOP 1805.50 RANDPOP 1010.00 Chisqr = 20.86 PROBXhisqr = -0001 Reject Ho- Test 2: Values in Transition Matrix Set to Simulate Environment in Steady State With Rare Perturbations The same procedure as in Test 1 was used with the same initial populations but with a different initial environment and different transition matrices. Matrices = ((.l.l) (-1.1) (-9.9) (-1.1) (-9.9) (-1.1) (-1.2) (-1.1) (-1.1) (-9.1) (.1.9)) Results: a = .05; Ho: same as Test 1 ANOVA: Among MS=1.09459 Within MS =0.23934 F Value = 4.57 Prob>F=O.OOSO Reject Ho. Duncan's MRT: -ziz--l~l]\ Kruskal-Wallis (Chisqr approximation) : GROUP SUM OF SCORES BETA 1197.00 MEMPOP 1743.50 CONTROLPOP 1040.00 RANDPOP 1069.50 CHISQR = 15.32 PROBXhisqr = -0016 Reject Ho. Test 3: Transition Matrices Set to Simulate Environment in Absorbing State (Steady State With No Fluctuation) The same procedure as in Test 2 was followed but with transition matrices as follows: Matrices = ((00) (00) (00) (00) . . . (00)) 329 Results: Cl= .05; Ho: Same as Test 1 ANOVA: Among MS=41.4344 Within MS=0.366581 F Value = 113.03 Prob>F=O.OOOl Reject Ho. Duncan's MRT: Group I Mean1 MEMPOP BETA 8.0604 7.9040 I CONTROLPOP 6.1632 RANDPOP I I 5.4596 Kruskal-Wallis (Chisqr approximation): GROUP SUM OF SCORES BETA 1843.50 MEMPOP 1868.50 CONTROLPOP 997.50 RANDPOP 340.50 CHISQR=77.23 ProbXhisqr = .OOOl Reject Ho. DISCUSSION With regard to obtaining and processing infor- mation from an environment with constant, albeit rare, perturbation, the genetic model performed significantly less well than the cognitive model. The mean of its averaged population performance was no better than that of a control population with no tracking mechanism. Rank scores from the Kruskal-Wallis test reflected a similar relation- ship. Likewise, random point mutation aid not produce results significantly different from the genetic or control models. In a stochastically fluctuating environment, the cognitive model tracked significantly better than all the others, but at a very high price in computational over- head (see following discussion). In the third test, that of an environment with no fluctuation, the genetic model performed significantly better than the control and random models, and as well as the cognitive model. Hence, the genetic model did not appear to track an environment very well on a short-term basis, but in matching a highly stable environment, it per- formed, on the basis of structural information, as well as the model possessing the highest level of information concerning each individual bit. As can be noted from the software algorithms, computa- tional overhead for the cognitive model may be assessed as manipulating n structures times x bits per structure. For most applications, x is greater than n, giving a rough complexity estimate of O(n**2). This complexity is reflective not only of arithmetic operations, but of calls to the random number generator (also roughly n**2). 
The genetic model, however, retains and manipulates information on the basis of the structures them- selves, using the "reproductive" fitness as the selection criterion for proportion of inclusion in the next generation. Computational complexity is thus roughly O(n), and the number of calls to the random number generator is also O(n). As Holland mathematically deduces, the genetic opera- tors manipulate the structures so as to 1) increase the averaged population performance, and 2) to test large instances of schemata. The first con- sequence is structure-based information storing and updating. The second is bit-based, without demanding individual bit updating, or even a query concerning the status of individual bits in an applications-oriented measure of performance (one would simply measure the "performance" in terms of a desired property of the system). Therefore, the results of these tests provide empirical evidence in support of Holland's proposals. Part II. Comparisons Between and Within the Genetic and Cognitive Models of Changes in Fitness over Transitions (No Environ- mental Fluctuation) The purpose of this section of the experiment was to see if and how the genetic and cognitive models produce an increase in fitness over time. For all the following tests, BETA and MEMPOP were set to the same initial population. Transition matrices were set to zero. Two runs of fifty transitions each were made with two different initial environments. The averaged population performances were calculated for each transition as in Part I. The same statistical assumptions as in Part I were made concerning the independence, normality, and, initially, the homoscedasity of the underlying sample distribution. Table 2 - Comparison of Mean Averaged Performance Between Genetic and Cognitive Models Test 4: Comparison of Mean Averaged Population Performance Over 50 Transitions Between Genetic and Cognitive Models CX= -05; Ho: The means of the averaged pop- ulation performances of the two models are the same (or rank scores are same for the nonparametric test) . Results: Run 1: ANOVA: Among MS=10.0109 Within MS=0.410682 F Value = 24.38 Prob>F=.OOOl Reject Ho. Duncan's MRT: Wilcoxon 2-Sample Test (Normal Approximation): GROUP SUM OF SCORES BETA 3335.50 MEMPOP 1714.50 5 =5.5840 PROB>/5/=0.0000 Reject Ho. Run 2: ANOVA: Among MS=3.6864 Within MS=0.619144 F Value=5.95 PROB>F=0.0165 Reject Ho. Duncan's MRT: Wilcoxon 2-Sample Test (Normal Approximation): GROUP SUM OF SCORES BETA 2936.50 MEMPOP 2113.50 B = 2.8334 PROB>/5/ = 0.0046 Reject Ho. 330 Conclusion: and its immediate predecessor as above. Reject Ho in both runs and conclude that the averaged population performance of the genetic model is significantly greater than that of the cognitive model. From the 2 runs of 50 transitions each, the change in performance between successive transi- tions was calculated by subtracting from each population performance (except the first) the value of the one immediately preceding it (98 observations). a= .05; Ho: The means of the change in population performance between successive transitions are the same for both models (or rank scores of differences are same). Table 3 - Comparison of Mean Changes for Genetic and Cognitive Models Test 5: Comparison of Mean Change in Performance Between Successive Transitions for Genetic and Cognitive Models Run 1: ANOVA: Among MS=.01805 Within MS=.246721 F = .07 PROB>F=.7874 Do Not Reject Ho. 
-- - Group BETA Mean Difference .071429 MEMPOP -044286 Wilcoxon a-Sample Test (Normal approximation): GROUP SUM OF SCORES BETA 2491.00 MEMPOP 2360.00 5 = -4618 PROB>/%/ = .6442 Do Not Reject Ho. --- - Run 2: ANOVA: Among MS=.0372255 Within MS=.422695 F = .09 PROB>F = -7673 Do Not Reject Ho. --~ - Group Mean Difference BETA .06 MEMPOP -02 Wilcoxon 2-Sample Test (Normal approximation) : GROUP BETA SUM OF SCORES 2500.00 MEMPOP 2351.00 % = -5258 PROB>/B/ = .5990 Do Not Reject Ho. --- - Conclusion: Do not reject Ho. Conclude that mean change in performance between successive transitions is the same for the genetic and cognitive models. the From the above analysis, it was noted that the MS within the groups was greater than that between them. In order to study the variations of the differences in performance within each group, a series of paired-sample t-tests were run on BETA and MEMPOP separately for both sets of the SO- transition data. In computing change over time, "lag" is defined to be the number of transitions between the two environment states for which the population performances are being subtracted. For example, lag 1 is the difference between a value Table 4 - Differences in Performance Within Groups Test 6: Paired Sample t-test for Individual Group Change in Performance (Lag=l) Ct= .05, Ho: The mean difference between successive performances is 0 for the group under consideration. GROUP RUN 1 RUN 2 BETA t=1.16 Prob>/t/=.2518 t=.91 Prob>/t/=.3696 MEMPOP t=.56 Prob>/t/=.5788 t=.18 Prob>/t/=.8585 Conclusion: For all cases, do not reject Ho. Con- clude that the mean change in performance between immediately successive transitions is not signifi- cantly different from 0 for both the genetic and cognitive models. Test 7: Effects of Different Lags on Significance of Change in Performance (Paired Sample t-test) Ct= -05, Ho: A lag of X units has no effect on change in performance between transitions (change = 0). (lag steps not given were not signi- ficant up to the final one) RUN 1 RUN 2 GROUP LAG# t Pr>/t/ LAG# t Pr>/t/ BETA 3 1.57 -1226 3 1.73 .0907 5 2.14 .0382 S 12 1.97 .0567 15 2.06 .0467 S MEMPOP 21 -1.56 -1301 9 - -69 .4955 23 -1.24 -2267 20 -2.30 -0291 s (in negative direction) *S means significant Conclusion: Conclude that a smaller amount of lag time is required for the change in performance to be significant for BETA than for MEMPOP. Also, the t-statistic was in the positive direction for BETA indicating net improvement, whereas it was negative for MEMPOP indicating net deterioration. SUMMARY AND CONCLUSIONS This experiment consisted of two major sections: 1) model performance in stochastically- changing environments with varying rates of fluctuation, and 2) model performance in a non- changing environment. In the first case, it was concluded that the genetic model performed poorly in tracking the changing environments even when the rate of fluctuation was slow. Indeed, it aid Rio better than simple chance (the control model). Likewise, the random point mutation model fared no better, although it has been used to introduce stochastic and, as is hoped, eventually progressive change in the performance of adaptive systems, in fields from traditional biological evolutionary theory to checkers-playing programs. The cognitive model, as was expected, performed significantly better in tracking the environment, but at an unrealistic computational cost. 
It would appear that none of these models offers any gain over established AI routines in tracking independent components of a 331 stochastic environment. Many real-world stochastic environments, how- ever, are not composed of independently-varying components. Information about the change in one component can be used to predict caused or corre- lated changes in another. The description of these relationships is the goal of empirical sciences, and may be considered in this example to be a statistical extension of the cognitive model. Yet it is often not the individual components or even their relationships which concern us, but their collective mean states over time, which may be termed the "pattern" of the environment. Whereas the cognitive model may perform "well enough" on a component-sampling basis for pattern tracking, the genetic model actually outperforms it when the pattern is completely consistent over even a few transitions (note the results in Part II). At the same time, the genetic model does not discard sources of new schemata when an optimum is obtained, allowing for recovery over another set of absorbing transitions when the pattern is altered in a realistic, correlated fashion. In contrast, it is possible for a component-sampling model to lock into a present optimum that was maintained over suffi- cient transitions: the probability for change would become miniscule. In particular, note the results of the Part II lag tests -- the initial performance level was maintained with very little variation. One area of current software development for which these findings have special significance is that of voice recognition and synthesis. Again, the concern is with an overall pattern in the collective mean states of phonemes, not in their individual variation. A population of structures consisting of variations in specific phoneme enunciation (alleles) might be maintained and genetically manipulated to quickly and accurately match incoming phonemic constructs. A wider range of enunciation variability could be tolerated with a level of accuracy at least as good as individual component sampling at much less the computational overhead. There are also implications in the field of population and evolutionary biology for these findings. It has recently been postulated that evolution occurs in spurts, rather than with a steady progression (1). "Missing links" have been found for very few species. As can be discerned from the lag data below for the genetic group (BETA) large lljumps" in performance change occur over very small transition increments. A steady state is then achieved, with another "jump" then occurring. RUN 1 Lag No. t Pr /t/ 1 1.16 -2518 3 1.57 .1226 7 1.66 -1036 jump r 8 1.73 -0906 10 1.97 .0563 11 1.94 -0593 12 1.97 -0567 RUN 1 (continued) Lag No. t Pr /t/ c 13 1.91 -0644 jump 14 1.94 -0607 15 2.06 -0467 RUN 2 Lag No. t Pr /t/ l-l .91 -3696 jump L P 3 1.73 -0907 jump t 5 2.14 .0382 A caveat must be issued concerning the high level of abstraction of this model and its use of the simplest, and by no means only, genetic opera- tors. However, observance of such a sequence of change even at this level is significant and warrants further refinement and testing of the model. Secondly, most species in a consistent environment evolve toward "specialization", in which they occupy a very narrow niche. Catastro- phic perturbations (as in the fluctuating environ- ments of Part I in which components are often inversed) doom the most highly specialized. 
In the above genetic model, information processing and integration into the population base was too slow to track such an environment above mediocre performance. In a consistent environment, however, the genetic model evolved toward and maintained a set of highly similar, suitably matching structures. Random point mutation proved fruitless in all instances. This model may thus provide the basis for a suitable abstraction of population evolution and niche occupation.

In his book, Holland lists numerous other areas of possible application, as well as specific, concrete models that have been developed based on genetic operators. It is the authors' hope that the presentation of software and empirical data supporting Holland's model may motivate further interest in this area.

REFERENCES

(1) Gould, Stephen Jay, Ontogeny and Phylogeny, Cambridge, Mass.: Harvard University Press, 1977, 501 pp.
(2) Holland, John H., "Adaptation," in Progress in Theoretical Biology, Vol. 4, ed. by R. Rosen and F. M. Snell, NY: Academic Press, 1976, pp. 263-293.
(3) Holland, John H., Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, Ann Arbor: The University of Michigan Press, 1975, 183 pp.
(4) Quirin, William L., Probability and Statistics, New York: Harper & Row, 1978, p. 281.
A DOUBLY LAYERED, GENETIC PENETRANCE LEARNING SYSTEM

Larry A. Rendell
Department of Computing and Information Science
University of Guelph
Guelph, Ontario, Canada N1G 2W1

ABSTRACT

The author's original state-space learning system (based on a probabilistic performance measure clustered in feature space) was effective in optimizing parameterized linear evaluation functions. However, more accurate probability estimates would allow stabilization in cases of strong feature interactions. To attain this accuracy and stability, a second level of learning is added: a genetic (parallel) algorithm which supervises multiple activations of the original system. This scheme is aided by the probability clusters themselves. These structures are intermediate between the detailed performance statistics and the more general heuristic, and they estimate an absolute quantity independently of one another. Consequently the system allows both credit localization at this mediating level of knowledge and feature interaction at the derived heuristic level. Early experimental results have been encouraging. As predicted by the analysis, stability is very good.

I. INTRODUCTION

In [7] the author described a successful state-space learning system (PLS1). Given a set of features, PLS1 will decide which are useful, and incrementally and efficiently determine the weight vector for the heuristic, a linear evaluation function. Heuristics have been repeatedly generated which solve the fifteen puzzle, and which are locally optimal in the weight space. This is a new result.

The underlying concept is a refinement of Doran and Michie's [4] search penetrance which measures solution density in feature space (see Fig. 1). Derived from repeated observations of this search statistic, the evolving evaluation function is designed not to estimate path distance remaining to the goal from a state A, as is often the case, but rather to predict the probability of A's eventual solution participation. For a given feature space volume r, the elementary penetrance p(r, H, P) depends on the particular problem instance P and heuristic H used for solving. However the true penetrance p̂(r) is the ideal, defined for breadth-first search of all possible problem instances combined. Of course finding these values is infeasible, but penetrance learning systems (PLS's) estimate them.

As modelled by Buchanan et al [3] and exemplified in Fig. 2, a typical learning system (LS) comprises:

1. An algorithm schema for some primary task, the performance element PE
2. Some separable control structure S for the PE
3. The critic, whose role is "[to analyze] the current abilities of the performance element" [3], by assessment of the overall effectiveness of S and sometimes also by localization of credit within S
4. The learning element LE, designed to improve S according to recommendations of the critic
5. A blackboard on which to store S and other information between activations of these algorithms.

In a penetrance learning system (PLS), the blackboard retains knowledge of the relationship of penetrance to feature values in a partition of the feature space, a set of regions of various sizes and shapes (Figs. 1, 2).

Figure 1. Localized penetrance discriminates. Developed nodes from search tree T are mapped into feature space F.
The whole space penetrance of T is 3/6, whereas (e.g.) localization in F gives three elementary penetrance values: p(r1,T) = 1/1, p(r2,T) = 1/2, and p(r3,T) = 1/3.

In a somewhat simplified form [cf. 7,8], this cumulative region set can be written as C = { (r, p̂_r) }, where r is the feature space volume and p̂_r is its estimated true penetrance. This structure C is the essence of heuristic knowledge for the PLS, both determining the control structure for the solver, and also being the foundation on which the adaptive elements build. (See [7] for details.)

In the original system PLS1 there is no critic; no overall measure of solver capability is required. Rather the learning element operates locally on the cumulative regions. The learning element of PLS1 is detailed in [6,7,8] and briefly described in the following: The LE includes two major algorithms, the clusterer and the regresser. The clusterer modifies the cumulative region set C using solver statistics. As information accumulates over several iterations, the regions are incrementally resolved into smaller units just adequate to express known relationships. In addition, the true penetrance estimates of C are revised each iteration: fresh, elementary values are unbiased (normalized to true penetrance), then averaged in. The result is an effective economy, a refinement of Samuel's [9] signature tables, which did not alter data categories automatically. Although the clusterer is data-driven, its product C is unsusceptible to noise, because of the stochastic nature of the process.

From C = { (r, p̂_r) } the feature coefficient vector b is determined by stepwise regression of log p̂_r on the centroid of r. This is a selective procedure which screens features and expresses their relative importance. The resulting evaluation function H = exp(b·f) predicts true penetrance.

Rather than being confined to these linear combinations, features must ultimately be merged more flexibly if the system is to attain full generality (e.g. see Berliner [1]). In [8] feature interaction is accommodated using piecewise linearity, localizing b to individual regions. However, handling nonlinearities presents a severe problem of stability unless true penetrance estimates are quite reliable. To tackle this difficulty, the genetic model is applied.

A genetic or reproductive plan is an inherently parallel scheme which can efficiently locate global optima. The theory was developed by Holland [5], summarized and exemplified in Brindle [2], and successfully incorporated in a learning system by Smith [10]. Shown in Fig. 3, the present extension PLS2 can be considered as a second layer LS which activates its performance element PLS1 multiply, with a different control structure each time. Essentially PLS1 operates in parallel, each process using an individual cumulative region set of the competing population. (The blackboard of PLS2 is the union of PLS1 blackboards.) The critic and learning component of PLS2 make comparisons and improve the population. Briefly outlined in [6], PLS2 is developed below.

Figure 2. Penetrance learning system PLS1. The simplest control structure for a solver is a vector of weights for features of an evaluation function. The essence of PLS knowledge is a set of feature space penetrance regions, used to determine this heuristic and to accumulate experience.

Figure 3. The second layer of learning. PLS2 activates PLS1 with different region sets, which it continually improves.
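The PLS1 pipeline just described -- a cumulative region set, regression of log p̂ on region centroids, and the heuristic H = exp(b·f) -- can be sketched as follows. This is an illustration under my own simplifications (plain least squares stands in for the paper's stepwise, feature-screening regression, and the region data are invented):

import numpy as np

# A cumulative region set C = {(r, p_hat_r)}: each region is represented
# here only by its centroid in feature space and its estimated true penetrance.
regions = [
    (np.array([1.0, 0.2]), 0.50),
    (np.array([0.4, 0.8]), 0.25),
    (np.array([0.1, 0.1]), 0.05),
]

# Regress log(p_hat) on the region centroids.
F = np.array([f for f, _ in regions])
y = np.log([p for _, p in regions])
F1 = np.hstack([F, np.ones((len(F), 1))])        # allow an intercept term
coef, *_ = np.linalg.lstsq(F1, y, rcond=None)
b, b0 = coef[:-1], coef[-1]

def H(f):
    # H = exp(b . f): predicted true penetrance of a state with feature
    # vector f, i.e. its chance of participating in a solution path.
    return float(np.exp(b @ f + b0))

print(H(np.array([0.8, 0.3])))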
II. THE CRITIC

As explained above, the cumulative region set is essentially the control structure for the solver. In PLS2 a different region set guides each solver activation (Fig. 3), and a resulting performance statistic provides a basis of comparison to measure the relative worth of each structure. By taking advantage of performance patterns across region sets, credit can be localized to individual regions.

Suppose that K cumulative sets C(k) are available (1 <= k <= K). Let each C(k) determine control for the solver in K separate runs, each attempting problem instances of difficulty d. (A "presolver" decides on an appropriate value of solution depth d for otherwise random selection of training problems.) Already calculated as a by-product by the solver (but not used in PLS1) are two overall measures of performance: the solution length L and number of nodes developed D; these are stochastic functions of C(k) and d. Choose some functional F of L and D (e.g. simply D). Then, given a cumulative region set R = C(k) (k <= K), define its coarse utility µ(R) to be F̄(d) / F(R, d), where F̄ is the mean value of F over all K sets. Values of µ will therefore center around one. µ is a typical fitness measure for a reproductive plan (see Section III); however the critic is designed to extract more information than this.

To quantify credit localization, (pairwise) comparison of regions is used. Consider first the simplification in which each cumulative region has a counterpart in every other parallel set; i.e. feature space rectangles match precisely, and only true penetrance estimates differ. An example of this situation is shown in Fig. 4. For each focus region R and comparison region Q, define the likeness(R, Q) = 1 - |p̂_R - p̂_Q| / max(p̂_R, p̂_Q), where p̂_R and p̂_Q are the true penetrance estimates of R and Q. In the general case of dissimilar rectangles, this definition becomes asymmetric: now a focus region R is compared with each cumulative set Q = C(k). Depending on the extent of its intersection with R, each Q ∈ Q contributes to a varying degree to the overall penetrance similarity of Q to R.

This likeness measure, together with region set performance, provides veiled information relating p̂_R to its accuracy. Assuming coincidence of true penetrance with optimal utility, R ∈ R will tend to improve the performance µ(R) if R is accurate, and regions similar to R will be likely to aid their sets.

Consider again Fig. 4. Each cumulative set C(k) will have determined a heuristic for attempting training problems of similar difficulty, and the resulting coarse utilities might be as indicated. We can conclude that regions in set C(2) have generally better estimates than those in C(1) or C(3).

Figure 4. Different region sets cause varying performance µ. In this simplified picture (of just 3 parallel sets C(k)) rectangles always match and only true penetrance estimates (shown inside) differ. Here µ1 = 1.0, µ2 = 1.3, and µ3 = 0.7.
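A minimal sketch of the critic's two measures as just defined (my own rendering: it uses the simple choice F = D, the matched-rectangle likeness, and a quadratic fit in the spirit of the parabolas of Fig. 5):

import numpy as np

def coarse_utility(costs):
    # mu(R) = mean(F) / F(R, d) for each region set, where F is here the
    # number of nodes developed D. Values center around one; larger is better.
    costs = np.asarray(costs, dtype=float)
    return costs.mean() / costs

def likeness(p_R, p_Q):
    # likeness(R, Q) = 1 - |p_R - p_Q| / max(p_R, p_Q), matched rectangles.
    return 1.0 - abs(p_R - p_Q) / max(p_R, p_Q)

def fine_utility(likenesses, utilities):
    # Fit coarse utility against likeness across the K parallel sets and
    # read off the predicted utility at likeness = 1 (needs >= 3 points).
    a, b, c = np.polyfit(likenesses, utilities, 2)
    return a + b + c                      # parabola value at likeness = 1

mus = coarse_utility([120, 90, 150])      # nodes developed by 3 region sets
ls = [likeness(0.5, q) for q in (0.45, 0.52, 0.20)]
print(fine_utility(ls, mus))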
If, for each focus region R, the coarse utilities are plotted as a function of likeness, a peak will likely occur at the point of greatest accuracy, and if a curve is fitted to the data, the resulting utility at likeness = 1 will estimate the pure utility of R ∈ R, rather independently of other regions in R. Various outcomes are illustrated in Fig. 5, in which parabolas are fitted. The first (a) would never occur, since it suggests that regions from a single location of feature space in every cumulative set are nearly exclusively responsible for the overall performance of the heuristic. The other two diagrams are more likely: there is little utility attributable to a single focus region R (R could be quite accurate while its neighbours vitiate the heuristic). Fig. 5(b) indicates a situation in which R is quite inaccurate but parallel sets typically do not suffer as severe a deficiency. Fig. 5(c) might result if all regions in the set containing R are fairly sound. Here competing sets are often poorer; in particular many rivals of R are less accurate (and therefore unlike R).

One would expect generally poor fits for most individual focus regions but significant knowledge overall, since many focus regions are assessed (all JK of them, with an average of J in each set and K sets). The precise mechanism for this information extraction is straightforward: perform a regression, then compute the fine utility ν(R) of focus region R, which can be defined variously but similarly as µ'r or µ'r²g(K), where µ' is the predicted utility at likeness = 1, r is the correlation coefficient, and g is some function increasing monotonically with argument K (population size). ν = 1 indicates uncertainty, while larger or smaller values of ν show confidence in greater or lesser quality. The fine utilities corresponding to Fig. 5(b) and (c) are 0.7 and 1.2, respectively, with the simpler definition above.

Figure 5. Credit localization. Examination of region similarity (abscissa) across multiple cumulative sets allows extraction of patterns in performance (ordinate).

III. THE REPRODUCTIVE PLAN

To utilize this information, PLS2 incorporates a novel version of Holland's genetic plan [5]. A genetic algorithm uses parallel structures called genotypes; each determines the phenotype, a set of attributes characterizing an individual. The fitness of an individual is its performance in the environment. This measure is used to favour selection of successful parents for new offspring, so the whole population incrementally evolves toward greater utility. Theory [5] shows that knowledge about desirable phenotypes is advantageously stored in the population itself, implicitly in the surviving genotypes. Reproductive plans can locate global optima efficiently [5,10].

In the design of an artificial genotype, one issue is whether to use many loci (variables) with few alleles (values), or vice versa. To allow greater adaptability, binary alleles are generally chosen, although this can cause problems [2, pp. 24-26, 44-47]. This issue dissolves in PLS2: First, population variance is aided by the learning element of PLS1, so the alleles need not be binary. Instead the allele set is the continuous interval [0, 1], representing true penetrance. Secondly, the loci of a PLS2 genotype correspond to feature space coordinates; however they are compressed into unordered volumes, and their number depends on current knowledge refinement. The genotype is the cumulative region set. Since regions estimate true penetrance, and only within their own boundaries, these "loci" are independent, thus precluding another problem: inefficiency due to loci interaction [2, p. 170]. The consequent phenotype, the feature coefficient vector, can still be nonlinear (when a high order model is used this vector is regionalized -- see [8]).

To optimize the population, a reproductive plan includes algorithms for parent selection and offspring generation.
Parent selection is natural: Each individual has an associated fitness measure such as the coarse utility of Section II. This simply defines a probability distribution for candidates so that successful parents are favoured. The fitness measure is normally a property of the individual as a whole; typical applications do not admit localization of credit since loci usually interact. However the PLS2 genotype -- the cumulative region set -- allows the assignment of fine utility (Section II).

The genotypes of a population are both repositories of knowledge and also sources of subtle variation for exploration. To achieve balance in this mutual role, offspring generation typically adopts biological operators such as mutation (unary) and crossover (binary or bisexual). In contrast, PLS2 is K-sexual (where K is population size); all regions are merged into a single set before selection. Moreover, alleles (true penetrance estimates) are untouched (it is the lower level learning element PLS1 which alters these); regions are simply chosen stochastically as loci/alleles according to their fine utility. To create reasonable offspring region sets, the utility of every candidate region (its probability of selection) is continually adjusted to account for the current feature space cover V defined by the regions so far selected (candidates are less useful if they overlap much of V). This formation of a new set halts when a V is attained which is close to the maximum. Hence a new individual arises which has a high likelihood of penetrance accuracy. K' genotypes are created in this manner to replace all of the old population.
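The K-sexual offspring generation just described might be sketched as follows. This is a toy illustration under my own assumptions: feature space volumes are modeled as sets of discrete cells, and the overlap discount is a simple fraction of uncovered cells.

import random

def make_offspring(pool, total_volume, cover_goal=0.95):
    # Regions from ALL parents are pooled, then drawn stochastically with
    # probability proportional to fine utility, discounted by overlap with
    # the feature space cover V selected so far.
    chosen, covered = [], set()
    candidates = list(pool)              # each: (name, cells, fine_utility)
    while candidates and len(covered) < cover_goal * total_volume:
        weights = []
        for name, cells, u in candidates:
            fresh = len(cells - covered) / len(cells)   # fraction not yet in V
            weights.append(max(u * fresh, 1e-9))
        pick = random.choices(range(len(candidates)), weights=weights, k=1)[0]
        name, cells, u = candidates.pop(pick)
        chosen.append(name)
        covered |= cells
    return chosen

# Toy feature space of 6 cells; three candidate regions pooled from parents.
pool = [("r1", {0, 1, 2}, 1.2), ("r2", {2, 3}, 0.9), ("r3", {3, 4, 5}, 1.1)]
print(make_offspring(pool, total_volume=6))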
IV. PRELIMINARY RESULTS AND CONCLUSIONS

The second layer system PLS2 has been programmed and testing has begun. In particular, comparisons are being made with the already successful PLS1 [7]. Perhaps the most obvious improvement is in terms of stability. Whereas PLS1 is sensitive to various run parameters and appropriate training problems, PLS2 overcomes any abnormalities by immediately dismissing aberrant information. Additional time costs appear small. Investigations are continuing to discover the effects of varying system parameters such as population size, and in particular to determine the ability of PLS2 using more highly interacting features, which PLS1 cannot handle alone.

In summary, PLS2 is promising from several viewpoints: As support for PLS1, PLS2 improves region accuracy and stability, important for feature interaction [8]. As a genetic algorithm, PLS2 seems especially efficient because of the independence and flexibility of individual loci (regions). These characteristics avoid typical problems which can degrade efficiency, and also aid credit localization which usually improves it. Despite the absence of explicit genetic operators, the ability to discover global optima may be retained since PLS1 already provides (controlled) population variance. Finally, as a scheme for knowledge accumulation, PLS2 benefits from information layering. A mediating structure, the cumulative feature space region set (storing conditional probability of success in task performance), allows both credit localization and variable interaction: The elements of this set, the regions, are independent of one another, but determine the task heuristic, which can incorporate feature nonlinearities.

ACKNOWLEDGEMENT

This research was supported in part by a grant from the Research Advisory Board of the University of Guelph.

REFERENCES

1. Berliner, H., On the construction of evaluation functions for large domains, Proc. Sixth IJCAI, 1979, 53-55.
2. Brindle, A., Genetic algorithms for function optimization, C.S. Department Report TR81-7 (PhD Dissertation), University of Alberta, 1981.
3. Buchanan, B.G., Johnson, C.R., Mitchell, T.M., and Smith, R.G., Models of learning systems, in Belzer, J. (Ed.), Encyclopedia of Computer Science and Technology 11 (1978) 24-51.
4. Doran, J. and Michie, D., Experiments with the graph-traverser program, Proc. Roy. Soc. A 294 (1966) 235-259.
5. Holland, J.H., Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975.
6. Rendell, L.A., State-space learning systems using regionalized penetrance, Proc. Fourth Biennial Conference of the Canadian Society for Computational Studies of Intelligence, 1982, 150-157.
7. Rendell, L.A., A new basis for state-space learning systems and successful implementation, Artificial Intelligence 20, 4 (July 1983).
8. Rendell, L.A., A learning system which accommodates feature interactions, Proc. Eighth IJCAI, 1983.
9. Samuel, A.L., Some studies in machine learning using the game of checkers II -- recent progress, IBM J. Res. and Develop. 11 (1967) 601-617.
10. Smith, S.F., A learning system based on genetic adaptive algorithms, PhD Dissertation, University of Pittsburgh, 1980.
Generating Hypotheses to Explain Prediction Failures

Steven Salzberg
Yale University
Department of Computer Science
New Haven, Connecticut

Abstract

Learning from prediction failures is one of the most important types of human learning from experience. In particular, prediction failures provide a constant source of learning. When people expect some event to take place in a certain way and it does not, they generate an explanation of why the unexpected event occurred [Sussman 1975] [Schank 1982]. This explanation requires hypotheses based on the features of the objects and on causal relations between the events in the domain. In some domains, causal knowledge plays a large role; in some, experience determines behavior almost entirely. This research describes learning in intermediate domains, where causal knowledge is used in conjunction with experience to build new hypotheses and guide behavior. In many cases, causal knowledge of the domain is essential in order to create a correct explanation of a failure. The HANDICAPPER program uses domain knowledge to aid it in building hypotheses about why thoroughbred horses win races. As the program processes more races, it builds and modifies its rules, and steadily improves in its ability to pick winning horses.

1. Introduction

This research models a person learning in a new domain, in which he creates rules to explain the events in that domain. When rules succeed, he confirms or strengthens already held beliefs; but when rules fail, he can learn by explaining the failures. For example, a stock market analyst creates new rules about how the market works when an investment fails to yield a profit. The new rules are based on the relevant features of companies in which he is investing. He determines which are the relevant features by querying his knowledge about how each feature affects a company's performance. His knowledge of causality allows him to determine in advance, for some features, whether they will predict improving or declining performance; for example, he knows if an oil company makes a big strike in an offshore well, its stock will probably go up. The causal knowledge involved here is that the earnings of oil companies are directly related to the amount of new oil they discover. New oil leads to increased earnings, which in turn cause stocks to go up. A similar type of hypothesis generation occurs in the domain of horse racing. Here, whenever a horse wins (or loses) contrary to expectations, new rules about why a horse wins or loses are generated by the racing handicapper, who bases his rules on the data available to him about the horse.

*This work was supported in part by the Air Force Office of Scientific Research under contract F49620-82-K-0010.

2. The Effect of Domain Characteristics

Regardless of the domain in which one is learning to be an expert, certain rules about learning apply. One needs to know, among other things:

1. What features exist for the objects in the domain
2. What features are relevant; i.e., which ones affect performance and how strongly
3. How the features interrelate (causal knowledge)

Features will be loosely defined here as anything a person can notice consistently when it is present; for example, a nose is a feature of a face because we have the ability to see it when we see someone's face (even though we don't have to notice it). For some domains, the knowledge of the above items is much easier to obtain than others. For the current project a domain was chosen in which this knowledge is relatively easy to obtain.
2.1. Horse racing

Thoroughbred horse racing is a domain where the relevant features are pretty clear. As for area (1) above, what features exist, most racing experts (according to two experts I consulted) get all their data from the Daily Racing Form, a daily newspaper which lists, for each horse in a race, essentially all the possibly relevant data about a horse. Area (2) is a little more difficult. By questioning racing experts, though, it is possible to throw out at least some of the data that appears in the Form, because they ignore some of it. As for the third and most difficult area above, the causal knowledge of the domain, again experts provide some clue. (Causal knowledge will be discussed in detail later.) For example, two items of data are how a horse finished in its last race and the claiming price of that race. (The claiming price is the value for which any horse in a given race may be purchased. Higher claiming prices indicate better, or in other words faster, horses.) If a horse won its last race, it is quite likely that the claiming price of that race was lower than the current one, because an owner is likely (indeed, required in some cases) to move his horse up in value when it is doing well. As will be shown later, such causal knowledge will be of use in restricting the possible hypotheses that may be generated to explain a given failure.

3. Causal Knowledge

When a person fails at a prediction, he generates new hypotheses to explain the situation where he failed, and uses the features mentioned in the hypotheses to index the situation [Schank 1982]. A central question of much research on
Previous models of learning have had some success with this approach because they pre-selected the relevant features [Lebowitz 19801, or because they only allowed no more than one or two features to be used in hypothesis generation [Winston 19751, thus finessing the combinatorial explosion problem. HANDICAPPER, though, does not know ahead of time which features to use in generating a hypothesis (although the Daily Racing Form does provide some constraints). Now it turns out that if one uses causal knowledge to generate an explanation, the question of which features are relevant is answered automhtically. An explanation, which will rely on basic knowledge of how the domain works, will only include those features which are necessary to make the explanation complete. Irrelevant features will not be used in the explanation, and hence they will never be used as indices for the expectation failure. The following example illustrates the importance of generating hypotheses based on explanations: horse A has just won a race, and a comparison of A with the horse which was Horse Decorous predicted to win shows that A has ten features that the other horse lacks. Assume further that one of these features is the ability to run well on muddy tracks. One explanation which someone might offer is that A won because it was a good mud runner. The next thing to ask is, was the race run on a muddy track? If the answer is yes, then the explanation is complete, and the other nine features can be disregarded. If the track was not muddy, on the other hand, this feature cannot be part of any explanation in this instance. The important thing is that one does NOT want to use all ten features in every combination to explain the failure: the thousand or more new conjunctions which would be created as a result would be mostly useless and, what’s worse, completely absurd as a model of how experts explain their failures. The fact is that people are usually content with only one explanation, and if not one then two or three possibilities at most will suffice. Again, the crucial fact to recognize is that, as difficult as explanation is, it solves other difficult problems about hypothesis generation that cannot be handled by simple “inductive generalization,” particularly the “what to notice” problem [Schank & G. Collins 19821. 4.1. The necessity for causal knowledge: an example There are cases when causal reasoning is absolutely essential to explain a failure. The primary example for my purposes here is a thoroughbred race run at Belmont track on September 29, 1982. The IIANDICAPPER program and a racing expert whom I asked both agreed that the horse Well I’ll Swan should win that race, and in fact the expert said that it should “win by a mile.” The actual result of the race was that the horse Decorous won and Well I’ll Swan finished second. The most significant features of each horse are summarized in the Table 1”. One possible hypothesis using simple feature analysis, is to assume that the dropping down feature of Well I’ll Swan is the reason for the loss, since this feature is the only difference between the horses. Anyone who knows about horse racing, however, knows that dropping down is usually a good feature, so this explanation is not adequate. \Vhen the expert was told that, in fact, Decorous won, he re-examined Well I’ll Swan to explain why that horse did not win. What he noticed, and what he said he had been suspicious of in the first place, was that Well I’ll Swan looked too good. 
Table 1: Comparison of two horses

Well I'll Swan                                  Decorous
Finished well in last two races (4th, 1st)      Finished well in last two races (1st, 3rd)
Claimed in last two races                       Claimed in last race
Dropped down $15,000 from last race

* "Claimed" in the table means the horse was purchased by a new owner just before the race.

The reason is that if a horse is winning, as this horse was, then he should be moving
Examples include: Rulel: Finishing poorly (worse than 4th) in several races in a row should cause an decrease in the claiming value of a horse. Rule2: Vinning a race easily (a ‘wire to wire' victory, where a horse is in the lead from start to finish) should cause an increase in the claiming value of the horse. If these or other causal rules are violated, then the horse is looked upon suspiciously, as possibly not “well meant” (e.g., the race might be fixed). In the example with Decorous and Well I’ll Swan, where a horse was both doing well and dropping down, the simultaneous presence of both of these features caused the program to notice a violation of its causal rules. The explanat,ion it generated has already proven useful in making predictions about later races: in particular, the program processed another race with a horse similar to Well I’ll Swan, in that it had been claimed recently and dropped down. The program correctly predicted that this later horse would not win (and the horse it actually picked did, in fact, win the race). When a causal violation occurs, the features responsible for that violation become the sole hypothesis for the prediction failure. Lacking such a violation, the program generates several hypotheses, one for each combination of the feature differences between the predicted winner and the actual winner. These hypotheses are then used to re-organize memory, where the features become indices to a prediction. Similar sets of features on future horses will thereby result in predictions that reflect the performance of horses with those features which the program has seen in the past. The knowledge of which features are good and which are bad is enough to constrain the number of such combinations enormously. Before such knowledge was added to the program, it generated on the order of 5000 new hypotheses for the first three races it predicted, but by adding this knowledge the number of hypotheses was reduced to less than 100 (the number of combinations grows exponentially, and the knowledge of whether features were good or bad reduced the number of feature differences by about half). The addition of causal reasoning rules reduced the number further to only about eight hypotheses per race. 6. Conclusion HANDICAPPER currently makes predictions on the dozen or so races in its database which agree in general with experts’ predictions. After seeing the results of all the races, it modifies its rules so that it predicts every winner correctly if it sees the same races again (without, of course, knowing that it is seeing the same races). The source of all new hypotheses is failure, and in particular the failure of old theories adequately to predict future events. It is this fact which makes failure such an important part of the learning process. Without failures, there never is a need to generate new explanations and new predictions about how the world works. By reorganizing memory after each failure, the HANDICAPPER program makes better and better predictions as it handicaps more ~ACCS. This memory-based approach to organizing expert 354 knowledge should prove useful, and in fact indispensable, in a program attempting to become an expert in any domain. The issues raised here demonstrate the increasing importance of causal knowledge in reasoning about complex domains, particularly the need to consult domain knowledge so as to avoid generation of incorrect explanations. 
Acknowledgements

Thanks to Larry Birnbaum, Gregg Collins, and Abraham Gutman for many useful comments and help with the ideas in this paper.

Bibliography

Collins, A. Studies of plausible reasoning. BBN report 3810, Bolt Beranek and Newman, Inc., Cambridge, MA, 1978.
Lebowitz, M. Generalization and Memory in an Integrated Understanding System. Ph.D. thesis, Yale University, 1980.
Schank, R. Dynamic Memory: A theory of reminding and learning in computers and people. Cambridge: Cambridge University Press, 1982.
Schank, R. & Abelson, R. Scripts Plans Goals and Understanding. New Jersey: Lawrence Erlbaum Associates, 1977.
Schank, R. & Collins, G. Looking at Learning. Proceedings of ECAI-82, pp. 11-18.
Sussman, G. A Computer Model of Skill Acquisition. New York: American Elsevier, 1975.
Winston, P. Learning Structural Descriptions from Examples. In P. Winston (Ed.), The Psychology of Computer Vision, New York: McGraw-Hill, 1975.
Winston, P. Learning New Principles from Precedents and Exercises. Artificial Intelligence 19:3 (1982), 321-350.
LEARNING: THE CONSTRUCTION OF A POSTERIORI KNOWLEDGE STRUCTURES

Paul D. Scott
Department of Computer and Communication Sciences, University of Michigan

Abstract

This paper is a critical examination of both the nature of learning and its value in artificial intelligence. After examining alternative definitions it is concluded that learning is in fact any process for the acquisition of synthetic a posteriori knowledge structures. The suggestion that learning will not prove useful in machines is examined and it is argued that its main application in practical AI systems is in providing a means by which a system can acquire knowledge which is not readily formalizable. Finally some of the implications of these conclusions for future AI research are explored.

1. Introduction

In recent years machine learning seems to have undergone something of a renaissance. One manifestation of this is the recent publication of a book surveying the field (Michalski, Carbonell and Mitchell, 1983). Most of this book is devoted to reviewing what has been accomplished but, in what he clearly intended to be a provocative paper, Simon (1983) has raised a number of fundamental questions regarding the nature and value of machine learning. In this paper I attempt to provide answers to these questions. In particular I shall try to define what learning is, why it is of great importance in artificial intelligence, how it relates to other branches of the subject and what these answers imply regarding future research in machine learning.

2. What Is Learning?

When people use the term 'learning' in ordinary conversation they run little risk of being misunderstood. It is therefore somewhat surprising that A.I. researchers have had so much difficulty in arriving at a satisfactory definition. The usual explanation of this phenomenon is the claim that the everyday use of 'learning' is very general and imprecise and actually refers to a heterogeneous collection of behaviors. There is much truth in this but the fact that people apply the same term to all these behaviors suggests that they have something fundamental in common. The most widely accepted broad definition of learning within the A.I. community appears to be one relating it to improved performance. For example:

"Learning is any change in a system that allows it to perform better the second time on repetition of the same task or another task drawn from the same population" -- Simon (1983)

This is a functional definition. That is, it defines 'learning' in terms of what it achieves rather than how it achieves it. Thus it is really a definition of the purpose of learning, although it could be used as a definition of 'learning' if it is interpreted as meaning that any process which achieves that end is an example of learning. Unfortunately if used in this way the definition is unsatisfactory in two respects. First it includes many behaviors which one would not want to classify as learning. For example, if I replace the old blade in my razor with a new one then I will perform the task of shaving better. It would be unreasonable to say the act of changing the blade constitutes an instance of learning every time I do it. Thus there are many changes which lead to improved performance which are not examples of learning. The other problem with the definition is that it excludes many behaviors which would normally be classified as learning.

*This work was supported by NSF grant # MCS-82039%
For example, by the time the reader reaches the end of this sentence he or she will have learned that the author was born on a Tuesday. It is difficult to envisage a situation in which this knowledge could be used and hence it is clear that in this trivial example learning has nothing whatsoever to do with any performance. It could be argued that common usage is at fault and that the involuntary acquisition of useless pieces of information should not be classed along with the other behaviors which are regarded as examples of learning. However it is not always clear when information is useless. In making a brief visit to a strange town I may happen to notice the location of the public library. Under normal circumstances this is the involuntary acquisition of useless information. If however a short time later someone stops me in the street and asks how to get to the library then the information suddenly becomes useful and will certainly improve my performance in answering the enquiry. Thus one cannot restrict the term 'learning' to the acquisition of useful knowledge or skills since the utility is not determined at the time learning occurs.

The functional definition of 'learning' in terms of improved performance does not therefore correspond to common usage of the term. One is therefore faced with a choice. One could decide that, when used in an A.I. context, 'learning' is a technical term whose meaning is determined by the functional definition rather than by common usage. Alternatively one could seek another definition. The former course is possible but still presents difficulties. For example Samuel's checker player's (Samuel, 1963) ability to beat strong opponents was degraded by allowing the system to learn while playing weak opponents. At the same time it was certainly learning how to beat weak opponents. Thus by the functional definition it is learning with respect to one performance but not with respect to another drawn from the same population. However, even if such difficulties can be overcome, there are strong arguments in favor of looking for alternative definitions. The first is that it is foolish to ignore common usage. For a concept to survive in everyday communications it must have some utility. Hence it is reasonable to suppose that there is some underlying unity about the collection of behaviors which are normally classified as 'learning'. The second reason is that the quest for such a definition, even if unsuccessful, should tell us something about the nature of learning processes.

3. The Organization Of Experience

We can identify certain common aspects of the systems and behaviors to which the term 'learning' is applied. First, any behavior described as learning seems to involve the notion that an event to which the system has been exposed influences the potential behavior of that system subsequent to that event. Furthermore, saying that a system learns appears to carry the implication that the system has the ability to store and retrieve information. The term 'learning' cannot be meaningfully applied to a system without such abilities. 'Learning' is associated with the storage rather than the retrieval of that information but there is a strong implication that the storage will be arranged in such a fashion that the information can be retrieved in appropriate circumstances.
These two attributes are closely connected since it is usually understood that the influence of an event on subsequent behaviour is mediated by retrieval of some information derived from the occurrence of that event. Taking these attributes of learning together allows us to propose an alternative definition of learning. It is any process in which a system builds a retrievable representation of its past interactions with its environment. The term 'retrievable' should be understood to mean that the system itself can both access and interpret the representation. This definition may be more succinctly expressed:

Learning is the organization of experience.

Note that this definition, in contrast to the one discussed earlier, says nothing whatsoever about the purpose of learning. What it does do is establish a strong link between learning and knowledge representation. This relationship can be made more explicit by the following equivalent definition:

Learning is any process through which a system acquires synthetic a posteriori knowledge.

In general a system will also have analytic knowledge such as rules of inference and synthetic a priori knowledge such as given facts about its environment. Both of these are supplied by the system designer.

4. Why Is Learning Important In Artificial Intelligence?

The notion that knowledge representation is an essential part of any intelligent system is firmly established. Hence if we define learning as a process for building a representation of its environment then its potential utility is obvious. However learning is not the only way in which such a representation can be acquired. It can be explicitly supplied to the system as what is, from the system's point of view, synthetic a priori knowledge.

Many people seem to regard learning as an essential component of being intelligent. Hebb (1942, 1949) distinguishes two meanings of the term 'intelligence'. 'Intelligence A' is the ability to acquire intelligent performance while 'Intelligence B' is the intelligent performance itself. Until recently, research in artificial intelligence has been very much more concerned with performance than with acquisition of that performance. That is, in Hebb's terms it should perhaps be called 'artificial intelligence B'. The dominant research strategy for many years was to try to discover ways in which an adequate representation of a specific problem domain can be constructed by the system designer and then utilized by the system to exhibit a desired type of performance. Thus the intelligent performance of a program is due to the combination of the designer and his program. The 'intelligence A' resides in the designer while the 'intelligence B' emanates from the program. This suggests that a machine which is incapable of learning may be intellectually inferior to one which can.

However, Simon (1983) notes that human learning appears to be an extremely tedious and inefficient process. It takes a long time to transfer expertise from one person to another. In contrast the knowledge structures of one computer program can be passed on to any number of other systems by means of a simple copy operation. Simon argues that this difference suggests there is not much point in trying to endow computers with human-like learning abilities. Why not just program the knowledge straight in? This is an attractively simple argument.
It does however carry the implicit assumption that 'just programming' will be a more effective way of endowing a system with knowledge than requiring it to go through some learning process. It is not at all obvious that this assumption is true. Suppose one wished to construct an expert system for some domain. At present the process of developing an expert system requires a very large investment of effort by at least two people: one who already has the expertise that is to be incorporated into the expert system (the 'domain specialist') and one who can perform the programming necessary to incorporate it (the 'knowledge engineer'). The process involves an attempt to discover the rules and knowledge used by the domain specialist and then embody them in a form acceptable to the computer (usually as production rules). There are two obvious limitations to this approach. First of all it is very difficult and laborious. Secondly much of the domain specialist's expertise may not be introspectively available. He or she may be able to describe the general procedure he or she uses but is probably as incapable of explaining the reasons for exploring a particular possibility as a grandmaster is incapable of supplying an algorithm which replicates his chess playing ability.

Note that the creation of a human expert requires much less effort. The conventional education process requires that the teacher possess a sound knowledge of his subject but pedagogical skill appears to be much less important. Even a poor teacher usually succeeds in teaching his students a substantial fraction of what a very good teacher could have imparted. This is because the conventional education process is based on the premise that students are intelligent and therefore do almost all the work in building relevant mental representations themselves. Furthermore the conventional educational process normally involves the student in gaining experience by working on 'toy problems'. Much of the expertise he acquires is something that the teacher is incapable of formalizing. By repeated interaction with the problem domain the student will construct his own set of concepts which are useful for solving problems in that domain. If this were not true it would be just as easy to program a machine to write computer programs as it is to teach an introductory programming course.

The development of systems which could build their own representations would thus provide remedies to both the limitations of current expert system building techniques. First, since the system could build its own representation, the initial knowledge supplied need only be as complete and precise as that normally supplied in a classroom situation. Second, the system could extend this representation through experience to include concepts which its human teachers were unable to supply. Of course this process of learning by experience may be as time consuming for a machine as it is for a human. Certainly it makes sense to initialize the system with as much expertise as possible. However, as Simon notes, the machine expert has one enormous advantage over a human expert. The unteachable expertise which it acquires can be readily communicated to another machine. One simply makes a copy (or a thousand copies) of the final state of the original expert. In contrast, every human expert must individually go through the process of learning by experience.
Viewed in this way it becomes clear that the copy process is complementary rather than an alternative to the learning process. Only the latter can create new knowledge structures. Thus one situation in which learning is the only way a system can acquire a particular representation is when that knowledge cannot be readily formalized. Obviously it must be possible in principle to formalize the knowledge since otherwise a machine would not be able to acquire it. However it may be inordinately difficult or even impossible for a human to create such a formal representation explicitly. Simon himself points out another situation in which learning would be necessary. This is the situation in which the structure of the system acquiring the representation is so complex that it is not practicable to modify it explicitly even if one knows what knowledge the system needs. Thus it can be seen that learning is important in artificial intelligence because it provides a way in which a system can acquire knowledge that cannot be obtained by other means. What forms of knowledge have this characteristic is an important open question which deserves the serious attention of the AI research community.

5. The Growth Of Knowledge Structures

Defining learning as a process for acquiring a posteriori knowledge leads to some important conclusions regarding the place of machine learning within artificial intelligence. Computer scientists have long realized that there is an intimate relationship between data structures and the procedures which operate on them. We may apply this principle to knowledge structures. There are two classes of operation which are applied to structures which represent knowledge. One class we call 'knowledge users'. These are the parts of the system which make use of the knowledge in the course of performing some task. The other class we call 'knowledge builders'. These are the parts of the system which construct and maintain knowledge structures.

The great effort which has gone into developing means of knowledge representation over the last decade or so has largely concentrated on the knowledge users. That is, knowledge representation schemes have been developed with the aim of making them as usefully accessible as possible to the parts of the system responsible for its ultimate performance at its assigned task. This approach has yielded many valuable results. However it has only been possible because the system designers themselves played the role of knowledge builders. Since they were invariably much smarter than the knowledge user portions of their creations it made sense to construct representations for the latter's convenience alone. However, if the AI community is going to pay serious attention to the problem of learning then new approaches to knowledge representation are necessary. In devising a representation scheme for a system which learns we must consider how it can be made easy to build and maintain as well as easy to use. Unlearnable representations, however versatile, will have to be discarded. Does this mean that we have to reject the powerful knowledge representation schemes that we already have? Not necessarily. The knowledge builders need to be able to manipulate such representations. If they are complex then the knowledge builders will need to be endowed with a fair bit of expertise about them. In other words they need metaknowledge. The simpler the knowledge structures the less metaknowledge is required.
Clearly this is a situation where important trade-offs must be made at the design stage. If you want a really simple learning component you have to be satisfied with a correspondingly simple knowledge representation. The art is going to be devising representation schemes which are sufficiently simple and uniform to make building and maintaining them easy yet sufficiently rich for the knowledge users to perform their task successfully. It makes sense to avoid devising a knowledge structure such that the task of building and maintaining it is significantly harder than the task which the system is actually intended to perform. Interestingly it may be that this is not possible if the intended task is relatively simple. As the required repertoire of the system gets larger so the possibility of learning being worth the effort may increase. This may be why such versatile systems as human beings make so much use of learning.

If learning does prove most useful in very versatile systems it will dramatically change some widespread assumptions about how to make machines learn. With a few notable exceptions such as Lenat's AM (Lenat, 1976), most learning programs are based on the principle of improving their performance at some task by repeatedly attempting to perform that task. A learning system which forms part of a system with an enormous repertoire of potential tasks should acquire knowledge which is relevant to many of these tasks during the performance of other tasks. Ultimately it may have to acquire knowledge for knowledge's sake (as Lenat's AM does) rather than for its relevance to a specific task. This possibility is discussed in detail in Scott and Vogt (1983).

Another interesting consequence of this definition of learning is that some branches of artificial intelligence which have not traditionally been regarded as connected with learning are shown to be components of the learning process. For example truth maintenance systems (Doyle, 1979) can be viewed as systems which ensure the consistency of acquired knowledge structures. I suspect that some readers may object that this type of process is not learning at all but reasoning or deduction or some similar concept. The point is that such processes cannot be meaningfully separated from 'pure learning'. Much human learning also makes extensive use of reasoning. Consider for example the well known section in Plato's Meno in which Socrates teaches a slave boy a special case of Pythagoras' Theorem. Plato himself was so struck by the role of reasoning in human learning that he proposed that all learning is merely a form of recollection. It may be that in future artificial intelligence researchers will have to pay as much attention to inductive logic (Burks, 1977) as they have previously paid to deductive logic.

Most reviews of learning attempt to construct a taxonomy for the classification of the varied attempts which have been made to construct machine learning systems. Our definition of learning as a process for acquiring knowledge certainly suggests at least one way of categorizing such systems. They can be classified in terms of the knowledge structures built. Some authors (e.g. Michalski, Carbonell and Mitchell, 1983) have suggested what appear to be classification schemes of this nature but on closer examination they appear to be based not on the knowledge structure but rather the data structure used to represent the knowledge. I shall clarify this distinction by means of an example.
Samuel's checker player (Samuel, 1963) is often classed as a program which learns by adjusting coefficients in a polynomial expression. This is a perfectly correct characterization of the data structure that Samuel used. However, classifying Samuel's program in this way leads to the paradoxical situation in which authors dismiss polynomial adjustment as too simplistic a view of learning and yet continue to find Samuel's program of interest. The reason is that what makes Samuel's program so interesting is not the fact that he uses a polynomial, nor directly the way he adjusts it. Careful examination of Samuel's paper reveals that the polynomial is actually a model of the opponent's behaviour, based on the a priori assumptions that the opponent will use minimax and try to maximize piece advantage. Thus, treating Samuel's program as a knowledge acquisition system, we should say that the knowledge structure acquired is a representation of the opponent's checker-playing behaviour.

My proposal is thus that we should classify learning programs in terms of what kind of knowledge is being acquired. This in no way detracts from the value of the traditional classifications in terms of either knowledge representation format or extent of assistance provided to the system by a teacher; it is orthogonal to these. Strangely, it is often far from obvious exactly what knowledge is being acquired. For example, I doubt if many people recognize exactly what Samuel's program is really learning on a single reading of his paper, despite the strong hint given by its behaviour against weak opponents.

The overall conclusion to be drawn from this paper is that learning and knowledge representation are so closely connected that one cannot study the former without reference to the latter. Interest in machine learning seems to be growing rapidly. Hence I anticipate that over the next few years we are going to witness some radical rethinking of the way we view the problem of providing a system with the knowledge it needs to solve problems in its allotted domain.

6. References

[1] Burks, A.W. "Chance, Cause and Reason". University of Chicago Press, 1977.
[2] Doyle, J. "A Truth Maintenance System". Artificial Intelligence, Vol 12, pp 231-272. (1979)
[3] Hebb, D.O. "The Effect of Early and Late Brain Injury Upon Test Scores and the Nature of Normal Adult Intelligence". Proc. Amer. Phil. Soc., Vol 85, pp 275-292. (1942)
[4] Hebb, D.O. "The Organisation of Behaviour". Wiley, New York, 1949.
[5] Lenat, D. "AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search". Doctoral dissertation, Stanford University, July 1976.
[6] Michalski, R.S., Carbonell, J.G. and Mitchell, T.M. "Machine Learning". Tioga Publishing Co., Palo Alto, 1983.
[7] Samuel, A.L. "Some Studies in Machine Learning Using the Game of Checkers". In "Computers and Thought", Eds E. Feigenbaum and J. Feldman, McGraw-Hill, New York, 1963.
[8] Scott, P.D. and Vogt, R.C. "Knowledge Oriented Learning". Proceedings IJCAI 1983.
[9] Simon, H.A. "Why Should Machines Learn?" In Michalski, Carbonell and Mitchell (1983).
SCHEMA SELECTION AND STOCHASTIC INFERENCE IN MODULAR ENVIRONMENTS

Paul Smolensky
Institute for Cognitive Science
University of California, San Diego C-015
La Jolla, CA 92093

ABSTRACT

Given a set of stimuli presenting views of some environment, how can one characterize the natural modules or "objects" that compose the environment? Should a given set of items be encoded as a collection of instances or as a set of rules? Restricted formulations of these questions are addressed by analysis within a new mathematical framework that describes stochastic parallel computation. An algorithm is given for simulating this computation once schemas encoding the modules of the environment have been selected. The concept of computational temperature is introduced. As this temperature is lowered, the system appears to display a dramatic tendency to interpret input, even if the evidence for any particular interpretation is very weak.

Introduction

Our sensory systems are capable of representing a vast number of possible stimuli. Our environment presents us with only a small fraction of the possibilities; this selected subset is characterized by many regularities. Our minds encode these regularities, and this gives us some ability to infer the probable current condition of unknown portions of the environment given some limited information about the current state. What kind of regularities exist in the environment, and how should they be encoded? This paper presents preliminary results of research founded on the hypothesis that in real environments there exist regularities that can be idealized as mathematical structures that are simple enough to be analyzable. Only the simplest kind of regularity is considered here: I will assume that the environment contains modules (objects) that recur exactly, with various states of the environment being comprised of various combinations of these modules. Even this simplest kind of environmental regularity offers interesting learning problems and results. It also serves to introduce a general framework capable of treating more subtle types of regularities. And the problem considered is an important one, for the delineation of modules at one level of conceptual representation is a major step in the construction of higher level representations.

This research was supported by a grant from the System Development Foundation and by contract N00014-79-C-0323, NR 667-437 with the Personnel and Training Research Programs of the Office of Naval Research. During the 1981-82 academic year, support was provided by the Alfred P. Sloan Foundation and Grant PHS MH 14268 to the Center for Human Information Processing from the National Institute of Mental Health.

To analyze the encoding of modularity of the environment, I will proceed in three steps. First, I will describe a general information processing task, completion, for a cognitive system. Then I will describe the entities, schemas, I use for encoding the modules in the environment and discuss how they are used to perform the task. Finally, I will offer a criterion for how the encoding of the environment into schemas should be done. The presentation will be informal; more precise statements of definitions and results appear in the appendix.

Completions and Schemas

The task I consider, completion, is the inferring of missing information about the current state of the environment. The cognitive system is given a partial description of that state as input, and must produce as output a completion of that description.
The entities used to represent the environmental modules are called schemas. A given schema represents the hypothesis that a given module is present in the current state of the environment. When the system is given a partial description of the current state of the environment, the schemas look at the information to see if they fit it; those that do become active. When inference from the given information to the missing information is possible, it is because some of the schemas represent modules that incorporate both given and unknown information. Such a schema being active, i.e. the belief that the module is present in the current environmental state, permits inferences about the missing information pertaining to the module. (Thus if the given information about a word is ALG##ITHM, the schema for the module algorithm is activated, and inferences about the missing letters are possible.)

There is a problem here with the sequence of decision-making. How can the schemas decide if they fit the current situation when that situation is only partially described? It would appear that to really assess the relevance of a schema, the system would first have to fill in the missing information. But to decide on the missing portions, the system first needs to formulate beliefs concerning which modules are present in the current state. I call this the schema/inference decision problem; what is desired is a way of circumventing it that is general and extremely simple to describe mathematically.

Harmony and Computational Temperature

Before discussing the algorithm used to tackle this problem, let us first try to characterize what outcome we would like the algorithm to produce. The approach I am pursuing assumes that the best inference about the missing information is the one that best satisfies the most hypotheses (schemas). That is, consider all possible responses of the system to an input (a partially specified environmental state). Each such response involves (a) a decision about exactly which schemas are active, and (b) a decision about how to specify all the missing information. Given such a response, I propose a measure of the internal consistency that computes the degree to which the hypotheses represented by the active schemas are satisfied by the input and output. I call this measure the harmony function H. Responses characterized by greater internal consistency - greater harmony - are "better." (The definition of H is simple, but requires the formalism given in the appendix.)

Given that better responses are characterized by greater harmony, the obvious thing to do is hill-climb in harmony. However, this method is apt to get stuck at local maxima that are not global maxima. We therefore temporarily relax our desire to go directly for the "best" response. Instead we consider a stochastic system that gives different responses with different probabilities: the better the response, the more likely the system is to give it. The degree of spread in the probability distribution is denoted T; for high values of T, the distribution is widely spread, with the better responses being only slightly more likely than less good ones; for low values of T, the best response is much more likely than the others. Thus T measures the "randomness" in the system; I call it the computational temperature. When we want only good responses, we must achieve low temperature.
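To make the role of T concrete, here is a minimal numeric sketch (ours, not the paper's; it anticipates the exponential response distribution derived in the appendix). With two candidate responses whose harmonies differ by a fixed amount, the better one is nearly certain at low T and barely favored at high T:

import math

def p_better(delta_h, t):
    # Probability of the higher-harmony response when the two
    # responses have harmony gap delta_h and odds e^(delta_h/T) : 1.
    return 1.0 / (1.0 + math.exp(-delta_h / t))

for t in (10.0, 1.0, 0.1):
    print(f"T = {t:5}: P(better response) = {p_better(1.0, t):.4f}")
# T = 10: 0.5250 (nearly random); T = 1: 0.7311; T = 0.1: 1.0000 (nearly deterministic)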
A general stochastic algorithm can be derived that realizes this probabilistic response; it provides a method for computer simulation. A parallel relaxation method is used to resolve the schema/inference decision problem in a way that involves no sophisticated control. The variables of the system are the activations of all the schemas (1 = active, 0 = inactive) and the bits of missing information. The system starts with (a) all schemas inactive, and (b) completely random guesses for all bits of missing information. Then randomly a schema or a bit of missing information is selected as the variable to be inspected; it will now be assigned a (possibly new) value of 1 or 0. Using the current guesses for all the other variables, the harmony function is evaluated for the two possible states of the selected variable. These two numbers, H(1) and H(0), measure the overall consistency for the two cases where the selected variable is assigned 1 and 0; they can be computed because tentative guesses have been made for all the schema activations and missing information. Next a random choice of these 1,0 states is made, with the probability of the choices 1 and 0 having ratio e^{H(1)/T} : e^{H(0)/T}. (The reason for the exponential function is given in the appendix.)

This random process - pick a schema or bit of missing information; evaluate the two harmonies; pick a state - is iterated. In the theoretical limit that the process continues indefinitely, it can be proved that the probability that the system gives any response is proportional to e^{H/T}, where H is the harmony of that response. This probability distribution satisfies the qualitative description of response probabilities we set out to realize.

Cooling the System

In this algorithm, each schema activation or bit of missing information is determined randomly; most likely the value with higher harmony is chosen, but sometimes not. Thus most of the time the changes raise the harmony, but not always. The higher the temperature, the more random are the decisions, that is, the more often the changes go "downhill." Thus this algorithm, unlike strict hill-climbing, does not get stuck at local maxima. However, there is a tradeoff. The higher the temperature, the faster the system escapes local maxima by going downhill; but the higher the temperature, the more random is the motion and the more of the time the system spends in states of low harmony. Eventually, to be quite sure the system's response has high harmony, the temperature must be low. As the computation proceeds, the optimization point of this tradeoff shifts. Initially, the guesses for the missing information are completely random, so the information on which schemas are determining their relevance is unreliable. It is therefore desirable to have considerable randomness in the decision making, i.e. high temperature. Even at high temperature, however, the system is more likely to occupy states of higher harmony than lower, so the guesses become more reliable than their completely random start. At this point it makes sense for the schemas to be somewhat less random in their activity decisions, so the temperature should be lowered a bit. This causes the system to spend more of its time in states of higher harmony, justifying a further decrease in temperature. And so the temperature should be gradually lowered to achieve the desired final condition of low temperature. (A central concern of future work is analysis of how to regulate the cooling of the system.)
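The following Python sketch shows one way to implement this relaxation together with the gradual cooling just described. It is our illustration, not the author's code; the variable names and the geometric cooling schedule are assumptions (the paper leaves the schedule as an open question):

import math
import random

def stochastic_completion(initial, harmony, t_start=2.0, t_end=0.05, steps=5000):
    # `initial` maps each variable (a schema activation or a bit of
    # missing information) to a 0/1 guess; `harmony` maps a complete
    # assignment to the number H.
    state = dict(initial)
    names = list(state)
    cool = (t_end / t_start) ** (1.0 / steps)  # geometric cooling: one possible schedule
    t = t_start
    for _ in range(steps):
        v = random.choice(names)     # pick a schema or a bit of missing information
        state[v] = 1
        h1 = harmony(state)          # overall consistency if v is assigned 1
        state[v] = 0
        h0 = harmony(state)          # ... and if v is assigned 0
        # choose 1 with odds e^(H(1)/T) : e^(H(0)/T)
        p1 = 1.0 / (1.0 + math.exp(-(h1 - h0) / t))
        state[v] = 1 if random.random() < p1 else 0
        t *= cool
    return state

At fixed T, iterating this update rule samples responses with probability proportional to e^{H/T}, which is exactly the distribution described above; lowering T concentrates the samples on high-harmony responses.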
As the computation proceeds and the temperature drops, the system's initially rough and scattered response becomes progressively more accurate and consistent. This is just the kind of computation typical in people, and just the kind needed in any large parallel system, where each subsystem needs a constant stream of input from the others.

Cognitive Crystallization

As computation proceeds, does accuracy increase slowly and steadily, or does the system undergo sudden and dramatic changes in behavior, as do physical systems when they are cooled past critical temperatures marking phase transitions? This question has been addressed both through computer simulation and analytic approximation of a two-choice decision. The system has two schemas representing conflicting interpretations of the environment. The approximate theory allows computation of the probabilities of various completions given an input that partly describes the environment. It is useful to ask, what is the completion of a completely ambiguous input? For high temperatures, as one might expect, the completions form random mixtures of the two interpretations; the system does not choose either interpretation. However, below a certain "freezing temperature," the system adopts one of the interpretations. Each interpretation is equally likely to be selected. The computer simulation approximates this behavior; below the freezing point, it flips back and forth between the two interpretations, occupying each for a long time. Slowly vacillating interpretation of genuinely ambiguous input is a familiar but not particularly important feature of human cognition. What is significant here is that even when the input provides no help whatever in selecting an interpretation, the system eventually (when cooled sufficiently) abandons meaningless mixtures of interpretations and adopts some coherent interpretation. A robust tendency to form coherent interpretations is important both for modelling human cognition and for building intelligent machines. The above analysis suggests that in processing typical inputs, which are at most partially ambiguous, as processing continues and the temperature drops, the system wanders randomly through an ever-narrowing range of approximate solutions until some time when the system freezes into an answer.

Schema Selection

Having discussed how schemas are used to do completions, it is time to consider what set of schemas ought to be used to represent the regularities in a given environment. Suppose the system experiences a set of states of the environment and from these it must choose its schemas. Call these states the training set. Since the schemas are used to try to construct high-harmony responses, a reasonable criterion would seem to be: the best schemas are those that permit the greatest total harmony for responses to the training set. I call this the training harmony criterion. I will not here discuss an algorithm for finding the best schemas. Instead, I shall present some elementary but non-obvious implications of this very simple criterion. To explore these implications, we create various idealized environments displaying interesting modularity. Choosing a training set from within this environment, we see whether the training harmony criterion allows the system to induce the modularity from the training set, by choosing schemas that encode the modularity.

Perceptual Grouping

We perceive scenes not as wholes, nor as vast collections of visual features, but as collections of objects.
Is there some general characterization of what is natural about the particular levels of grouping that form these "objects"? This question can be addressed at a simple but abstract level by considering an environment of strings of four letters in some fixed font. In this idealized environment, the modules ("objects") are letter tokens, and in various states of the environment they recur exactly: each letter always appears in exactly the same form. The location of each of the four letters is absolutely fixed; I call the location of the first letter the "first slot," and so forth. The environment consists of all combinations of four letters. Now consider a computer given a subset of these four-letter strings as training; call these the training words. The image the computer gets of each training word is just a string of bits, each bit representing whether some portion of the image is on or off. The machine does not know that "bit 42" and "bit 67" represent adjacent places; all bits are spatially meaningless. Could the machine possibly induce from the training that certain bits "go together" in determining the "first letter," say? (These, we know, represent the first slot.) Is there some sense in which schemas for the letter A in the first slot, and so on, are natural encodings of this environment?

As an obvious alternative, for example, the system could simply create one schema for each training word. Or it could create a schema for each bit in the image. These are the two extreme cases of maximally big and small schemas; the letter schemas fall somewhere in between. Which of these three cases is best? The training harmony criterion implies that letter schemas are best, provided the training set and number of bits per letter are not too small. This result can be abstracted from the reading context in which it was presented for expository convenience; the mathematical result does not depend upon the interpretation we place upon the modules with which it deals. Thus the result can be characterized more abstractly: Natural schemas encoding the modules of an environment are inducible by the training harmony criterion, provided the modules recur exactly. This investigation must now be extended to cases in which the recurrence of the modules is in some sense approximate.

Rules vs. Instances

When should experience be encoded as a list of instances and when as a collection of rules? To address this issue we consider two environments that are special subsets of the four-letter environment considered above:

Environment R:
FAMB VEMB SIMB ZOMB
FAND VEND SIND ZOND
FARP VERP SIRP ZORP
FALT VELT SILT ZOLT

Environment I:
FARB VAMP SALT ZAND
FENP VELB SEMD ZERT
FIMT VIRD SINB ZILP
FOLD VONT SORP ZOMB

In the highly regular environment R, there are strict rules such as "F is always followed by A"; in the irregular environment I, no such rules exist. Note that here, schemas for the "rules" of environment R are just digraph schemas: FA-, VE-, ..., -RP, -LT; schemas for "instances" are whole-word schemas. One might hope that a criterion for schema selection would dictate that environment R be encoded in digraph schemas representing the rules while environment I be encoded in word schemas representing instances. The training harmony criterion implies that for the regular environment, digraph schemas are better than word schemas; for the irregular environment, it is the reverse. (In each case the entire environment is taken as the training set.)
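Because the appendix shows that the training harmony permitted by a tiling set of schemas is a constant divided by #S, the number of schemas, the digraph-versus-word comparison reduces to counting schemas. A back-of-envelope check in Python, using the counts given in the appendix:

schema_counts = {
    "R": {"word schemas": 16, "digraph schemas": 8},
    "I": {"word schemas": 16, "digraph schemas": 96},
}
for env, counts in schema_counts.items():
    # training harmony ~ const/#S, so fewer schemas means more harmony
    best = min(counts, key=counts.get)
    print(f"environment {env}: {counts} -> prefer {best}")
# environment R prefers digraph schemas; environment I prefers word schemas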
Higher Level Analyses

The framework described here is capable of addressing more sophisticated learning issues. In particular, it is well-suited to analyzing the construction of higher-level representations and considering, for example, the value of hierarchical organization (which is not put into the system, but may come out in appropriate environments). In addition to addressing issues of schema selection at the more perceptual levels considered here, the framework can be employed at higher conceptual levels. The selection of temporal scripts, for example, can be considered, by taking the environment to be a collection of temporally extended episodes. The simulation method described here for systems with given schemas can also be applied at higher levels; it is being explored, for example, for use in text comprehension.

ACKNOWLEDGEMENTS

I am indebted to George Mandler and Donald Norman for the opportunity to pursue this work. Geoffrey Hinton, James McClelland, David Rumelhart, and the members of the UCSD Parallel Distributed Processing research group have contributed enormously to the perspective on cognition taken in this work. I am grateful to Francis Crick, Mary Ann McCrary, Michael Mozer and Donald Norman for extremely helpful comments on the paper. Above all, warm thanks go to Douglas Hofstadter for his continual encouragement; the influence of his ideas pervades this research.

APPENDIX: The Formal Framework of Harmony Theory

In the following general discussion, the specifics of the four-letter grouping environment discussed in the text are presented in parentheses. The possible beliefs B of the cognitive system about the current state of the environment form a space B. This space is assumed to have a set P of binary coordinates p; every belief B is defined by a set of bits B(p) in {+1,-1}, one for each p in P. (Each p represents a distinct pixel. B(p) is +1 if the system believes p is on, -1 if off. This belief can come from inference or input.) An input I to the system is a set of binary values I(p) for some of the p in P; an output is a set of binary values O(p) for the remaining p. Together, I and O form a complete belief state B, a completion of I.

The probability of any response (A,B) is a monotonically increasing function f of its harmony H(A,B). f is constrained by the following observations. If p1 and p2 are not connected - even indirectly - through the knowledge base σ, then inferences about p1 should be statistically independent of those about p2. In that case, H is the sum of the harmony contributed by three sets of schemas: those connected to p1, those connected to p2, and those connected to neither. The desired statistical independence requires that the individual probabilities of responses for p1 and p2 multiply together. Thus f must be a function that takes additive harmonies into multiplicative probabilities. The only continuous functions that do this are the exponential functions, a class that can be parametrized by a single parameter T; thus

    prob(A,B) = n e^{H(A,B)/T}

The normalization constant n is chosen so that the probabilities for all possible responses add up to one. T must be positive in order that the probability increase with H. As T approaches zero, the function f approaches the discontinuous function that assigns equal nonzero probability to maximal harmony responses and zero probability to all others.

A schema S is defined by a value S(p) in {+1,-1,0} for each p in P.
(If pixel p does not lie in the first slot, the schema S for "the letter A in the first slot" has S(p) = 0. If p does lie in the first slot, then S(p) is +1 or -1 according to whether the pixel is on or off.) If for some particular p, S(p) is nonzero, then that p is an argument of S; the number of arguments of S is denoted |S|. (For a word schema, every p is an argument.)

Schemas are used by the system to infer likely states of the environment. For a given environment, some schemas should be absent, others present, with possibly varying relative strengths corresponding to varying likelihood in the environment. A knowledge base for the system is a function σ that defines the relative strengths of all possible schemas: σ(S) ≥ 0 and Σ_S σ(S) = 1. (The knowledge bases relevant to the text have all σ(S) = 0 except for the schemas in some set S - like letters - for which the strengths are all equal. These strengths, then, are all 1/#S, the inverse of the number of schemas in the set.)

A response of the system to an input I is a pair (A,B), where B is a completion of I, and A defines the schema activations: A(S) in {0,1} for each schema S. A harmony function H is a function that assigns a real number H_σ(A,B) to a response (A,B), given a knowledge base σ, and obeys certain properties to be discussed elsewhere. A particular exemplar is

    H_σ(A,B) = Σ_S σ(S) A(S) [ Σ_p B(p)S(p) - κ|S| ]

Here κ is a constant in the interval [0,1]; it regulates what proportion ε of a schema's arguments must disagree with the beliefs in order for the harmony resulting from activating that schema to be negative: ε = ½(1-κ). In the following we assume ε to be small and positive. (The terms without κ make H simply the sum over all active schemas of the strength of the schema times the number of bits of belief (pixels) that are consistent with the schema minus the number that are inconsistent. Then each active schema incurs a cost proportional to its number of arguments, where κ is the constant of proportionality.)

Considerations similar to those of the previous paragraph lead in thermal physics to an isomorphic formula (the Boltzmann law) relating the probability of a physical state to its energy. The randomness parameter there is ordinary temperature, and I therefore call T the computational temperature of the cognitive system. The probability of a completion B is simply the probability of all possible responses (A,B) that combine the beliefs B with schema activations A:

    prob(B) = n Σ_A e^{H(A,B)/T}

Monte Carlo analyses of systems in thermal physics have proven quite successful (Binder, 1976). Starting from the exponential probability distribution given above, the stochastic process described in the text can be derived (as in Smolensky, 1981). The above formula for H leads to a simple form for the stochastic decision algorithm of the text that supports the following interpretation. The variables can be represented as a network of "p nodes" each carrying value B(p), "S nodes" each carrying value A(S), and undirected links from each p to each S carrying label σ(S)·S(p) (links labelled zero can be omitted). The nodes represent stochastic processors running in parallel, continually transmitting their values over the links and asynchronously setting their values, each using only the labels on the links attached to it and the values from other nodes transmitted over those links.
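As a concrete reading of the exemplar harmony function, here is a small evaluator (our sketch; the dictionary encoding of schemas is an assumption, not the paper's representation):

def harmony(schemas, strength, A, B, kappa=0.1):
    # H_sigma(A,B) = sum over S of sigma(S) A(S) [ sum_p B(p)S(p) - kappa|S| ]
    # `schemas` maps a schema name to {p: +1 or -1} over its arguments
    # (S(p) = 0 elsewhere); `strength` gives sigma(S); `A` gives the 0/1
    # activations; `B` maps every coordinate p to a +1/-1 belief.
    h = 0.0
    for name, S in schemas.items():
        agree = sum(B[p] * v for p, v in S.items())  # consistent minus inconsistent arguments
        h += strength[name] * A[name] * (agree - kappa * len(S))
    return h

Note that an active schema matching all of its arguments contributes σ(S)·|S|(1-κ) = 2ε·σ(S)·|S|, which is exactly the per-schema quantity summed in the tiling calculation below.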
This representation makes contact with the neurally-inspired "connectionist" approach to cognition and parallel computation (Hinton and Anderson, 1981; McClelland and Rumelhart, 1981; Rumelhart and McClelland, 1982). Independently of the development of harmony theory, Hinton and Sejnowski (1983) developed a closely related approach to stochastic parallel networks following (Hopfield, 1982) and (Kirkpatrick, Gelatt, and Vecchi, 1983). From a nonconnectionist artificial intelligence perspective, Hofstadter (1983) is pursuing a related approach to perceptual grouping; his ideas have been inspirational for my work (Hofstadter, 1979).

An environment for the cognitive system is a probability distribution on B. (All patterns of on/off pixels corresponding to sequences of four letters are equally probable; all other patterns have probability zero.) A training set T from this environment is a sample of points T drawn from the distribution. (Each T is a training word.) A response A of the system to the set T is a specification of schema activations A(T) for each T. The training harmony of such a response is H_σ(A,T) = Σ_T H_σ(A(T),T). The maximum of H_σ(A,T) over all responses A is H_σ(T), the training harmony permitted by the knowledge base σ.

Each of the sets of schemas considered in the text (letters, digraphs, ...) tile the training set T, in that each T agrees exactly with schemas that have nonoverlapping arguments and that together cover all of T. All results cited in the text follow from this elementary calculation: If the knowledge base σ consists of a set of schemas S that tile T, then H_σ(T) = const./#S, where const. is a constant for a given T. Thus given a single training set T and two tilings of T with sets of schemas, the set with fewer schemas permits greater training harmony, and is preferred by the training harmony criterion. If the number of letters in the alphabet a is smaller than the number of pixels per letter, letters are a better encoding than pixels; if the number of training words exceeds 4a then letters are better than words. The number of word schemas needed in the restricted environments R and I is 16; the number of digraphs needed for R is 8 while for I it is 96.

The calculation cited in the previous paragraph is quite simple. Recall that if the proportion of a schema's arguments that disagree with the beliefs exceeds ε, then the harmony resulting from activating that schema is negative. Thus if ε is chosen small enough (which we assume), then for any given training word T, only those schemas that match exactly can contribute positive harmony. In the response A(T) that maximizes the harmony, therefore, only exactly matching schemas will be active; the others will be inactive, contributing zero harmony. Since the schemas S tile T, for each pixel p in T there is exactly one active schema with p as an argument, and the value B(p) of the pixel is consistent with that schema, so B(p)S(p) = 1. Thus

    Σ_S A(S) [ Σ_p B(p)S(p) - κ|S| ] = #p (1-κ) = 2ε #p

Because σ(S) is a constant for all S in S, the harmony H_σ(A(T),T) is simply the previous expression times σ(S) = 1/#S, or 2ε#p/#S. Since this quantity is identical for all training words T, summing over all T in the training set T just multiplies this by the size of T, giving 2ε#p#T/#S as the training harmony permitted by σ:

    H_σ(T) = const./#S, where const. = 2ε#p#T is constant for a given T.

REFERENCES

Binder, K. "Monte Carlo Investigations of Phase Transitions and Critical Phenomena." In Domb, C. and M. S. Green (Eds.), Phase Transitions and Critical Phenomena, Vol. 5b. New York: Academic Press, 1976.
Hinton, G. E. and J. A. Anderson (Eds.). Parallel Models of Associative Memory. Hillsdale, NJ: Lawrence Erlbaum Associates, 1981.

Hinton, G. E. and T. J. Sejnowski, "Analyzing Cooperative Computation." In Proceedings of the Fifth Annual Conference of the Cognitive Science Society. Rochester, New York, May, 1983.

Hinton, G. E. and T. J. Sejnowski, "Optimal Perceptual Inference," to appear in Proc. IEEE Conference on Computer Vision and Pattern Recognition. Washington, D.C., June, 1983.

Hofstadter, D. R. Godel, Escher, Bach: an Eternal Golden Braid. New York: Basic Books, 1979. Chapters X, XVI and XIX.

Hofstadter, D. R., "The Architecture of Jumbo," to appear in Proc. Machine Learning Workshop, Illinois, June, 1983.

Hopfield, J. J., "Neural networks and physical systems with emergent collective computational abilities." Proc. National Academy of Sciences USA 79 (1982) 2554-2558.

Kirkpatrick, S., C. D. Gelatt and M. P. Vecchi, "Optimization by Simulated Annealing." Science 220:4598 (1983) 671-680.

McClelland, J. L. and D. E. Rumelhart, "An interactive activation model of context effects in letter perception, Part 1: An account of the basic findings." Psychological Review 88 (1981) 375-407.

Rumelhart, D. E. and J. L. McClelland, "An interactive activation model of context effects in letter perception, Part 2: The contextual enhancement effect and some tests and extensions of the model." Psychological Review 89 (1982) 60-94.

Smolensky, P. Lattice Renormalization of φ4 Theory. Doctoral dissertation, Physics Dept., Indiana University, 1981.
HUMAN PROCEDURAL SKILL ACQUISITION: THEORY, MODEL AND PSYCHOLOGICAL VALIDATION*

Kurt VanLehn
Xerox PARC
3333 Coyote Hill Rd.
Palo Alto, CA 94304

Abstract

It is widely held that ordinary natural language conversations are governed by tacit conventions, called felicity conditions or conversational postulates (Austin, 1962; Grice, 1975; Gordon & Lakoff, 1975). Learning a procedural skill is also a communication act. The teacher communicates a procedure to the student over the course of several lessons. The central idea of the theory to be presented is that there are specific felicity conditions that govern learning. In particular, five newly discovered felicity conditions govern the kind of skill acquisition studied here. The theory has been embedded in a learning model, a large AI-based computer program. The model's performance has been compared to data from several thousand students learning ordinary mathematical procedures: subtracting multidigit numbers, adding fractions, and solving simple algebraic equations. A key criterion for the theory is that the set of procedures that the model learns should exactly match the set of procedures that students actually acquire, including their "buggy" procedures. However, much more is needed for psychological validation of this theory, or any complex AI-based theory, than merely testing its predictions. The method used with this theory is presented.

Introduction

This paper summarizes research reported in a much longer document (VanLehn, 1983). It is intended only to convince the reader that the research problem is interesting and that the approach that was taken to solving it is worth taking. The goal of this research is a psychologically valid theory of how people learn certain procedural skills. There are other AI-based theories of skill acquisition (Anderson, 1982; Anzai & Simon, 1979; Newell & Rosenbloom, 1981). However, their objectives differ from the ones pursued here. They concentrate on knowledge compilation: the transformation of slow, stumbling performance into performance that is "faster and more judicious in choice" (Anderson, 1982, pg. 404). They study skills that are taught in a simple way: first the task is explained, then it is practiced until proficiency is attained. The research presented here studies skills that are taught in a more complex way: the instruction is a lesson sequence, where each lesson consists of explanation and practice. This shifts the central focus away from practice effects (knowledge compilation) and towards a kind of student cognition that could be called knowledge integration: the construction of a procedural skill from lessons on its subskills. This study puts more emphasis on the teacher's role than the knowledge compilation research does. It is not the case that multi-lesson skill acquisition occurs with just any lesson sequence. Rather, the lesson sequences are designed by the teacher to facilitate knowledge integration. Knowledge integration, in turn, is "designed" to work only with certain kinds of lesson sequences. So, what is really being studied is a teacher-student system that has both cognitive and cultural aspects.

* This research was supported by the Personnel and Training Research Programs, Psychological Sciences Division, Office of Naval Research, under contract number N00014-82-C-0067, contract authority identification number NR667-477.
It might be more appropriate to label the central focus of this research knowledge communication: the transmission of a procedural skill via lessons on its subskills.

The skills chosen for the present investigation are ordinary written mathematical calculations. The main advantage of mathematical procedures, from a psychological point of view, is that they are virtually meaningless for the learner. They seem as isolated from common sense intuitions as the nonsense syllables of early learning research. In the case of the subtraction procedure, for example, most elementary school students have only a dim conception of its underlying semantics, which is rooted in the base-ten representation of numbers. When compared to the procedures they use to operate vending machines or play games, the subtraction procedure is as dry, formal and disconnected from everyday interests as a nonsense syllable is different from a real word. This isolation is the bane of teachers, but a boon to psychologists. It allows psychologists to study a skill that is much more complex than recalling nonsense syllables, and yet it avoids bringing in a whole world's worth of associations.

It is worth a moment to review how mathematical procedures are taught. In the case of subtraction, there are about ten lessons in a typical lesson sequence. The lessons introduce the procedure incrementally, one step per lesson, so to speak. For instance, the first lesson might show how to do subtraction of two-column problems. The second demonstrates three-column problem solving. The third introduces borrowing, and so on. The ten lessons are spread over about three years, starting in the second grade. They are interleaved with review lessons and lessons on many other topics. In the classroom, a typical lesson lasts an hour. The teacher solves some problems on the board with the class, then the students solve problems on their own. If they need help, they ask the teacher, or they refer to worked examples in the textbook. A textbook example consists of a sequence of captioned "snapshots" of a problem being solved, e.g.:

    Take a ten to make 10 ones.
          2 15
          3  5
        - 1  9

    Subtract the ones.
          2 15
          3  5
        - 1  9
             6

    Subtract the tens.
          2 15
          3  5
        - 1  9
          1  6

(The small 2 and 15 are scratch marks written over the crossed-out 3 and 5.) Textbooks have very little text explaining the procedure (young children do not read well). Textbooks contain mostly examples and exercises.
Math bugs reveal the learning process

It will be assumed that the teacher and the student somehow build a knowledge structure of some kind in the student's mind, and that this knowledge structure somehow directs the student's problem solving efforts. A critical experimental task is to determine what that knowledge structure is. This is difficult if all the experimenter does is watch the student solve problems. There are many ways for the experimenter to get a better view, e.g., analyzing verbal protocols, measuring the latencies between writing actions, tracking eye movements, and so on. The technique used in this study is somewhat unusual: students are given problems to solve that are beyond their current training. For instance, a subtraction student who has not yet been taught how to borrow is given problems which require borrowing, such as 43-18. When the student applies his current knowledge structure to solve such problems, it "breaks" in certain ways. The experimenter can infer much about the student's knowledge structure by analyzing his struggle to adapt it to solve the problem at hand. Metaphorically speaking, such testing is like snapping a bar of metal and examining the fracture under a microscope in order to find out the metal's crystalline structure.

Tools have been built for doing such detailed analyses. John Seely Brown and Richard Burton developed computer systems (Buggy and Debuggy) that automatically analyze a student's errors into one or more bugs (Brown & Burton, 1978; Burton, 1981). Bugs serve as a precise, succinct representation of errorful problem solving behavior. Bugs are the kind of data on which the theory rests. Many different kinds of bugs have been observed (77 for subtraction alone; VanLehn, 1982). Until recently, most bugs defied systematic explanation. As an illustration, consider a common bug among subtraction students: the student always borrows from the leftmost column in the problem no matter which column originates the borrowing. Problem a below shows the correct placement of borrow's decrement. Problem b shows the bug's placement.

           5 15            2   15              5 15
    a.   3 6 5        b.   3 6 5        c.     6 5
       - 1 0 9           - 1 0 9             - 1 9
         2 5 6             1 6 6               4 6

(The small numbers represent the student's scratch marks.) This bug has been observed for years (cf. Buswell, 1926, pg. 173, bad habit number s27), but no one has offered an explanation for why students have it. The theory offers the following explanation, which is based on the hypothesis that students use induction (generalization from examples) to learn where to place the borrow's decrement. Every subtraction curriculum that I know of introduces borrowing using only two-column problems, such as problem c above. Multi-column problems, such as a, are not used. Consequently, the student has insufficient information for unambiguously inducing where to place borrow's decrement. The correct placement is in the left-adjacent column, as in a. However, two-column examples are also consistent with decrementing the leftmost column, as in b. If the student chooses the leftmost-column generalization, the student acquires the bug rather than the correct procedure. According to this explanation, the cause of the bug is twofold: (1) insufficiently variegated instruction, and (2) an unlucky choice by the student.

The bugs that students exhibit are important data for developing the theory. Equally important are bugs that students don't exhibit. When there are strong reasons to believe that a bug will never occur, it is called a star bug (after the linguistic convention of placing a star before sentences that native speakers would never utter naturally). Star bugs, and star data in general, are not as objectively attainable as ordinary data (VanLehn, Brown & Greeno, in press). But they are quite useful. To see this, consider again the students who are taught borrowing on two-column problems, such as problem c above. In two-column problems, the borrow's decrement is always in the tens column. Hence tens-column is an inductively valid description of where to decrement. However, choosing tens-column for the decrement's description predicts that the student would place the decrement in the tens column regardless of where the borrow originates. This leads to strange solutions, such as d and e below:

             5                    15
    d.   1 6 6 5          e.    3 6 5
       -   9 1 0              - 1 9 0
         1 6 5 5                2 6 5

This kind of problem solving has never been observed to my knowledge. In the opinion of several expert diagnosticians, it never will be observed. Always decrementing the tens column is a star bug. The theory should not predict its occurrence. This has important implications for the theory.
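To make the leftmost-borrow bug concrete, here is a hypothetical Python sketch (not Sierra's actual representation) of a column-by-column subtraction procedure with a pluggable decrement-placement rule; it reproduces answers a and b above. Borrowing across zeros is ignored, since these examples do not need it:

def subtract(minuend, subtrahend, decrement="left_adjacent"):
    # Digits are stored least significant first.
    top = [int(d) for d in str(minuend)][::-1]
    bottom = [int(d) for d in str(subtrahend)][::-1]
    bottom += [0] * (len(top) - len(bottom))
    answer = []
    for i in range(len(top)):
        if top[i] < bottom[i]:
            # The correct rule decrements the left-adjacent column;
            # the buggy rule always decrements the leftmost column.
            donor = i + 1 if decrement == "left_adjacent" else len(top) - 1
            top[donor] -= 1
            top[i] += 10
        answer.append(top[i] - bottom[i])
    return int("".join(str(d) for d in reversed(answer)))

print(subtract(365, 109))                         # 256, as in problem a
print(subtract(365, 109, decrement="leftmost"))   # 166, as in problem b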
The theory must explain why certain inductively valid abstractions (e.g., leftmost column) are used by students while certain others (e.g., tens column) are not. These examples have illustrated the nature of the research project: trying to understand certain aspects of skill acquisition (i.e., knowledge integration, knowledge communication) by studying bugs. The next section outlines the theory.

Step theory, repair theory and felicity conditions

For historical and other reasons, it is best to view the theory as an integration of two theories. Step theory describes how students acquire procedures from instruction. Repair theory describes how students barge through problem situations where their procedure has reached an impasse.* The two theories share the same representations of knowledge and much else. I will continue to refer to them together as "the theory."

* John Seely Brown originated repair theory (Brown & VanLehn, 1980). The present version remains true to the insights of the original version although most of the details have changed.

Repair theory is based on the insight that students do not treat procedures as hard and fast algorithms. If they are unsuccessful in an attempt to apply a procedure to a problem, they are not apt to just quit, as a computer program does. Instead, they will be inventive, invoking certain general purpose tactics to change their current process state in such a way that they can continue the procedure. These tactics are simple ones, such as skipping an operation that can't be performed or backing up in the procedure and taking another path. Such local problem solving tactics are called repairs because they fix the problem of being stuck. They do not fix the underlying cause of the impasse. Given a similar exercise later, the student will reach the same impasse. On this occasion, the student might apply a different repair. This shifting among repairs has been observed in the data. It is one kind of bug migration: the phenomenon of a student shifting from one bug (systematic error) to another over a short period of time.

Step theory is based on the insight that classroom learning is like a conversation in that there are certain implicit
Even if the target subprocedure will ultimately involve holding some intermediate result mentally, the first lesson will write the intermedia?e result down. In a later lesson, the student is taught to omit the extra writing by holding the intermediate result mentally. This makes it possible for students use a simple form of reasoning to induce the relationships of the intermediate result to other parts of the procedure. The last felicity condition, called the show-work principle, is a clear illustration of the nature of felicity conditions in general. Textbook authors probably do not consciously realize that the lessons they write obey the show-work principle. They strive only to make the lessons simple and effective. In doing so, they wind up obeying the principle. This occurs because the problem that the felicity condition solves is inherent in any inductive learning task. Effective learning requires its SOhtiOn. The show-work felicity condition is one solution, the one used in this domain, The general point is this: Felicity conditions are conventions that have been tacitly adopted by our culture in Order to make it easier for students to solve certain inherent problems in knowledge communication. A competitive argument The Previous section made some assertions about human skill acquisition. Making hypotheses is only part of developing a theory. The other part is validating those hypotheses. An important validation technique used with this theory is coWetitiVe argumentation (VanLehn, Brown & Greene, in Press). Most competitive arguments have a certain “king of the mountain” form. One shows that a hypothesis accounts for certain facts, and that certain alternatives to the hypothesis, while perhaps not without empirical merit, are flawed in some way. That is, the argument shows that its hypothesis stands at the tOP of a mountain of evidence, then proceeds to knock the competitors down. This section presents an example of a competitive argument. Consider the first felicity condition listed a moment ago. A more precise statement of it is: Learning a lesSOn introduces at most one new disjunction into a procedure. ln procedures, a disjunction may take many forms, e.g., a conditional branch (if-then-else). This felicity condition asserts that learners will only learn a conditional if each branch of the conditional is taught in a separate lesson-i.e., the then-part in one lesson, and the else-part in another. The argument for the felicity condition hinges on an independently motivated hypothesis: mathematical procedures are learned inductively. They are generalized from examples. There is an important philosophical-logical theorem concerning induction: lf a generalization (a procedure, in this case) is allowed to have arbitrarily many disjunctions, then an inductive learner can identify which generalization it is being taught only if it is given all possible examples, both positive and negative. This is physically impossible in most interesting domains, including this one. If inductive learning is to bear even a remote resemblance to human learning, disjunctions must be constrained. Disjunctions are one of the inherent problems of knowledge communication that were mentioned a moment ago, TWO classic methods of constraining disjunctions are (i) t0 bar disjunctions from generalizations, and (ii) to bias the learner in favor of generalizations with the fewest disjunctions. The felicity condition is a new method. It uses extra input information. the lesson boundaries, to control disjunction. 
Thus, there are three competing hypotheses for explaining how human learners control disjunction (along with several other hypotheses that won't be mentioned here): (i) no-disjunctions, (ii) fewest-disjunctions, and (iii) one-disjunction-per-lesson. Competitive argumentation involves evaluating the entailments of each of the three hypotheses. It can be shown that the first hypothesis should be rejected because it forces the theory to make absurd assumptions about the student's initial set of concepts, the primitive concepts from which procedures are built. The empirical predictions of the other two hypotheses are identical, given the lesson sequences that occur in the data. More subtle arguments are needed to differentiate between them. Here are two:

(1) The one-disjunction-per-lesson hypothesis explains why lesson sequences have the structure that they do. If the fewest-disjunctions hypothesis were true, then it would simply be an accident that lesson boundaries fall exactly where disjunctions were being introduced. The one-disjunction-per-lesson hypothesis explains a fact (lesson structure) that the fewest-disjunctions hypothesis does not explain.

(2) The fewest-disjunctions hypothesis predicts that students would learn equally well from a "scrambled" lesson sequence. To form a scrambled lesson sequence, all the examples in an existing lesson sequence are randomly ordered, then chopped up into hour-long lessons. Thus, the lesson boundaries fall at arbitrary points. The fewest-disjunctions hypothesis predicts that the bugs that students acquire from a scrambled lesson sequence would be the same as the bugs they acquire from the unscrambled lesson sequence. This empirical prediction needs checking. If it is false, as I am sure it is, then the fewest-disjunctions hypothesis can be rejected on empirical as well as explanatory grounds.

This brief argument sketched the kind of individual support that each of the theory's hypotheses has been given. Such competitive argumentation seems essential for demonstrating the psychological validity of a theory of this complexity.

Six components are involved in validation

The preceding sections indicated the kind of skill acquisition under study, sketched a few hypotheses about it, and discussed the validation method. This section summarizes the research project by listing its main components.

(1) Learning model. The first component is a learning model: a large, AI-based computer program. Its input is a lesson sequence. Its output is the set of bugs that are predicted to occur among students taking the curriculum represented by the given lesson sequence. The program, named Sierra, has three main parts: (i) the learner learns procedures from lessons; (ii) the solver applies a procedure to solve test problems; (iii) the diagnostician analyzes the solver's answers to see which bugs, if any, have been generated. The diagnostician is a part of Burton's Debuggy system (Burton, 1981). The solver is a revised version of the one used to develop repair theory (Brown & VanLehn, 1980). The learner is similar to other AI programs that learn procedures inductively. For instance, ALEX (Neves, 1981) learns procedures for solving algebraic equations given examples similar to ones appearing in algebra textbooks. LEX (Mitchell et al., 1983) starts with a trial-and-error procedure for solving integrals and evolves a more efficient procedure as it solves practice exercises. Sierra's learner is similar to LEX and ALEX in some ways (e.g., it uses disjunction-free induction).
It differs in other ways (e.g., it uses lesson boundaries crucially, while the instruction input to ALEX and LEX is a homogeneous sequence of examples and exercises). As a piece of AI, Sierra's learner is a modest contribution. Of course, the goal of this research is not to formulate new ways that AI programs can learn.

(2) Data from human learning. The data used to test the theory come from several sources: the Buggy studies of 2463 students learning to subtract multidigit numbers (Brown & Burton, 1978; VanLehn, 1982), a study of 500 students learning to add fractions (Tatsuoka & Baille, 1983), and various studies of algebra errors (Greeno, 1982; Wenger, 1983). The raw data are worksheets and/or protocols from students taking diagnostic tests. Their answers and scratch work are analyzed in terms of bugs. In the Buggy studies, the analysis is automated. In the others, it is done by hand. The bugs are what are actually used to test the theory.

(3) A comparison of the model's predictions to the data. The major empirical criterion for the theory is observational adequacy: (i) the model should generate all the correct and buggy procedures that human learners exhibit, and (ii) the model should not generate procedures that learners do not acquire, i.e., star bugs. Although observational adequacy is a standard criterion for generative theories of natural language syntax, this is the first AI learning theory to use it.

(4) A set of hypotheses. Until recently, most AI-based theories of cognition used only the three components listed so far: a model, some data, and a comparison of some kind. Such theories leave one to accept or reject the model in toto. Such an "explanation" of intelligent human behavior amounts to substituting one black box, a complex computer program, for another, the human mind. Recent work in automatic programming and program verification suggests a better way to use programs in cognitive theories: the theorist develops a set of specifications for the model's performance. These serve as the theory's hypotheses about the cognition being modelled. The model becomes a tool for calculating the predictions made by the combined hypotheses. The present theory has 32 such hypotheses. The felicity conditions listed earlier are four of the 32.

(5) A demonstration that the model generates all and only the predictions allowed by the hypotheses. Such a demonstration is necessary to insure that the success or failure of the model's predictions can be blamed on the theory's hypotheses and not on the model's implementation. Ideally, I would give a line-by-line proof that the model satisfies the hypotheses. This just isn't practical for a program as complex as Sierra. However, what has been done is to design Sierra for transparency instead of efficiency. For instance, Sierra uses several generate-and-test loops where the tests are hypotheses of the theory. This is much less efficient than building the hypotheses into the generator.* But it lends credence to the claim that the model generates exactly the predictions allowed by the hypotheses.

(6) A set of arguments, one for each hypothesis, that shows why the hypothesis should be in the theory, and what would happen if it were replaced by a competing hypothesis. This involves showing how each hypothesis, in the context of its interactions with the others, increases observational adequacy, or reduces degrees of freedom, or improves the adequacy of the theory in some other way.
References

Anderson, J.R. Acquisition of cognitive skill. Psychological Review, 1982, 89, 369-406.
Anzai, Y. & Simon, H.A. The theory of learning by doing. Psychological Review, 1979, 86, 124-140.
Austin, J.L. How to do things with words. New York: Oxford University Press, 1962.
Brown, J.S. & Burton, R.B. Diagnostic models for procedural bugs in basic mathematical skills. Cognitive Science, 1978, 2, 155-192.
Brown, J.S. & VanLehn, K. Repair theory: A generative theory of bugs in procedural skills. Cognitive Science, 1980, 4, 379-426.
Burton, R.B. DEBUGGY: Diagnosis of errors in basic mathematical skills. In D.H. Sleeman & J.S. Brown (Eds.), Intelligent tutoring systems. London: Academic Press, 1981.
Buswell, G.T. Diagnostic studies in arithmetic. Chicago: University of Chicago Press, 1926.
Gordon, D. & Lakoff, G. Conversational postulates. In D. Davidson & G. Harman (Eds.), Semantics of natural language. Dordrecht: Reidel Press, 1975.
Grice, H.P. Logic and conversation. In D. Davidson & G. Harman (Eds.), Semantics of natural language. Dordrecht: Reidel Press, 1975.
Greeno, J. Personal communication, 1982.
Mitchell, T.M., Utgoff, P.E., & Banerji, R.B. Learning problem-solving heuristics by experimentation. In R.S. Michalski, T.M. Mitchell, & J. Carbonell (Eds.), Machine learning. Palo Alto, CA: Tioga, 1983.
Newell, A. & Rosenbloom, P. Mechanisms of skill acquisition and the law of practice. In J.R. Anderson (Ed.), Cognitive skills and their acquisition. Hillsdale, NJ: Erlbaum, 1981.
Neves, D.M. Learning procedures from examples. Unpublished doctoral dissertation, Department of Psychology, Carnegie-Mellon University, Pittsburgh, PA, 1981.
Tatsuoka, K. & Baille, R. Personal communication, 1983.
VanLehn, K. Bugs are not enough: Empirical studies of bugs, impasses and repairs in procedural skills. Journal of Mathematical Behavior, 1982, 3, 3-72.
VanLehn, K., Brown, J.S. & Greeno, J.G. Competitive argumentation in computational theories of cognition. In W. Kintsch, J. Miller & P. Polson (Eds.), Methods and tactics in cognitive science. New York: Erlbaum, forthcoming.
VanLehn, K. Felicity conditions for human skill acquisition: Validating an AI-based theory. Unpublished doctoral dissertation, Massachusetts Institute of Technology, Cambridge, MA, 1983.
Wenger, R. Personal communication, 1983.

*It takes Sierra about 100 hours of Dorado time to process a single subtraction lesson sequence. This style of research would be infeasible without fast Lisp machines, such as the Dorado.
LEARNING PHYSICAL DESCRIPTIONS FROM FUNCTIONAL DEFINITIONS, EXAMPLES, AND PRECEDENTS

by Patrick H. Winston†, Thomas O. Binford‡, Boris Katz†, and Michael Lowry‡

Abstract

It is too hard to tell vision systems what things look like. It is easier to talk about purpose and what things are for. Consequently, we want vision systems to use functional descriptions to identify things, when necessary, and we want them to learn physical descriptions for themselves, when possible. This paper describes a theory that explains how to make such systems work. The theory is a synthesis of two sets of ideas: ideas about learning from precedents and exercises developed at MIT and ideas about physical description developed at Stanford. The strength of the synthesis is illustrated by way of representative experiments. All of these experiments have been performed with an implemented system.

Key Ideas

It is too hard to tell vision systems what things look like. It is easier to talk about purpose and what things are for. Consequently, we want vision systems to use functional descriptions to identify things, when necessary, and we want them to learn physical descriptions for themselves, when possible. For example, there are many kinds of cups: some have handles, some do not; some have smooth cylindrical bodies, some are fluted; some are made of porcelain, others are Styrofoam, and still others are metal. You could turn blue in the face describing all the physical possibilities. Functionally, however, all cups are things that are easy to drink from. Consequently, it is much easier to convey what cups are by saying what they are functionally.

†Patrick H. Winston and Boris Katz are at the MIT Artificial Intelligence Laboratory. ‡Thomas O. Binford and Michael Lowry are at the Stanford University Artificial Intelligence Laboratory.

To be more precise about what we are after, imagine that you are told cups are open vessels, standing stably, that you can lift. You see an object with a handle, an upward pointing concavity, and a flat bottom. You happen to know it is light. Because you already know something about bowls, bricks, and suitcases, you conclude that you are looking at a cup. You also create a physical model covering this particular cup type. Our first purpose, then, is to explain how physical identification can be done using functional definitions. Our second purpose is to show how to learn physical models using functional definitions and specific acts of identification. It is important to note that our theory of model learning involves a physical example and some precedents in addition to the functional definition: The physical example is essential, for otherwise there would be no way to know which precedents are relevant. The precedents are essential, for otherwise there would be no way to know which aspects of the physical example are relevant.

We now proceed to explain our function-to-form theory and to illustrate the ideas using some examples that have been run through our implementation. We begin by reviewing the sort of tasks performed by the system that embodies the theory of learning by analogy and constraint transfer. For details, see Winston [1981].

- A natural language interface translates English sentences describing a precedent and a problem into links in a semantic net. The input English interface was conceived and written by Katz. For details, see Katz and Winston [1982].

- A matcher determines a correspondence between the parts of the precedent and the problem.
Figure 1 illustrates.

- An analogizer determines if the questioned link in the problem is supported by the given links in the problem. To do this, the analogizer transfers the CAUSE links supplied by the precedent onto the problem. Figure 2 illustrates.

Figure 1. The matcher determines part correspondence using the links that populate the precedent and the problem. The matcher pays particular attention to links that are enmeshed in the CAUSE structure of the precedent.

Figure 2. The analogizer transfers CAUSE constraints from the precedent to the problem. The problem is solved if links in the problem match the links carried along with the transferred CAUSE structure. In this simple illustration, there is only one CAUSE link. This CAUSE link leads from the link to be shown to a link that already holds.

- A rule generator constructs an if-then rule to capture that portion of the causal structure in the precedent that is ferreted out by the problem. The then part of the if-then rule comes from the questioned link in the problem. The if parts come from links identified by the transferred CAUSE structure. Thus the rules look like this:

Rule RULE-1
if      link found using CAUSE structure
        link found using CAUSE structure
then    link to be shown to hold
case    names of all precedents used

In more complicated situations, no single precedent can supply enough causal structure. Consequently, several precedents must be strung together. A new precedent is sought whenever there is a path in the transferred CAUSE structure that does not lead to an already established link in the problem. The examples in this paper all use multiple precedents.

The Synthesis

In this section, we briefly describe the steps involved in our synthesis of the learning of ANALOGY and the physical representations of ACRONYM, an object modelling system based on generalized cylinders [Binford 1971, 1981, 1982; Brooks 1981]. In the next section we illustrate the steps by explaining a particular scenario in which the identification and learning system recognizes one physical model of a cup, expressed in ACRONYM's representation primitives, and then creates a model, in the form of an if-then rule, for that kind of cup. Here then are the steps in the synthesis:

1. Describe the thing to be recognized in functional terms. The functional description is given in English and translated into semantic net links.

2. Show a physical example. At the moment, the physical description is given in English, bypassing vision. The description is couched in the generalized-cylinder vocabulary of ACRONYM, however. Eventually this description will come optionally from ACRONYM.

3. Enhance the physical example's physical description. The basic physical description, produced either from English or from an image, occasionally requires English enhancement. English enhancement is required when there is a need to record physical properties such as material composition, weight, and articulation, which are not easily obtained from a vision system or a vision-system prosthesis.

4. Show that the functional requirements are met by the enhanced physical description, thus identifying the object. ANALOGY does this using precedents. Several precedents are usually necessary to show that all of the functional requirements are met.

5. Create a physical model of the functionally-defined concept.
ANALOGY creates a physical model in the form of an if-then rule whenever it successfully shows that a concept's functional requirements are met by a particular physical example. Since functional requirements usually can be met in a number of ways, there may be a number of physical models, each generated from a different physical example. Once such physical models are learned, examples of the concept can be recognized directly, without reference to functional requirements or to precedents. Moreover, ACRONYM can use the learned physical models to make top-down predictions about what things will be seen so that bottom-up procedures can look for those things.

Learning What an Ordinary Cup Looks Like

Now let us walk through the steps of the learning process, showing how to learn what an ordinary cup looks like. The first step is to describe the cup concept in terms of functional qualities such as liftability, stability, and ability to serve as an accessible container. This is done by way of the following English:

Let X be a definition. X is a definition of an object. The object is a cup because it is a stable liftable open-vessel. Remember X.

Of course, other, more elaborate definitions are possible, but this one seems to us to be good enough for the purpose of illustrating our learning theory. The English is translated into the semantic net shown in figure 3.

Figure 3. The functional definition of a cup. This semantic net is produced using an English description. AKO = A Kind Of.

The next step is to show an example of a cup, such as the one in figure 4. ACRONYM is capable of translating such visual information into the semantic net shown in figure 4. But inasmuch as our connection to ACRONYM is not complete, we currently bypass ACRONYM by using the following English instead:

Let E be an exercise. E is an exercise about a red object. The object's body is small. Its bottom is flat. The object has a handle and an upward-pointing concavity.

In contrast to the definition, the qualities involved in the description of the particular cup are all physical qualities, not functional ones. (Assume that all qualities involving scales, like small size and light weight, are relative to the human body, by default, unless otherwise indicated.)

In the next step, we enhance the physical example's physical description. This enables us to specify physical properties and links that are not obtainable from vision:

The object is light.

Now it is time to show that the functional requirements are met by the enhanced physical description. To do this requires using precedents relating the cup's functional descriptors to observed and stated physical descriptors. Three precedents are used. One indicates a way an object can be determined to be stable; another relates liftability to weight and having a handle; and still another explains what being an open-vessel means. All contain one thing that is irrelevant with respect to dealing with cups; these irrelevant things are representative of the detritus that can accompany the useful material.

Let X be a description. X is a description of a brick. The brick is stable because its bottom is flat. The brick is hard. Remember X.

Let X be a description. X is a description of a suitcase. The suitcase is liftable because it is graspable and because it is light. The suitcase is graspable because it has a handle.
The suitcase is useful because it is a portable container for clothes. Remember X.

Let X be a description. X is a description of a bowl. The bowl is an open-vessel because it has an upward-pointing concavity. The bowl contains tomato soup. Remember X.

With the functional definition in hand, together with relevant precedents, the analogy apparatus is ready to work as soon as it is stimulated by the following challenge:

In E, show that the object may be a cup.

This initiates a search for precedents relevant to showing something is a cup. The functional definition is retrieved. Next, a matcher determines the correspondence between parts of the exercise and the parts of the functional definition, a trivial task in this instance. Now the verifier overlays the cause links of the functional definition onto the exercise. Tracing through these overlayed cause links raises three questions: is the observed object stable, is it an open vessel, and is it liftable. All this is illustrated in figure 5.

Figure 5. The cause links of a functional description, acting as a precedent, are overlayed on the exercise, leading to other questioned links. Note that all of the exercise description is physical, albeit not all visual. Overlayed structure is dashed.

Questioning if the object is liftable leads to a second search for a precedent, this time one that relates function to form, causing the suitcase description to be retrieved. The suitcase description, shown in figure 6, is matched to the exercise, its causal structure is overlayed on the exercise, and other questions are raised: is the observed object light and does it have a handle. Since it is light and does have a handle, the suitcase description suffices to deal with the liftable issue, leaving open the stability and open-vessel questions. Thus the suitcase precedent, in effect, has a rule of inference buried in it, along with perhaps a lot of other useless things with respect to our purpose, including the statement about why the suitcase itself is useful. The job of analogy, then, is to find and exploit such implicit rules of inference.

Figure 6. Cause links from the suitcase precedent are overlayed on the exercise, leading to questions about whether the object is light and whether the object has a handle. Overlayed structure is dashed. Many links of the suitcase precedent are not shown to avoid clutter on the diagram.

Checking out stability is done using the description of a brick. A brick is stable because it has a flat bottom. Similarly, to see if the object is an open-vessel, a bowl is used. A bowl is an open vessel because it has an upward-pointing concavity. Figure 7 illustrates.

Figure 7. The brick precedent and the bowl precedent establish that the object is stable and that it is an open vessel. The cause links of the precedents are overlayed on the exercise, leading to questioned links that are immediately resolved by the facts. Overlayed structure is dashed. Many links of the precedents are not shown to avoid clutter on the diagram.

At this point, there is supporting evidence for the conclusion that the exercise object is a cup. Now we are ready for the final task, to build a physical model of the functionally defined concept. This is done by constructing an if-then rule from the links encountered in the problem-solving process: the questioned link goes to the then part; the links at the bottom of the transferred CAUSE structure go to the if part; and the intermediate links of the transferred CAUSE structure go into the unless part.¹ The result follows:

Rule RULE-1
if      [OBJECT-9 IS LIGHT]
        [OBJECT-9 HAS CONCAVITY-7]
        [OBJECT-9 HAS HANDLE-4]
        [OBJECT-9 HAS BOTTOM-7]
        [CONCAVITY-7 AKO CONCAVITY]
        [CONCAVITY-7 IS UPWARD-POINTING]
        [HANDLE-4 AKO HANDLE]
        [BOTTOM-7 AKO BOTTOM]
        [BOTTOM-7 IS FLAT]
then    [OBJECT-9 AKO CUP]
unless  [[OBJECT-9 AKO OPEN-VESSEL] IS FALSE]
        [[OBJECT-9 IS LIFTABLE] IS FALSE]
        [[OBJECT-9 IS GRASPABLE] IS FALSE]
        [[OBJECT-9 IS STABLE] IS FALSE]
case    DEFINITION-1 DESCRIPTION-2 DESCRIPTION-3 DESCRIPTION-1

¹The unless parts come from the links lying between those links supplying the if and then parts of the rule. For a rule to apply, it must be that there is no direct reason to believe any link in the rule's unless part, as explained in an earlier paper [Winston 1982].
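The rule-assembly step is simple enough to sketch. The following fragment (our reconstruction, not Winston's implementation; the tree encoding is invented for exposition) walks a transferred CAUSE tree rooted at the questioned link, sending leaves to the if part, the root to the then part, and intermediate links, negated, to the unless part.

def build_rule(cause_tree, name, cases):
    """cause_tree is (link, subtrees); leaves are (link, [])."""
    if_part, unless_part = [], []

    def walk(node, is_root):
        link, kids = node
        if not kids:
            if_part.append(link)                   # an observed fact
        else:
            if not is_root:
                unless_part.append(("NOT", link))  # deniable intermediate
            for kid in kids:
                walk(kid, False)

    walk(cause_tree, True)
    return {"name": name, "if": if_part, "then": cause_tree[0],
            "unless": unless_part, "case": cases}

tree = (("OBJECT-9", "AKO", "CUP"),
        [(("OBJECT-9", "IS", "LIFTABLE"),
          [(("OBJECT-9", "IS", "GRASPABLE"),
            [(("OBJECT-9", "HAS", "HANDLE-4"), [])]),
           (("OBJECT-9", "IS", "LIGHT"), [])]),
         (("OBJECT-9", "IS", "STABLE"),
          [(("BOTTOM-7", "IS", "FLAT"), [])]),
         (("OBJECT-9", "AKO", "OPEN-VESSEL"),
          [(("CONCAVITY-7", "IS", "UPWARD-POINTING"), [])])])

rule = build_rule(tree, "RULE-1", ["DEFINITION-1", "DESCRIPTION-2",
                                   "DESCRIPTION-3", "DESCRIPTION-1"])
print(rule["if"], rule["unless"], sep="\n")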
At first the unless conditions may seem strange, for if all the ordinary conditions hold and if the causal connections in the precedents represent certainties, then none of the unless conditions could trigger. However, the learning system assumes that the precedents' causal connections indicate tendencies, rather than certainties. Consequently, from the learning system's perspective, the unless conditions must appear. An earlier paper gives several examples where similar unless conditions are necessary [Winston 1982].

Learning What a Styrofoam Cup Looks Like

Styrofoam cups without handles are described by another rule that is learned in the same way using the same functional description. The only difference is that liftability is handled by way of a flashlight precedent rather than by the suitcase precedent.

Let X be a description. X is a description of a flashlight. The flashlight is liftable because its body is graspable and because the flashlight is light. The flashlight's body is graspable because it is small and cylindrical. Remember X.

Thus the learned rule is as follows:

Rule RULE-2
if      [OBJECT-10 IS LIGHT]
        [OBJECT-10 HAS BODY-9]
        [OBJECT-10 HAS CONCAVITY-8]
        [OBJECT-10 HAS BOTTOM-8]
        [CONCAVITY-8 AKO CONCAVITY]
        [CONCAVITY-8 IS UPWARD-POINTING]
        [BODY-9 AKO BODY]
        [BODY-9 IS CYLINDRICAL]
        [BODY-9 IS SMALL]
        [BOTTOM-8 AKO BOTTOM]
        [BOTTOM-8 IS FLAT]
then    [OBJECT-10 AKO CUP]
unless  [[OBJECT-10 AKO OPEN-VESSEL] IS FALSE]
        [[OBJECT-10 IS LIFTABLE] IS FALSE]
        [[OBJECT-10 IS STABLE] IS FALSE]
        [[BODY-9 IS GRASPABLE] IS FALSE]
case    DEFINITION-1 DESCRIPTION-2 DESCRIPTION-4 DESCRIPTION-1

Recognizing Cups and Using Censors

We now have two descriptions that enable direct recognition of cups. These can be used, for example, on the following descriptions, conveyed by ACRONYM or by the natural language interface:

Let E be an exercise. E is an exercise about a light object. The object's body is small. The object has a handle. The object's bottom is flat. Its concavity is upward-pointing. Its contents are hot. In E show that the object may be a cup.

Let E be an exercise. E is an exercise about a light object. The object's bottom is flat. Its body is small and cylindrical. Its concavity is upward-pointing. Its contents are hot. Its body's material is an insulator. In E show that the object may be a cup.

For the first of these two exercises, the rule requiring a handle works immediately. It is immaterial that the contents of the cup are hot.
For the second, the rule requiring a small, cylindrical body works immediately. Again it is immaterial that the contents of the cup are hot since nothing is known about the links among content temperature, graspability, and insulating materials. Providing some knowledge about these things by way of some censors makes identification more interesting.

Suppose, for example, that we teach or tell the machine that an object with hot contents will not have a graspable body, given no reason to doubt that the object's body is hot. Further suppose that we teach or tell the machine that an object's body is not hot, even if its contents are, if the body is made from an insulator. All this is captured by the following censor rules, each of which can make a simple physical deduction:

Let C1 be a censor. C1 is a censor about an object. The object's body is not graspable because its contents are hot unless its body is not hot. Make C1 a censor using the object's body is not graspable.

Let C2 be a censor. C2 is a censor about an object. The object's contents are hot. Its body is not hot because its body's material is an insulator. Make C2 a censor using the object's body is not hot.

Repeating the second exercise now evokes the following scenario: Asking whether the object is a cup activates the rule about cups without handles. The if conditions of the rule are satisfied. The unless conditions of the rule are checked. One of these conditions states that the object's body must not be plainly ungraspable. Asking about graspability activates the censor relating graspability to hot contents. The censor's if condition is satisfied, and the censor is about to block the cup-identifying rule. The censor's unless condition must be checked first, however. The censor's unless condition pertains to hot bodies. This condition activates a second censor, the one denying that a body is hot if it is made of an insulator. This second censor's if condition is satisfied, and there are no unless conditions. The second censor blocks the first censor. The first censor therefore cannot block the cup-identifying rule. The rule identifies the object as a cup. It would not have worked if the contents were hot and the body were made from something other than an insulator.
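The blocking behavior in this scenario can be captured in a few lines. The sketch below (our reconstruction, not the actual implementation; the fact and censor encodings are invented) shows why C2 unblocks the cup-identifying rule: a censor deduction succeeds only if its own unless conditions are not themselves believed.

def believed(link, facts, censors, depth=0):
    """A link is believed if it is a stated fact or is deduced by a
    censor whose if parts hold and whose unless parts are not believed."""
    if link in facts or depth > 10:       # depth guard for the sketch
        return link in facts
    for c in censors:
        if c["then"] == link \
           and all(believed(f, facts, censors, depth + 1) for f in c["if"]) \
           and not any(believed(u, facts, censors, depth + 1)
                       for u in c["unless"]):
            return True
    return False

facts = {("contents", "hot"), ("body-material", "insulator")}
censors = [{"then": ("body", "not-graspable"),            # C1
            "if": [("contents", "hot")],
            "unless": [("body", "not-hot")]},
           {"then": ("body", "not-hot"),                  # C2
            "if": [("body-material", "insulator")],
            "unless": []}]

# C2 blocks C1, so C1 cannot block the cup rule's graspability check:
print(believed(("body", "not-graspable"), facts, censors))   # False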
Related Work

The theory explained in this paper builds directly on two sets of ideas: one set that involves a theory of precedent-driven learning using constraint transfer [Winston 1979, 1981, 1982]; and another set that involves model-driven recognition using generalized cylinders [Binford 1971, 1981, 1982; Brooks 1981].

Another important precedent to this paper is the work of Freeman and Newell on the role of functional reasoning in design. In their paper on the subject [1971], they proposed that available structures should be described in terms of functions provided and functions performed, and they hinted that some of this knowledge might be accumulated through experience.

Another way to learn what things look like is by near misses. Since the use of near misses was introduced by Winston [1970], several researchers have offered improved methods for exploiting near miss information. See Dietterich and Michalski [1981] for an excellent review of work by the authors, Hayes-Roth, and Vere. Also see work by Mitchell [1982].

Acknowledgments

This paper was improved by comments from Robert Berwick, Randall Davis, and Karen A. Prendergast.

References

Binford, Thomas O., "Visual Perception by Computer," Proc. IEEE Conf. on Systems Science and Cybernetics, Miami, December, 1971.

Binford, Thomas O., "Inferring Surfaces from Images," Artificial Intelligence, vol. 17, August, 1981. Volume 17 is also available as Computer Vision, edited by J. Michael Brady, North-Holland, Amsterdam, 1981.

Binford, Thomas O., "Survey of Model-Based Image Analysis Systems," Robotics Research, vol. 1, no. 1, Spring, 1982.

Brooks, Rodney A., "Symbolic Reasoning Among 3-D Models and 2-D Images," PhD Thesis, Stanford University, Computer Science Department report STAN-CS-81-861, 1981. A shorter version is "Symbolic Reasoning Among 3-D Models and 2-D Images," Artificial Intelligence, vol. 17, August, 1981. Volume 17 is also available as Computer Vision, edited by J. Michael Brady, North-Holland, Amsterdam, 1981.

Dietterich, Thomas G. and Ryszard S. Michalski, "Inductive Learning of Structural Descriptions," Artificial Intelligence, vol. 16, no. 3, 1981.

Freeman, P. and Allen Newell, "A Model for Functional Reasoning in Design," Proceedings of the Second International Joint Conference on Artificial Intelligence, London, England, 1971.

Katz, Boris and Patrick H. Winston, "Parsing and Generating English using Commutative Transformations," MIT Artificial Intelligence Laboratory Memo No. 677, May, 1982. A version is also available as "A Two-way Natural Language Interface," in Integrated Interactive Computing Systems, edited by P. Degano and E. Sandewall, North-Holland, Amsterdam, 1982. Also see "A Three-Step Procedure for Language Generation," by Boris Katz, MIT Artificial Intelligence Laboratory Memo No. 599, December, 1980.

Mitchell, Tom M., "Generalization as Search," Artificial Intelligence, vol. 18, no. 2, 1982.

Winston, Patrick Henry, "Learning Structural Descriptions from Examples," Ph.D. thesis, MIT, 1970. A shortened version is in The Psychology of Computer Vision, edited by Patrick Henry Winston, McGraw-Hill Book Company, New York, 1975.

Winston, Patrick Henry, "Learning and Reasoning by Analogy," CACM, vol. 23, no. 12, December, 1980. A version with details is available as "Learning and Reasoning by Analogy: the Details," MIT Artificial Intelligence Laboratory Memo No. 520, April 1979.

Winston, Patrick Henry, "Learning New Principles from Precedents and Exercises," Artificial Intelligence, vol. 19, no. 3. A version with details is available as "Learning New Principles from Precedents and Exercises: the Details," MIT Artificial Intelligence Laboratory Memo No. 632, May 1981.

Winston, Patrick Henry, "Learning by Augmenting Rules and Accumulating Censors," MIT Artificial Intelligence Laboratory Memo No. 678, May 1982.

This research was done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for MIT's artificial-intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505.
TWO RESULTS CONCERNING AMBIGUITY IN SHAPE FROM SHADING

Michael J. Brooks, The School of Mathematical Sciences, Flinders University of South Australia, Adelaide, S.A. 5042, Australia.

ABSTRACT

Two shape from shading problems are considered, one involving an image of a plane and the other an image of a hemisphere. The former is shown to be ambiguous because it can be generated by an infinite number of ruled surfaces. The latter, in contrast, is shown to have only the hemisphere and its reversal as solutions, although some subregions of the image are shown to be infinitely ambiguous.

I INTRODUCTION

The problem of recovering object shape from image intensity has been termed the shape from shading problem [5]. While several methods have been devised to solve a simplified form of this problem, little attention has been paid to the fundamental question of precisely how much can be determined from shading information. This paper addresses the more general question of the computability of shape from shading.

The principal factors determining an image are illumination, surface material, surface shape and image projection. The image-forming process encodes these factors as intensity values. Inferring shape from shading thus corresponds to decoding the information encoded in the intensity values [1]. In order to recover surface shape, it is necessary to know some details about the illumination, surface material and projection that were involved in the formation of the image. It may be, however, that there are many shapes that can generate the image under these conditions. If this is so, the image is said to be ambiguous and the particular surface that generated the image cannot be determined.

The shape from shading problem reduces to that of solving a first-order partial differential equation (FOPDE) [5]. Consequently, the computability problem reduces to determining whether a given FOPDE has a unique solution, or many solutions. In general, this is a difficult mathematical problem, as researchers in this area have discovered ([2], [3], [4]).

Suppose we are given an image and are told that it represents the orthographic projection of a smooth Lambertian surface illuminated by a distant point source located on the axis of projection. We are in a position to form the FOPDE R(p,q) = E(x,y), where E and R represent the image and reflectance map [6] respectively. The questions arise: (i) if, unknown to us, the image is of a plane, how ambiguous is the image? Are there smooth, non-planar surfaces that could generate the image, and if so, what are they? (ii) if, unknown to us, the image is of a wholly illuminated hemisphere, what is the complete set of smooth shapes satisfying the image? What boundary conditions suffice to make the hemisphere solution unique? How is the degree of ambiguity affected by considering only a subregion of the image? The answers to these questions given below are extracted from my thesis [2]. Some aspects of the second question have been considered (in a very different way) by both Bruss [3], and Deift and Sylvester [4].

II FINDING SOLUTIONS BY SOLVING FOPDEs

A. An image of a plane

Given a plane with surface normal (p_s, q_s, -1) that is illuminated by a source in the direction (0,0,-1) as described above, we obtain the shape from shading problem implicit in the FOPDE

    1 / sqrt(p^2 + q^2 + 1) = cos(theta_i),    (1)

where cos(theta_i) = 1 / sqrt(p_s^2 + q_s^2 + 1) is the cosine of the incident angle of light.
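For concreteness, the imaging model of equation (1) is easy to state in code. The small illustration below (ours, not from the paper; the numerical values are arbitrary) evaluates the reflectance map and shows why the image of a plane is a single constant, and why any surface with the same gradient magnitude everywhere produces the same image.

import math

def reflectance(p, q):
    """R(p, q) = 1/sqrt(p^2 + q^2 + 1): Lambertian surface, source and
    viewer aligned, orthographic projection, as in equation (1)."""
    return 1.0 / math.sqrt(p * p + q * q + 1.0)

# A plane z = ps*x + qs*y + c has gradient (ps, qs) at every image point,
# so E(x, y) is the constant cos(theta_i); so does any other surface whose
# gradient magnitude equals sqrt(ps^2 + qs^2) everywhere.
ps, qs = 0.5, -0.25
print(reflectance(ps, qs))          # cos(theta_i)
print(reflectance(-0.25, -0.5))     # same gradient magnitude, same image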
The task is to find all shapes z(x,y) which satisfy this equation over some region A in the xy-plane. It is easily shown that the system of planes

    z(x,y) = a x + sqrt(t^2 - a^2) y + C    (where t = tan(theta_i))

satisfies (1). Therefore, providing p_s and q_s are not both equal to zero, the image is ambiguous because it can be satisfied by an infinite number of planes. This is obvious when we consider the reflectance map, which provides precisely the set of planes that have a given intensity value (in this case, the set will be one-dimensionally infinite). It remains to determine whether there are non-planar solutions satisfying (1). Indeed there are, since any sections of the cones

    z(x,y) = ± t sqrt((x - α)^2 + (y - β)^2) + γ

will satisfy (1) providing the point (α, β) does not belong to the region A (for otherwise the solution would fail to be smooth). Furthermore, it can be shown that the system of surfaces captured by the parametric form

    r(θ, φ) = a(θ) + φ b(θ),    (2)

where

    a(θ) = (ψ(θ) cos θ - ψ'(θ) sin θ, ψ(θ) sin θ + ψ'(θ) cos θ, 0),
    b(θ) = (cos θ, sin θ, -t),

will satisfy (1) for any smooth function ψ that is supplied. Naturally, it will be necessary to ensure that a given solution is well-defined and single-valued over A, but some sections of any of the solutions will satisfy (1) after undergoing a suitable translation and dilation (which will always be permitted since the image is of constant intensity). The general solution to (1) given above also shows, interestingly, that all solutions are ruled surfaces (figure 1).

Figure 1. Ruled surface solutions to an image of a plane can be constructed by generating functions a and b (using equation (2)) and employing them as shown.

B. An image of a hemisphere

For a hemisphere imaged under the conditions described in section I, the shape from shading problem reduces to finding shapes z(x,y) that satisfy the equation

    1 / sqrt((∂z/∂x)^2 + (∂z/∂y)^2 + 1) = sqrt(1 - x^2 - y^2)    (3)

over various regions A ⊆ {(x,y) : x^2 + y^2 < 1}. The hemisphere of radius 1 and its concave reversal, given by

    z(x,y) = ± sqrt(1 - x^2 - y^2) + C,

are solutions to the equation. To determine other solutions, it is useful to transform (3) into the equivalent polar form

    1 / sqrt((∂z/∂r)^2 + (1/r^2)(∂z/∂θ)^2 + 1) = sqrt(1 - r^2).    (4)

Now, the system of surfaces z(r,θ) = kθ + g(r,k) + M will satisfy (4) provided that

    (∂g/∂r)^2 + k^2/r^2 = r^2/(1 - r^2).

Consequently, if we choose

    g(r,k) = ∫ sqrt( r^2/(1 - r^2) - k^2/r^2 ) dr,

we obtain

    z(r,θ; k, M) = kθ ± ∫ sqrt( r^2/(1 - r^2) - k^2/r^2 ) dr + M.

This is a two-parameter solution with the following properties:

(i) z(r,θ; 0, M) = ± ∫ r/sqrt(1 - r^2) dr + M = ∓ sqrt(1 - r^2) + C, which correspond to the hemisphere solutions given above.

(ii) g(r,k) is only defined for r^2/(1 - r^2) ≥ k^2/r^2. Thus we require that

    ( k sqrt(k^2 + 4) - k^2 ) / 2 ≤ r^2 < 1,

and so when k ≠ 0, z(r,θ; k, M) is defined only over an annulus in the z = 0 plane.

(iii) when k ≠ 0, z(r,θ; k, M) is not periodic over 2π and is therefore not smooth over {(r,θ) : 0 ≤ r < 1}.

(iv) g(r,k) is bounded: as r → 1, g(r,k) tends to a finite limit.

The k ≠ 0 solutions to (3) are helical bands defined over an annulus in the z = 0 plane. Any graph defined over 0 ≤ θ ≤ 2π that is selected from one of these solutions will not be smooth. Figure 2 gives an example of such a graph.

Figure 2. Under the conditions described in the text, the helical band shown above will look the same as a similar portion of a hemispherical bowl. Some sections of an image of a hemisphere are therefore ambiguous.
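The claimed two-parameter family can be checked mechanically. The sketch below (ours; it assumes SymPy is available) verifies equation (3) for the helical surfaces by comparing squared intensities, which sidesteps square-root branch questions for r near 1.

import sympy as sp

r, k = sp.symbols('r k', positive=True)
g_prime_sq = r**2 / (1 - r**2) - k**2 / r**2   # the choice made for g above
z_r_sq = g_prime_sq                             # (dz/dr)^2
z_theta = k                                     # dz/dtheta for z = k*theta + g
grad_sq = z_r_sq + (z_theta / r)**2             # collapses to r^2/(1 - r^2)

# Equation (3) squared: 1/(grad_sq + 1) should equal 1 - r^2 = 1 - x^2 - y^2.
print(sp.simplify(1 / (grad_sq + 1) - (1 - r**2)))   # -> 0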
Brooks [2] shows that there are no solutions other than the hemisphere and its reversal that satisfy (3) over the whole of the disc {(x,y) : x^2 + y^2 < 1}.* However, this is not true for some sections of the image. Figure 3a shows some subregions over which there exist many solutions, and figure 3b shows some subregions over which there exist only two solutions.

Figure 3. Shaded subsets of an image of a hemisphere. The subsets in figure (a) have an infinite number of solutions, while those in figure (b) have two.

Any subregion of {(x,y) : 0 < x^2 + y^2 < 1} that does not completely surround the point (x,y) = (0,0) will have infinitely many solution shapes. Conversely, any subregion that contains or surrounds (x,y) = (0,0) will have only two solution shapes: the hemisphere and its reversal. The inclusion of the point (x,y) = (0,0) in the image severely constrains the number of possible solutions. This is not surprising when we reconsider (3) and note that the surface normal of any solution at (0,0,z) is bound to be (0,0,-1).

III CONCLUSION

Our understanding of the computability of shape from shading must be improved. This will lead to a better appreciation of the limitations of current algorithms, and will help reveal the boundary conditions necessary for a solution to be determined. For example, it will prove useful to know that a region of constant intensity must correspond to a ruled surface (if it was formed according to the conditions given earlier). Any available boundary conditions can then further reduce the size of this already restricted solution set.

*This requires consideration of possible envelope solutions.

ACKNOWLEDGEMENTS

I would like to thank John Rice for his mathematical assistance and encouragement. Bruce Nelson offered valuable comments on a draft of this paper.

REFERENCES

[1] Barrow, H.G. and Tenenbaum, J.M., (1978). Recovering intrinsic scene characteristics from images, in Computer Vision Systems, Hanson, A. and Riseman, E. (eds.), Academic Press.
[2] Brooks, M.J., (1982). Shape from shading discretely, Ph.D. thesis, Department of Computer Science, Essex University.
[3] Bruss, A.R., (1981). The image irradiance equation: its solution and application, Ph.D. thesis, Artificial Intelligence Lab., Massachusetts Institute of Technology.
[4] Deift, P. and Sylvester, J., (1981). Some remarks on the shape-from-shading problem in computer vision, Journal of Mathematical Analysis and Applications, 84, pp. 235-248.
[5] Horn, B.K.P., (1970). Shape from shading: a method for obtaining the shape of a smooth opaque object from one view, MAC TR-79, MIT.
[6] Horn, B.K.P., (1977). Understanding image intensities, Artificial Intelligence, 8, pp. 201-231.
FIND-PATH FOR A PUMA-CLASS ROBOT

Rodney A. Brooks, MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, Massachusetts, 02139, U.S.A.

ABSTRACT

Collision free motions for a manipulator with revolute joints (e.g. a PUMA) are planned through an obstacle littered workspace by first describing free space in two ways: as freeways for the hand and payload ensemble and as freeways for the upperarm. Freeways match volumes swept out by manipulator motions and can be "inverted" to find a class of topologically equivalent path segments. The two freeway spaces are searched concurrently under projection of constraints determined by motion of the forearm.

I. Introduction

A key component of automatic planning systems for robot assembly operations is a gross motion planner for the manipulator and its payload. Motions of the manipulator should avoid collisions with obstacles in the workspace. In this paper we present a new approach to planning collision free motions for a manipulator with revolute joints (e.g. a PUMA). It is based on a method presented at AAAI-82 for planning motions for a polygon through a two dimensional workspace (Brooks [1983a]). Free space is described in two ways: as freeways for the hand and payload ensemble and as freeways for the upperarm. Freeways match volumes swept out by manipulator motions and can be "inverted" to find a class of topologically equivalent path segments. The two freeway spaces are searched concurrently under projection of constraints determined by motion of the forearm. The sequence in figure 1 illustrates a path found by the algorithm. A key characteristic of our solution is that it solves a richer class of problems than merely finding safe paths for a manipulator with payload. On failure it can provide information to a higher level planner on how to alter the workspace so that it can find a solution to the new problem.

A. Previous approaches

Algorithms have been detailed for finding paths for polyhedra through space. The problem is much harder for general articulated manipulators. Schwartz and Sharir [1982] have demonstrated the existence of a polynomial algorithm for a general hinged device. Unfortunately the best known time bound for the algorithm for a six degree of freedom manipulator is a polynomial of extremely high degree, where n is polynomially dependent on the number of obstacles. The algorithm is of theoretical interest only.

Practical algorithms have been few, and fall into two classes.

1. Lozano-Perez [1981, 1983] restricted attention to cartesian manipulators. The links of the manipulator can not rotate and so the joint space of the manipulator corresponds exactly to the configuration space for motion of the payload alone.

2. Udupa [1977] and Widdoes [1974] presented methods for the Stanford arm. Both rely on approximations for the payload, limited wrist action, and tesselation of joint space to describe forbidden and free regions of real space. The problem with tesselation schemes is that to get adequate motion control a multi-dimensional space must be finely tesselated.

B. The problem to be solved

The algorithm presented below is not a complete solution to the find-path problem. It is restricted in the following ways. We find paths where the payload is moved in straight lines, either horizontal or vertical, and is only re-oriented by rotations about the vertical axis of the world coordinate system. Thus for a six degree of freedom PUMA, joint 4 is kept fixed (a 5 dof PUMA has no joint 4), and joint 5 is coupled to the sum of the angles of joints 2 and 3 so that the axis of joint 6 is always kept vertical. Thus we consider only 4 degrees of freedom for the PUMA.

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's Artificial Intelligence research is provided in part by the System Development Foundation, in part by the Office of Naval Research under Office of Naval Research contract N00014-81-K-0494, and in part by the Advanced Research Projects Agency under Office of Naval Research contracts N00014-80-C-0505 and N00014-82-K-0334.
The payload and the hand are merged geometrically, and the payload is considered to be a prism with convex cross section. The payload can rotate about the vertical, as joint 6 rotates.

Obstacles in the workspace are of two types: those supported from below and those hanging from above. Both are prisms with convex cross sections. Non-convex obstacles can be modelled by overlapping prisms. Prisms can be supported from below if they rest on the workspace table or on one another as long as they are fully supported. Thus no point in free space ever has a bottom supported obstacle above it. Work is currently under way to extract such obstacle descriptions from depth measurements from a stereo pair of overhead cameras. Similar pre-defined obstacles may also hang from above, intruding into the workspace of the upper-arm and fore-arm.

The class of motions allowed suffices for most assembly operations, and with appropriate algorithms for re-orienting the payload without major arm motion, the algorithm can provide gross motion planning for all but the most difficult realistic problems.

Figure 1. A path found by the algorithm. Part (a) is the initial configuration. Part (b) is a plan view (rotated). Part (c) is a plan view of the payload path. The remaining images (top to bottom, left to right) show the path.

Figure 2A. The PUMA has six revolute joints. It can be decomposed into three components: the upperarm, the forearm, and the combined wrist, hand and payload.

Figure 2B. A freeway is an elongated piece of free space which describes a path between obstacles.

Figure 2C. The definition of R(θ), the radius function of an object (a). Part (b) shows the geometric construction of R(θ) = d_v cos(θ - η_v) (locally). Part (c) shows function R in polar coordinates.

Figure 2D. The workspace of figure 1 has three different horizontal cross sections. Here we see the spines of freeways for the payload in the lowest cross section in which they are valid.

II. Payload Space

Figure 2A is a side view of a PUMA robot grasping an object. During gross motion we treat the wrist, hand and payload as a single object. It can translate through space (joints 1, 2 and 3 provide the motion while joint 5 compensates for orientation changes to keep the wrist stem axis vertical) and rotate about the vertical axis (joint 6). The first step of the find-path algorithm is to find a prism (the payload prism) which contains the wrist, hand and payload, with axis of extension parallel to the wrist stem. This can be simply done by taking the convex hull of the projection of these parts into the horizontal plane, then sweeping it up from the base of the payload object through the wrist. Since obstacles must be supported completely and motions will only be horizontal or vertical, it follows that it suffices to find a path for the bottom cross section of the payload prism.
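The cross-section construction is standard computational geometry. A minimal sketch (ours, with invented vertex coordinates) uses Andrew's monotone chain convex hull on the projected vertices; the payload prism is this hull swept vertically.

def convex_hull(points):
    """Counter-clockwise convex hull by Andrew's monotone chain."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Projected vertices of wrist, hand and a grasped part (illustrative).
projected = [(0, 0), (4, 1), (2, 3), (1, 1), (3, 0), (0, 2)]
print(convex_hull(projected))   # the cross section to sweep vertically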
A. The problem in two dimensions

Brooks [1983a] demonstrated a new approach for the problem of moving a two dimensional polygon through a plane littered with obstacle polygons. It was based on two ideas. (1) Free space can be represented as overlapping "freeways". A freeway is a channel through free space with a straight axis and with a left and right radius at each point. Figure 2B illustrates. (2) The moving object can be characterized by its radius function, defined in figure 2C. The radius function characterizes the left (i.e. R(θ + π/2)) and right (i.e. R(θ - π/2)) radii of the object as it is swept along in some direction θ. The key point is that radius functions and sums of radius functions can be easily inverted. Thus given a freeway and the radius function of a moving object it is simple to determine the legal range of orientations which lead to no collisions when the object is swept down the freeway.

In the implementation described in this paper freeways are currently restricted to having constant left and right radii along their length. The first part of figure 2D shows the "spines" of freeways found at table top level of the scene in figure 1.

B. Adding vertical variations

There are at most as many different horizontal cross sections through the workspace as there are obstacles. We can apply the algorithms of Brooks [1983a] to each different cross section and derive a set of freeways valid over some horizontal slice of the workspace. Since obstacles must always be completely supported by others below, it is true that any freeway valid at some height h0 is also valid for all h > h0. Thus successively higher cross sections will inherit freeways from below. Figure 2D shows a series of cross sections through the example workspace of figure 1. These are plan views, rotated relative to the original scene. Each cross section includes the spines of all freeways which are valid for the first time at that height. Their validity extends vertically upwards through the remainder of the workspace. (Recall that we assume that the hanging obstacles do not protrude into the payload workspace.)
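The inversion step is easy to illustrate for the constant-radius freeways used here. The sketch below (ours; the polygon and radii are invented) samples the radius function of a convex payload cross section and keeps the sweep directions that fit a freeway, which is equivalent to ranging over object orientations relative to a fixed freeway axis.

import math

def radius(poly, theta):
    """Support distance of the polygon in direction theta (figure 2C):
    max over vertices v of v . (cos theta, sin theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return max(x * c + y * s for x, y in poly)

def legal_orientations(poly, left, right, samples=360):
    """Directions theta for which the object, swept along theta, stays
    inside a freeway with the given constant left and right radii."""
    ok = []
    for i in range(samples):
        th = 2 * math.pi * i / samples
        if radius(poly, th + math.pi / 2) <= left and \
           radius(poly, th - math.pi / 2) <= right:
            ok.append(th)
    return ok

square = [(-0.1, -0.1), (0.3, -0.1), (0.3, 0.1), (-0.1, 0.1)]
print(len(legal_orientations(square, left=0.25, right=0.25)))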
III. Upperarm Space

The upper-arm (see figure 2A) is large and can easily collide with any significantly sized obstacles in the workspace. Its motion is both controlled and constrained by joints 1 and 2. Let the state of these joints be described by variables φ and α respectively. If α = 0 then the upper-arm is sticking out horizontally. The class of possible motions of the arm can not be easily characterized as translational constraints as was the case with the payload. It is better to describe freeways for it in the configuration space (Lozano-Perez [1981, 1983]) of φ and α. Furthermore the upper-arm can not change orientation in φ-α space and its shape is fixed over all time (in contrast to the payload, which changes shape as things are picked up and put down). Therefore we can compile in special knowledge of that shape (in contrast to the general R(θ) function used for the payload).

A. Constraints in α-space

Consider figure 3A: a cross section of the upper-arm and an obstacle for a particular fixed φ. All lengths in the diagram are labelled with positive values. The presence of the obstacle puts constraints on the valid range of angles of joint 2. Clearly the presence of the obstacle implies that α > τ + η = atan(z, m) + η. It remains to determine η. Let r = sqrt(m^2 + z^2). Writing the contact point as (r cos η, r sin η), we have r sin η = b1 - a1 r cos η, whence η = arcsin( b1 / (r sqrt(a1^2 + 1)) ) - atan(a1, 1), and so

    α > atan(z, m) - atan(a1, 1) + arcsin( b1 / ( sqrt(m^2 + z^2) sqrt(a1^2 + 1) ) ).    (3)

Figure 3A. A cross section of the upperarm and its interaction with a prism obstacle. Equation (3) is the constraint derived from this figure.

Figure 3B. A plan view of the interaction of the upperarm and the top edge of an obstacle prism.

Due to the restrictions we have placed on our models of objects in the world we can conclude that vertices such as v in figure 3A must be points on the top edges of prisms. The problem to be solved is to determine how m and z vary with changing φ. The inequality, with appropriate sign changes, is valid for obstacles intruding into the workspace from above the arm also. We can ignore collisions with the end of the upper arm, as such a collision would also result in a collision for the fore-arm. We find it more convenient to catch such collisions under our fore-arm analysis.

B. Constraints in φ-space

Refer again to figure 3A and consider a varying φ and the interaction of the upper-arm with a prism edge. Clearly z remains constant, but m can vary. Refer to figure 3B. Suppose the edge has a normal with orientation ν0 and further that the edge has distance D from the origin. Let d be the displacement of the upper-arm from the axis of joint 1 of the arm. Then

    m = ( D - d sin(φ - ν0) ) / cos(φ - ν0).

Direct analysis of the effects on α of this formulation for m is difficult. However we can observe geometrically that m depends rather directly on the distance from O to P. The minimum possible OP is D (so long as the upper-arm intersects the edge for φ = ν0) or the value at one of the end points of the edge. Similarly the maximum can only occur at the end of an edge. When OP has length D then

    m = sqrt(D^2 - d^2).

Thus by examining at most three values for φ we can determine maximum and minimum values for m. Now consider the three terms on the right of inequality (3). The second term is constant with respect to φ. The third term increases as m decreases, and thus has its maximum over an edge for the minimum m. The behavior of the first term depends on the sign of z. For positive z (i.e. for obstacles higher than the axis of joint 2) the first term has maximum value for minimum m, whilst for negative z the maximum occurs at maximum m. Thus for a given z we can quickly produce a conservative bound on α over the range of φ where the upper-arm might intersect the edge. Detailed analysis shows that for most obstacles this conservative bound is very good.
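Under our reading of inequality (3), whose reconstruction from the source should be kept in mind, the conservative bound can be computed as below (a sketch, ours; the numerical values are invented, and a1, b1 describe the arm's lower-edge cross section as in figure 3A).

import math

def alpha_bound(z, m, a1, b1):
    """Right-hand side of inequality (3) for a vertex at height z and
    horizontal distance m from the joint 2 axis."""
    r = math.hypot(m, z)
    return (math.atan2(z, m) - math.atan2(a1, 1.0)
            + math.asin(b1 / (r * math.sqrt(a1 * a1 + 1.0))))

def conservative_alpha(z, D, d, v0, phi_lo, phi_hi, a1, b1):
    """Bound (3) over a phi interval for an edge with normal v0 at
    distance D; extreme m occurs at the interval ends or at phi = v0."""
    def m_of(phi):
        return (D - d * math.sin(phi - v0)) / math.cos(phi - v0)
    ms = [m_of(phi_lo), m_of(phi_hi)]
    if phi_lo <= v0 <= phi_hi:
        ms.append(math.sqrt(D * D - d * d))    # the minimum possible m
    return max(alpha_bound(z, m, a1, b1) for m in (min(ms), max(ms)))

print(conservative_alpha(z=0.2, D=0.6, d=0.1, v0=0.0,
                         phi_lo=-0.5, phi_hi=0.5, a1=0.1, b1=0.08))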
For a prismatic obstacle in the workspace we take each top edge and determine conservative α bounds over a φ range for each edge which "faces" the base of the manipulator. The resulting φ-α boxes are then merged and bounded by a single φ-α box. For large obstacles with many edges it is better to approximate by a series of adjacent φ-α boxes. The figure 3C series shows the φ-α boxes generated by the scene in figure 1. Notice that only the overhead obstacle and the larger obstacle on the table are represented. The small obstacle can not be reached by the upper-arm and so does not place any constraints on φ-α space.

The analysis above has been done for an upper-arm that is paper thin. An upper-arm with real thickness has infinitely many such cross sections, all of which can contribute constraints on α. Observe however that as the upper-arm is rotated downwards towards a horizontal edge, the arm will hit with one of its edges first (except when φ = ν0, when the lower surface strikes simultaneously across a line segment). Thus we need consider the constraints arising only from the two extreme cross sections (note that they have different d's, so the φ's which must be considered are different).

C. Freeways in φ-α space

We have successfully reduced the collision avoidance problem for the upper-arm in real space to a path finding problem for a point in a space populated with rectangular obstacles aligned with the axes of the space. Furthermore, since we allow only prismatic obstacles stacked on each other on the table, or hanging from above, there can be no "free-floating" obstacles in φ-α space. We can easily compute "freeways" in this space, and their spines are illustrated in the second part of figure 3C. Searching these freeways for a path is also trivial, and the third part of figure 3C illustrates the connected chain of φ-α freeways which must be negotiated to solve the problem shown in figure 1.

Figure 3C. The obstacles in φ-α space (φ is the horizontal axis and α the vertical) derived from the workspace of figure 1. The small obstacle in figure 1 is out of the reach of the upperarm so does not appear here. The freeways in this space and a planned path instance for the upperarm are also shown.

IV. Projecting Constraints

Our find-path algorithm is based on propagating constraints on motion of the upper-arm, fore-arm and payload through the searches for paths for these three physical components. In this section we examine the propagation of constraints on the upper-arm to become constraints on the payload.

Consider the plan view of figure 4A, and the problem of moving the payload along a horizontal path segment. The maximal value for α must occur when m is minimal (note that this m and the d from the diagram are different (but similar in concept) from those of section III). That will occur when m = sqrt(D^2 - d^2), i.e. at φ0 = ν0 + atan(d, m). Thus the maximal value for α occurs at φ0 if that is in the range of the motion segment, or at one of the extremes of the segment. Similar reasoning shows that the minimal value for α must occur at one of the segment extremes. Notice that the points of maxima and minima do not depend on the height of the segment of motion. The values of those maxima and minima will, however.

Consider a fixed point (x, y) in the table plane and how the value of α varies as the payload is lifted vertically above that point. Refer to the plan and side views of figure 4B. The side view is a cross section through the arm parallel to the upper and fore arms. Notice first that r = sqrt(x^2 + y^2 - d^2). Clearly the maximum achievable α is

    α_m = arccos( (r - l2) / l1 ),

where l1 and l2 are the upper-arm and fore-arm lengths (see figure 4B). Thus for an interval [α1, α2] the minimum and maximum heights which can be achieved by the end of the fore-arm are given by

    h = w + l1 sin α - sqrt( l2^2 - (r - l1 cos α)^2 )

for α = α1 and α = min(α2, α_m). From this we can readily compute the bounds on payload height for a particular φ whilst moving along a path segment subject to the constraints of a φ-α freeway. The argument above however implies that by considering the two end points of the motion segment and the point φ0 (when it is interior) and intersecting all the height constraints, we are guaranteed a safe set of heights which we can use for the motion along the segment.

Figure 4A. A plan view of the manipulator kinematics in moving the payload along a horizontal straight line segment.

Figure 4B. A side view of the manipulator kinematics.
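The height bounds of this section reduce to a couple of lines of trigonometry. A sketch (ours; the link lengths are illustrative, not PUMA specifications, and heights are measured from the joint 2 axis, i.e. w = 0):

import math

def height(alpha, r, l1, l2):
    """Forearm-end height for upperarm angle alpha at wrist radius r."""
    dx = r - l1 * math.cos(alpha)    # horizontal span left for the forearm
    return l1 * math.sin(alpha) - math.sqrt(l2 * l2 - dx * dx)

def height_range(a1, a2, r, l1, l2):
    """Heights for alpha = a1 and alpha = min(a2, alpha_m), as in the text."""
    alpha_m = math.acos((r - l2) / l1)    # largest alpha still reaching r
    return height(a1, r, l1, l2), height(min(a2, alpha_m), r, l1, l2)

print(height_range(a1=0.2, a2=1.2, r=0.7, l1=0.43, l2=0.43))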
If a freeway section is chosen in depth first search, and these constraints provide a non-empty range of heights then the path segment is certainly valid for the payload and upper-arm. The problem of fore- arm collisions remains. Such collisions are checked for each path segment during depth first search. Observe (figure 5) that over a given vertical line segment for the payload in the workspace, the fore-arm is closest to horizontal for the top of the segment, and more acute at the bottom. Furthermore for a particular height as we travel along a horizontal segment the arguments of section IV apply also to the fore-arm. Thus we can easily compute a bounding volume for the swept volume for the fore-arm travelling along the maximal height allowed on a segment. We project that swept volume down onto the table plane and determine which prisms it “shadows’, from above. The prisms are intersected with this volume and then each prism upper vertex has a distance below the fore-arm swept volume. If any distance is negative then the motion along the horizontal segment is completely forbidden. Otherwise the minimal distance gives a safe bound on how far we can lower the payload during traversal of the horizontal segment without producing a collision for the fore-arm. Figure 3C. The obstacles in c&-CY space (4 is the horizontal axis and cr the vertical) derived from the workspace of figure 1. The small obstacle in figure 1 is out of the reach of the upperarm so does not appear here. The freeways in this space and a planned path instance for the upperarm are also shown. VI. Conclusion By restricting the class of solutions we look for in the general find-path problem for a robot with revolute joints we have developed a practical path planner. The complex example path of figure 1 is found in less than 1 minute on an original MIT lisp machine - such machines have no floating point hardware and in general are much slower than, say, a VAX 11/780. Notice that besides lowering the upper-arm to get under the obstacle protruding into the workspace from above the planner had to rotate the payload so that it could squeeze around the outside of the obstacle on the table top! References Brooks, Itodney A. (1983a). Solving the find-path problem by good rcprc~sentalion of free space, IEEE Trans. on Systems, Man and Cybernetics (SMC-13):190-197. Brooks, Rodney A. (1983b). Planning Collision Free Motions for Pick and Place Operations, MIT AI-Memo 719, May. Lozano-Perez, Tomas (1981). Automatic Planning of Manipulator Transfer Movements, IEEE Trans. on Systems, Man and Cybernetics (SMC-11):681-698. --- (1983). Spatial Planning: A Configuration space Approach, IEEE Trans. on Computers (C--32):108-120. Schwartz, Jacob T. and Micha Sharir (1982). On the Piano Movers Problem II: General Properties for Computing Topological Properties of Real Algebraic Manifolds, Department of Computer Science, Courant Institute of Mathematical Sciences, NYU, Report 41, February. Udupa, Shriram M. (1977). Collision De tee tion and Avoidance in Computer Con trolled Manipulators, Proceedings of IJCAI-5, MIT, Cambridge, Ma., Aug., 737-748. Widdoes, L. Curtis (1974). Obstacle avoidance., A heuris- tic collision avoider for the Stanford robot arm.Unpublished memo, Stanford Artificial Intelligence Laboratory. in moving the Figure 4A. k plan view of the manipulator kinematics payload along a horizontal straight line segment. Figure 4B. A side view of the manipulator kinematics. Figure 5. 
Figure 5. During vertical motion of the payload, the fore-arm of the manipulator is closest to horizontal at the upper bound of payload travel, and more acute at the lower bound.
ABSTRACT EXPLANATIONS OF STRATEGY IN A DIAGNOSTIC CONSULTATION SYSTEM

Diane Warner Hasling
Heuristic Programming Project
Computer Science Department
Stanford University
Stanford, CA 94305

ABSTRACT

This paper presents the explanation system for NEOMYCIN, a medical consultation program. A consultation program plays the role of an expert to assist a user in solving a problem. An explanation of strategy describes the plan the program is using to reach a solution. Such an explanation is usually concrete, referring to aspects of the current problem situation. Abstract explanations articulate a general principle, which can be applied in different situations; such explanations are useful in teaching and in explaining by analogy. We describe the aspects of NEOMYCIN that make abstract strategic explanations possible--the representation of strategic knowledge explicitly and separately from domain knowledge--and demonstrate how this representation can be used to generate explanations.

I INTRODUCTION

The ability to explain reasoning and actions is usually considered an important component of any expert system. An explanation facility is useful on several levels: it can help knowledge engineers to debug and test the system during development, assure the sophisticated user that the system's knowledge and reasoning process is appropriate, and instruct the naive user or student about the knowledge in the system.

Several approaches have been used in existing explanation systems. For example, Shortliffe (Shortliffe, 1976) and Davis (Davis, 1976) introduced the idea of generating explanations by translating the rules that direct a consultation. Swartout (Swartout, 1981) uses an automatic programming approach to create a static "refinement structure", which can be examined during the consultation to provide justifications of the compiled code.

A strategy is "a careful plan or method, especially for achieving an end." To explain is "to make clear or plain; to give the reason for or cause of." Thus in a strategic explanation we are trying to make clear the plans and methods used in reaching a goal, in NEOMYCIN's case, the diagnosis of a medical problem. One could imagine explaining an action in at least two ways. In the first, the specifics of the situation are cited, with the strategy remaining relatively implicit. For example, "I'm asking whether the patient is receiving any medications in order to determine if she's receiving penicillin." In the second approach, the underlying strategy is made explicit. This is the kind of strategic explanation we want to generate: the general approach to solving the problem is mentioned, as well as the action taken in a particular situation. Explanations of this type allow the listener to see the larger problem-solving approach and thus to examine, and perhaps learn, the strategy being employed.

Our work is based on the hypothesis that an "understander" must have an idea of the problem-solving process, as well as domain knowledge, in order to understand the solution or solve the problem himself (Brown, 1978). Specifically, research in medical education (Elstein, 1978), (Benbassat, 1976) suggests that we state heuristics for students, teaching them explicitly how to acquire data and form diagnostic hypotheses.

Other AI programs have illustrated the importance of strategies in explanations.
S~II<I)I.U (Winogr,ld, 1972) is an c,!rly program th,lt incorporates history keeping to provide WI IY/lIOW expl~nntions of procedures used by a ‘robot’ in a simuLitc4 II!.OCI<SWOI~I.l) C:lVilUil~Wllt. ‘I‘hc prc;ccdurcs of this robot arc specific to the cnvironrncnt; conscqucnlly, .J,str:ict cxplnn,riions cuch a\ “I mo\cd the red block lo trcl~ic~ /~~(~~/~di/io,lr OS a Iligl!ef gollr’ al‘cf not possible. CEN’I‘,\LII< (Aikins, l%O), another nicdtcal consulWlon system, cxpl,tins its .\ctions in tcrrns of tlornain- spccilic ;ll;cr,itions ,mcl diagnostic prototypes. Swnrtout’s ,XPl,A IN progr.iiu rcfcrs to dom,lin princiJ)lcs--gcncr~ll rules and constraints about tljc domain--in its cxpl,m,&)nq. 111 each of thcsc p:‘vg,rnm*;, abstract !)rinciplcs h~ce lxcn instantiated and rcprc.;cntcd in problem- spccilic tcrnis. NI:OMYCIN gcncmtcr strategic explanations from ‘in abJi/lic/ rcprcscnt(Kion of str,iQy. in contr‘ist with other ,Ippro,~hcs, this stlat:cic knowlcdgc is coI:IplcIcIy scp,lrnlc f~onl the dom,rin knowlcdgc. This SCilClJl SLratcgy is insi,inli:llc’cl dyn,nilicnlly as Ihc conxiilt,~tion runs. ‘I hI!s bhcn the prograni dic;cuss,cs the prohlcm solution, it is ‘lblc to stntc a :cncral approach, as well ns how it applies in concrctc terms. II WOW STRA-IEGIC EXPLANATIONS ARE POSSIBLE -- ----- THE NEOMYCIN SYSTEM MYCIN (Shortliffc, 197(i), the precursor of NlX7kiYCIN, is unnblc to explain its strategy bccausc much of the strategic information is implicit in the ordering of rule cl~scs (Clanccy, 1983a). Ill NEOMYCIN, the problem-solving str.itcgy is both explicit and general. This section provides an ovcrvicw of the rcprcscntation of this str,ltegy in NIIOMYCIN, since this is the basis for our strntcgic cxplnnarions. Other aspects of the system, such JS the discasc taxonomy and other structuring of the domain knowlcdgc, XC‘ dcscribctl m (Clanccy, 1981). NEOMYCIN’s strategy is structured in terms of tasks, which correspond to rnctalevel goals and subgoals, and mctalcvcl rules (r~~&.r), which arc the methods for achieving thusc goals. ‘I’hc mclartllcs invoke other tasks, ultimately invoking ~hc base-lcvcl intcrprctcr to pursue domain goals or apply domain rules. ,:igurc 1 illustrates a portion of the task structure, uith mct,lrulcs linking the casks. ‘I‘hc cllkc structure ctlrrcntly inclutlcs 30 t,tsks and 74 nlctarulcs. From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. ‘I’csting an hypothesis is just one rc;~son for asking a question. Others Data Differential Information Decision / I ’ Establish Generate P‘;\ocess Questions Hypothesis Questions Hard I..1 ysj=+c Process Group Explore Explore Ask Datum Hypothesis Hypothesis Hypothesis Process Datum Test Hypothesis (Virus) 4 09 010 (Febrile) /\ d5 (16 Figure I: Inioc:ltion of f:rsks ill the cxamplti NFXJM\‘CIN consrrllation QiIcstion numbers co~~c:;p~~~~l to questions asked in the consultation, solid lines show tasks actu:illy done, dotted lines tllosc which might bc done. Norc how tasks such as’l’llS’1‘ I IY W’l‘l-1 f3IS arc invoked multiple times by a given task as well as by diffcrcnt tasks. This task structure rcprcscnts a gcncral diagnostic problem-solving method. Although our base-lcvcl for dcvclopmcnt has been mcdicinc, none of the tasks or mctarules mention the medical domain. As a result the strategy might be ported to other domains. (SW (Clancey, 1983b) for I‘urthcr discussion.) An ordered collection of metarulcs constitutes a procedure for achieving a task. 
Each metarule has a premise, which indicates when the metarule is applicable, and an action, indicating what should be done whenever the premise is satisfied. Figure 2 is a high-level abstraction of a task and its metarules. The premise looks in the domain knowledge base or the problem-solving history for findings and hypotheses with certain properties, for example, possible follow-up questions for a recent finding or a subtype of an active hypothesis. Associated actions would be to ask the user a question or call a task to refine the hypothesis under consideration.

Figure 2: Abstraction of a Task and Its Metarules. A task is invoked by a metarule of a previous task; its own metarules examine the hypothesis list, recent findings, and domain causal and taxonomic relations, and their actions find out about a domain datum, apply domain rules, or invoke other tasks.

Testing an hypothesis is just one reason for asking a question. Others include:

- Initial identification of the problem
- Follow-up questions to specify recent information in greater detail
- Questions triggered by recent data that suggested an hypothesis
- General questions to determine the completeness of the case history
- Screening data questions to determine the availability of desired information
- Questions asked while pursuing subgoals of the domain.

The number of reasons for asking questions testifies to the complexity of NEOMYCIN's diagnostic strategy and illustrates why strategic explanations are both useful and interesting.

III NEOMYCIN'S STRATEGIC EXPLANATIONS

A. Design Criteria

In determining what the program should explain and how it should be explained, we used several design criteria:

- Explanations should not presuppose any particular user population. The long-range goal of this research is to use NEOMYCIN as the foundation of a teaching system. At that point the strategic explanations developed here will be used to teach the strategy to students to whom it might be unfamiliar. Techniques used to generate explanations should be flexible enough to accommodate a model of the user.

- Explanations should be informative: rule names or task names are not sufficient.

- Explanations should be concrete or abstract, depending upon the situation. Thus it must be possible to produce explanations in either form. This should facilitate understanding both of the strategy and how it is actually applied.

- Explanations should be useful for the designer, as well as the end user of NEOMYCIN. The vocabularies of computer science and an application domain, such as medicine, are different in many ways. People tend to be most comfortable with the vocabulary of their field; the system should have the flexibility to accommodate a user-dependent choice of terminology.

- Explanations should be possible at the lowest level of interest; the "grain level" should be fine enough to permit this. To allow for use in debugging, we chose the level of rules and tasks as our lowest level. Higher level explanations can later be generated by omitting details below the appropriate level.

B. WHY and HOW Explanations

Excerpts from a NEOMYCIN consultation illustrate the strategic explanation capability most clearly. User input follows the double asterisks (**) and is shown in capital letters. The consultation begins:

    Please enter information about the patient.
            Name   Age   Sex   Race
    1) **   MARY   42    F     LATINO
    Please describe the chief complaints:
    ** HEADACHE
    ** STIFF-NECK
    ** NAUSEA
    **
    2) How many days has Mary's headache lasted?
    ** 10
    3) How severe is Mary's headache (on a scale of 0 to 4 with 0 for very mild and 4 for very severe)?
    ** WHY
    [i.e. WHY is it important to determine the severity of Mary's headache?]
    [1.0] We are trying to characterize the data just obtained and develop the differential. Headache severity is a characterizing feature of headache.
    [back to question 3...]
    ** WHY
    [i.e. WHY are we trying to characterize the data just obtained and develop the differential?]
    [2.0] We are trying to give thorough consideration to any recent findings. The fact that Mary has suffered from headaches recently is a new finding, which first must be characterized and then used to assist the diagnosis.
    [back to question 3...]
    ** WHY
    [i.e. WHY are we trying to give thorough consideration to any recent findings?]
    [3.0] We are trying to gather identifying information--the chief complaint and major problems. The user must state the problem.
    [back to question 3...]
    ** 3

    DIFFERENTIAL: (VIRUS 564) (CHRONIC-MENINGITIS 422)

To explain WHY something is being done by NEOMYCIN, we first restate the question as, "WHY are we trying to do this task?" Because each task is invoked by a metarule in another task (see Figure 2), we answer the query by stating the task and metarule that invoked the task in question. At the lowest level, there are implicit tasks of requesting data and applying rules. The template for an initial WHY explanation is:

    <request for data>
    ** WHY
    [i.e. WHY is it important to determine <data>?]
    [1.0] <immediately preceding task>
    <what is true about the domain knowledge base or the problem-solving history that enables the metarule that accomplishes this task to succeed>

In this case the current metarule is:

    METARULE073
    IF there is a datum that can be requested that is a characterizing feature of the recent finding that is currently being considered,
    THEN find out about the datum.

Questions 4 and 14 below are both asked as a result of the TEST-HYPOTHESIS task. Notice how the explanations of this single task reflect the different situations of the two WHY questions. Different metarules apply--question 4 is based on a strong "triggering" relation, while question 14 is based on only moderate evidence.

    4) Does Mary have a fever?
    ** WHY
    [i.e. WHY is it important to determine whether Mary has a fever?]
    [4.0] We are trying to decide whether Mary has an infection. Fever is strongly associated with infection.
    ** YES
    [back to question 4...]

    14) Does Mary have a history of granulomas on biopsy of the liver, lymph node, or other organ?
    ** WHY
    [i.e. WHY is it important to determine whether Mary has a history of granulomas on biopsy of the liver, lymph node, or other organ?]
    [19.0] We are trying to decide whether Mary has tuberculosis. A history of granulomas on biopsy of the liver, lymph node, or other organ makes tuberculosis more likely.
    [back to question 14...]

We mentioned earlier the idea of providing concrete or abstract explanations as appropriate. In the context of a diagnostic consultation, it seems reasonable to provide concrete explanations when possible. Abstract explanations can always be produced, however. The abstract explanation in this situation is:

    ** ABSTRACT
    [i.e. WHY is it important to determine the severity of Mary's headache?]
    [1.0] We are trying to characterize the data just obtained and develop the differential. A datum that can be requested is a characterizing feature of the recent finding that is currently being considered.

When subsequent WHY's are given, we ascend the strategy tree, as illustrated below.
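In the same toy setting sketched earlier (our illustration, not NEOMYCIN code), ascending the strategy tree amounts to walking the audit trail from the task in question up through its invokers, emitting one translation per level; this toy version assumes names are unique in the trail, and translations is a hypothetical dictionary mapping each task or metarule name to its English gloss:

    def why_chain(audit, translations, task_name):
        # audit holds (invokee, invoker) pairs recorded during the consultation
        caller_of = dict(audit)
        answers, current = [], task_name
        while current is not None:
            answers.append("We are trying to " + translations[current] + ".")
            current = caller_of.get(current)   # one level up per successive WHY
        return answers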
HOW is asked in terms of a previously mentioned task. The explanation presents any metarules associated with the relevant instance of the task that have been completed, as well as the one currently being executed.

As a result of the structure of tasks and metarules, the WHY/HOW explanations have an interesting property. A WHY explanation essentially states the premise of the metarule; this is exactly the reason the metarule succeeded. A HOW explanation is a statement of the action of a metarule; this is exactly what was done.

In the next example, we are asking HOW about a low-level task. Note that the answer is at the lowest level of applying rules.

    ** HOW 19.0
    [i.e. HOW are we trying to decide whether Mary has tuberculosis?]
    We did the following:
    [20.1] Applied a rule mentioning a causal precursor to tuberculosis (tb risk factor: rule525), and
    [20.2] Gave thorough consideration to any recent findings.
    [20.3] Applied a rule mentioning general evidence for tuberculosis (focal neurological signs: rule366), and
    [20.4] Gave thorough consideration to any recent findings.
    [20.5] Applied a rule mentioning general evidence for tuberculosis (compromised host: rule309), and
    [20.6] Gave thorough consideration to any recent findings.
    We are now:
    [20.7] Applying a rule mentioning general evidence for tuberculosis (a history of granulomas on biopsy of the liver, lymph node, or other organ: rule002).
    We will next:
    [20.8] Give thorough consideration to any recent findings.
    [back to question 14...]

The HOW question below refers to a high level task (ESTABLISH-HYPOTHESIS-SPACE) mentioned in the omitted sequence of WHY's. Notice that the explanation is at a high level and is entirely abstract.

    17) Has Mary been hospitalized recently?
    ** WHY
    ** HOW 26.0
    [i.e. HOW are we trying to develop the differential using basic history and pertinent physical exam?]
    We did the following:
    [26.1] Got a general idea of the problem: categorized it into one of several pathogenic classes or disease loci, or both.
    [26.2] Confirmed and refined the differential diagnosis through specific questions.
    We are now:
    [26.3] Rounding out the diagnostic information by looking generally into past medical history and by reviewing systems.
    [back to question 17...]

NEOMYCIN uses an explanation approach similar to MYCIN's, that of explaining its actions in terms of goals and rules, so a brief comparison of the two systems is useful (Figure 3). The structure of explanations is parallel, except that in MYCIN rules invoke subgoals through their premises, while NEOMYCIN metarules invoke subtasks through their actions. What makes NEOMYCIN's explanations different is that they are generated at the level of general strategies, instantiated with domain knowledge, when possible, to make them concrete.

Figure 3: Comparison of MYCIN and NEOMYCIN explanations.

    MYCIN                                   NEOMYCIN
    Basic reasoning:                        Basic reasoning:
      goal -> rule -> subgoal                 task -> metarule -> subtask
    A goal is pursued to satisfy the        A task is pursued when executing the
      premise of a domain rule                action of a metarule (forward
      (backward chaining)                     reasoning with rule sets)
    To explain WHY a goal is pursued,       To explain WHY a task is done, cite
      cite the domain rule that uses it       the metarule that invokes it
      as a subgoal (premise)                  (action)
    To explain HOW a goal is determined,    To explain HOW a task is
      cite the rules that conclude it         accomplished, cite the metarules
                                              that achieve it

Besides these strategic WHY's and HOW's, the user can ask about the current hypothesis, the set of hypotheses currently being considered, and evidence for hypotheses at the domain level.

C. Implementation Issues

We mentioned earlier that NEOMYCIN was designed with the intent of guiding a consultation with a general diagnostic strategy. A given task and associated metarules may be applied several times in different contexts in the course of the consultation, for example, testing several hypotheses. To produce concrete explanations, we keep records whenever a task is called or a metarule succeeds; this is sometimes called an audit trail. Data such as the focus of the task (e.g., the hypothesis being tested) and the metarule that called it are saved for tasks. Metarules that succeed are linked with any additional variables they manipulate, as well as any information that was obtained as an immediate result of their execution, such as questions that were asked and their answers. When an explanation of any of these is requested, the general translations are instantiated with this historical information.

Figure 4 presents several metarules for the TEST-HYPOTHESIS task translated abstractly.

    METARULE411
    IF the datum in question is strongly associated with the current focus
    THEN apply the related list of rules
    Trans: ((VAR ASKINGPARM) (DOMAINWORD "triggers") (VAR CURFOCUS))

    METARULE566
    IF the datum in question makes the current focus more likely
    THEN apply the related list of rules
    Trans: ((VAR ASKINGPARM) "makes" (VAR CURFOCUS) "more likely")

Figure 4: Sample NEOMYCIN metarules for the TEST-HYPOTHESIS task.

A sample of the audit trail created in the course of a consultation is shown in Figure 5; this is a snapshot of the TEST-HYPOTHESIS task after question 14 in the consultation excerpt. An example of how the general translations thus relate to the context of the consultation can be seen in the differing explanations for questions 4 and 14, both asked because an hypothesis was being tested.

    TEST-HYPOTHESIS
    STATIC PROPERTIES
      TRANS         : ((VERB decide) whether * has (VAR CURFOCUS))
      TASK-TYPE     : ITERATIVE
      TASKGOAL      : EXPLORED
      FOCUS         : CURFOCUS
      LOCALVARS     : (RULELST)
      CALLED-BY     : (METARULE393 METARULE400 METARULE171)
      TASK-PARENTS  : (GROUP-AND-DIFFERENTIATE PURSUE-HYPOTHESIS)
      TASK-CHILDREN : (PROCESS-DATA)
      ACHIEVED-BY   : (METARULE411 METARULE566 METARULE603)
      DO-AFTER      : (METARULE332)
    AUDIT TRAIL
      FOCUS-PARM : (INFECTIOUS-PROCESS MENINGITIS VIRUS CHRONIC-MENINGITIS MYCOBACTERIUM-TB)
      CALLER     : (METARULE393 METARULE400 METARULE? METARULE171 METARULE171)
      HISTORY    :
        (METARULE411 ((RULELST RULE423) (QUES 4 FEBRILE PATIENT-1 . RULE423)))
        (METARULE411 ((RULELST RULE060) (QUES 7 CONVULSIONS PATIENT-1 . RULE060)))
        (METARULE566 ((RULELST RULE525) (QUES ? TBRISK PATIENT-1 . RULE525)))
        (METARULE566 ((RULELST RULE366) (QUES ? FOCALSIGNS PATIENT-1 . RULE366)))
        (METARULE566 ((RULELST RULE309) (QUES ? COMPROMISED PATIENT-1 . RULE309)))
        (METARULE566 ((RULELST RULE002) (QUES 14 GRANULOMA-HX PATIENT-1 . RULE002)))

Figure 5: Sample Task Properties.

In order to generate explanations using an appropriate vocabulary for the user, we've identified general words and phrases used in the translations that have parallels in the vocabulary of the domain. At the start of a consultation, the user identifies himself as either a "domain" or "system" expert. Whenever a marked phrase is encountered while explaining the strategy, the corresponding domain phrase is substituted for the medical expert. For example, "triggers" is replaced by "is strongly associated with" for the domain expert.

IV LESSONS AND FUTURE WORK

The implementation of NEOMYCIN's explanation system has shown us several things.
We've found that for a program to articulate general principles, strategies should be represented explicitly and abstractly. They are made explicit by means of a representation in which the control knowledge is explicit, that is, not embedded or implicit in the domain knowledge, such as in rule clause ordering. In NEOMYCIN this is done by using metarules, an approach first suggested by Davis (Davis, 1976). The strategies are made abstract by making metarules and tasks domain-independent. We've seen that it is possible to direct a consultation using this general problem-solving approach and that the resulting explanations are, in fact, able to convey this strategy. As for the utility of explanations of strategy, trials show that, as one might expect, an understanding of domain level concepts is an important prerequisite to appreciating strategic explanations.

In regard to representation issues, we've found that if control is to be assumed by the tasks and metarules, all control must be encoded in this way. Implicit actions in functions or hidden chaining in domain level rules lead to situations which do not fit into the overall task structure and cannot be adequately explained. This discovery recently encouraged us to implement two low-level functions as tasks and metarules, namely MYCIN's functions for acquiring new data and for applying rules. Not only do the resulting explanations reflect more accurately the actual activities of the system, they're also able to convey the purpose behind these actions more clearly.

There is still much that can be done with NEOMYCIN's strategic explanations. We mentioned that our current level of detail includes every task and metarule. We'd like to develop discourse rules for determining a reasonable level of detail for a given user. We also plan to experiment with summarization, identifying the key aspects of a segment of a consultation or the entire session. We might also explain why a metarule failed, why metarules are ordered in a particular way, and the justifications for the metarules. An advantage of our abstract representation of the problem-solving structure is that when the same procedure is applied in different situations, the system is able to recognize this fact. This gives us the capability to produce explanations by analogy, another area for future research.

References

Aikins, J. S. Prototypes and Production Rules: A Knowledge Representation for Computer Consultations. PhD thesis, Stanford University, 1980. (STAN-CS-80-814).

Benbassat, J. and Schiffmann, A. An Approach to Teaching the Introduction to Clinical Medicine. Annals of Internal Medicine, April 1976, 84(4), 477-481.

Brown, J. S., Collins, A., Harris, G. Artificial Intelligence and Learning Strategies. In O'Neil (editor), Learning Strategies. Academic Press, New York, 1978.

Clancey, W. J. and Letsinger, R. NEOMYCIN: Reconfiguring a Rule-based Expert System for Application to Teaching, in Proceedings of the Seventh IJCAI, pages 829-836, 1981.

Clancey, W. J. The Epistemology of a Rule-based Expert System: a Framework for Explanation. Artificial Intelligence, 1983, 20(3), 215-251.

Clancey, W. J. The Advantages of Abstract Control Knowledge in Expert System Design. To appear in Proceedings of AAAI-83.

Davis, R. Applications of Meta-level Knowledge to the Construction, Maintenance and Use of Large Knowledge Bases. PhD thesis, Stanford University, July, 1976. (STAN-CS-76-552, HPP-76-7).

Elstein, A. S., Shulman, L. S., and Sprafka, S. A. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, Massachusetts: Harvard University Press, 1978.

Shortliffe, E. H. Computer-based Medical Consultations: MYCIN. New York: Elsevier, 1976.
Swartout, W. R. Explaining and Justifying Expert Consulting Programs, in Proceedings of the Seventh IJCAI, pages 815-822, 1981.

Winograd, T. Understanding Natural Language. New York: Academic Press, 1972.
A VARIATIONAL APPROACH TO EDGE DETECTION

John Canny
MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139

Abstract

The problem of detecting intensity changes in images is canonical in vision. Edge detection operators are typically designed to optimally estimate first or second derivative over some (usually small) support. Other criteria such as output signal to noise ratio or bandwidth have also been argued for. This paper describes an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of the detector. Variational techniques are used to find a solution over the space of all possible functions. The first criterion is that the detector have low probability of error, i.e. of failing to mark edges or falsely marking non-edges. The second is that the marked points should be as close as possible to the centre of the true edge. The third criterion is that there should be low probability of more than one response to a single edge. The third criterion is claimed to be new, and it became necessary when an operator designed using the first two criteria was found to have excessive multiple responses. The edge model that will be considered here is a one-dimensional step edge in white Gaussian noise, although the same technique has been applied to an extended impulse or ridge profile. The result is a one dimensional operator that approximates the first derivative of a Gaussian. Its extension to two dimensions is also discussed.

1. Introduction

Edge detection forms the first stage in a very large number of vision modules, and any edge detector should be formulated in the appropriate context. However, the requirements of many modules are similar and it seems as though it should be possible to design one edge detector that performs well in several contexts. The crucial first step in the design of such a detector should be the specification of a set of performance criteria that capture these requirements.

Some previous formulations have chosen the first or second derivative as the appropriate quantity to characterize step edges, and have formed optimal estimates of this derivative over some support. Examples of first derivative operators are the operators of Roberts (1965) and Prewitt (1970), while Modestino and Fries (1977) formed an optimal estimate of the two-dimensional Laplacian over a large support. Marr and Hildreth (1980) suggested the Laplacian of a broad Gaussian since it optimizes the trade-off in localization and bandwidth.

There is a second major class of formulations in which the image surface is approximated by a set of basis functions and the edge parameters are estimated from the modelled image surface. Examples of this technique include the work of Hueckel (1971) and Haralick (1982). These methods allow more direct estimates of edge properties such as position and orientation, but since the basis functions are not complete, the properties apply only to a projection of the actual image surface onto the subspace spanned by the basis functions. However, the basis functions are a major factor in operator performance, especially its ability to localize edges. Finally, none of the above methods considers the problem of nearby operators responding to the same edge.

In this paper we begin with a traditional model of a step edge in white Gaussian noise and try to formulate precisely the criteria for effective edge detection.
We assume that detection is performed by convolving the noisy edge with a spatial function f(x) (which we are trying to find) and by marking edges at the maxima in the output of this convolution. We then specify three performance criteria on the output of this detector.

(i) Low probability of error at each point. There should be a low probability of failing to mark real edge points, and a low probability of falsely marking an edge point. Since both these probabilities are monotonically decreasing functions of the output signal to noise ratio, this criterion corresponds to maximizing signal to noise ratio.

(ii) Good localization. The points marked as edges by the operator should be as close as possible to the centre of the true edge.

(iii) Only one response to a single edge. This is implicitly captured in (i), since when two nearby operators respond to the same edge, one of them must be considered a false edge. However, (i) considers a single output point and does not deal with the interaction between operators.

The first result of this analysis for step edges is that (i) and (ii) are conflicting and that there is a trade-off or uncertainty principle between them. Broad operators have good signal to noise ratio but poor localization, and vice-versa. A simple choice of mathematical form for the localization criterion gives a product of a localization term and signal to noise ratio that is constant. Spatial scaling of the function f(x) will change the individual values of signal to noise ratio and localization, but not their product. Given the analytic form of a detection function, we can theoretically obtain arbitrarily good signal to noise ratio or localization from it by scaling, but not simultaneously. It can be shown that there is a single best shape for the function f which maximizes the product, and that if we scale it to achieve some value of one of the criteria, it will simultaneously provide the maximum value for the other. To handle a wide variety of images, an edge detector needs to use several different widths of operator, and to combine them in a coherent way.

By forming the criteria for edge detection as a set of functionals of the unknown operator f, we can use variational techniques to find the function that maximizes the criteria. The second result is that the criteria (i) and (ii) by themselves are inadequate to produce a useful edge detector. It seems that we can obtain maximal signal to noise ratio and arbitrarily good localization by using a difference of boxes operator. The difference of boxes was suggested by Rosenfeld and Thurston (1971) and was used by Herskovits and Binford (1970), who first suggested a criterion similar to (i). If we look closely at the response of such an operator to a step edge, we find that there is an output maximum close to the centre of the edge, but that there may be many others nearby. We have not achieved good localization, because there is no way of telling which of the maxima is closest to the true edge. The addition of criterion (iii) gives an operator that has very low probability of giving more than one maximum in response to a single edge, and it also leads to a finite limit for the product of localization and signal to noise ratio.
The third result is an analytic form for the operator. It is the sum of four complex exponentials and can be approximated by the first derivative of a Gaussian. A numerical finite dimensional approximation to this function was first found using a stochastic hill-climbing technique. This was done because it was much easier to write the multiple response criterion in deterministic form for a numerical optimization than as a functional of f. Specifically, the numerical optimizer provides candidate outputs for evaluation, and it is a simple matter to count the number of maxima in one of the outputs. To express this constraint analytically we need to find the expectation value of the number of maxima in the response to an edge, and to express this as a functional on f, which is much more difficult. The first derivative of a Gaussian has been suggested before (Macleod 1970). It is also worth noting that in one dimension the maxima in the output of this first derivative operator correspond to zero-crossings in the output of a second derivative operator.

All further results are related to the extension of the operator to two (or more) dimensions. They can be summarized roughly by saying that the detector should be directional, and if the image permits, the more directional the better. The issue of non-directional (Laplacian) versus directional edge operators has been the topic of debate for some time; compare for example Marr (1976) with Marr and Hildreth (1980). To summarize the argument presented here: a directional operator can be shown to have better localization than the Laplacian, its signal to noise ratio is better, the computational effort required to compute the directional components is slight if sensible algorithms are used, and finally the problem of combining operators of several orientations is difficult but not intractable. It is, for example, much more difficult to combine the outputs of operators of different sizes, since their supports will differ markedly.

For a given operator width, both signal to noise ratio and localization improve as the length of the operator (parallel to the edge) increases, provided of course that the edge does not deviate from a straight line. When the image does contain long approximately straight contours, highly directional operators are the best choice. This means several operators will be necessary to cover all possible edge orientations, and also that less directional operators will be needed to deal with edges that are locally not straight.

Following the analysis, the author will outline some simple experiments which seem to indicate that the human visual system is performing similar selections (at some computational level), or at least that the computation it does perform has a similar set of goals. We find that adding noise to an image has the effect of producing a blurring of the image detail, which is consistent with there being several operator sizes. More interestingly, the addition of noise may enable perception of changes at a large scale which, even though they were present in the original image, were difficult to perceive because of the presence of sharp edges. Our ability to perceive small fluctuations in edges that are approximately straight is also reduced by the addition of noise, but the impression of a straight edge is not.

2. In One Dimension

We consider first the one dimensional edge detection problem. The goal is to detect and mark step changes in a signal that contains additive white Gaussian noise. We assume that the signal is flat on both sides of the discontinuity, and that there are no other edges close enough to affect the output of the operator.
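The following small simulation (our illustration; the operator widths, noise level, and threshold are arbitrary choices, not values from the paper) sets up exactly this model, a step in white Gaussian noise, and marks maxima in the convolution output for a difference-of-boxes operator and for a first derivative of a Gaussian; the former typically produces several nearby responses to the single edge, as discussed above:

    import numpy as np

    rng = np.random.default_rng(0)
    n, A, noise = 512, 1.0, 0.2
    signal = A * (np.arange(n) >= n // 2) + rng.normal(0.0, noise, n)  # step + white noise

    x = np.arange(-25, 26)
    box = -np.sign(x + 0.5)                         # difference-of-boxes operator
    dgauss = -x * np.exp(-x**2 / (2 * 8.0**2))      # first derivative of a Gaussian

    for name, f in [("difference of boxes", box), ("derivative of Gaussian", dgauss)]:
        out = np.convolve(signal, f, mode="same")
        # local maxima above half of the peak response
        m = (out[1:-1] > out[:-2]) & (out[1:-1] > out[2:]) & (out[1:-1] > 0.5 * out.max())
        print(name, "marks", int(m.sum()), "edge points")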
The detection criterion is simple to express in terms of the signal to noise ratio in the operator output, i.e. the ratio of the output in response to the step input to the output in response to the noise only. The localization criterion is more difficult, but a reasonable choice is the inverse of the distance between the true edge and the edge marked by the detector. For the distance measure we will use the standard deviation in the position of the maximum of the operator output. By using local maxima we are making what seems to be an arbitrary choice in the mapping from linear operator output to detector output. The choice is motivated by the need for a local predicate which allows the third criterion to be met.

Under these assumptions, the signal to noise ratio Σ can be shown to be (Canny 1983)

    Σ = (A/n_0) |∫_{-W}^{0} f(x) dx| / ( ∫_{-W}^{W} f²(x) dx )^{1/2},

where A is the amplitude of the input step, and n_0 is the root mean squared noise amplitude. The localization Λ, defined as the reciprocal of the standard deviation of the position of the true edge, is given by

    Λ = (A/n_0) |f′(0)| / ( ∫_{-W}^{W} f′²(x) dx )^{1/2}.

The derivation of the localization criterion assumes that the function f is antisymmetric. This ensures that the expectation value for the output maximum is at the centre of the input edge.

Analysis of these equations shows that their product is independent of the amplitude of f, and that the product is also independent of spatial scaling of f. However, the individual terms do change: if the width of f is increased by a factor w, signal to noise ratio Σ increases by √w while localization Λ is reduced by the factor √w. This is the uncertainty principle relating the two criteria. Another way to view this relationship is that we are simultaneously trying to estimate the amplitude and position of the input step, and that because of image noise we can accurately estimate one only at the expense of the other. Note also that both criteria improve as the signal to noise ratio of the image, A/n_0, improves.
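As a quick numerical check of this scale behaviour (our illustration; the Gaussian-derivative shape, grid resolution, and truncation at 5 standard deviations are arbitrary choices), the sketch below evaluates Σ and Λ, with the common factor A/n_0 dropped, for spatially scaled copies of one operator shape; Σ grows and Λ shrinks by √w while their product stays fixed:

    import numpy as np

    def criteria(f, df, W, m=20001):
        # Sigma and Lambda from the formulas above, with the A/n0 factor dropped
        x = np.linspace(-W, W, m)
        dx = x[1] - x[0]
        fx, dfx = f(x), df(x)
        sigma = abs(fx[x <= 0].sum() * dx) / np.sqrt((fx**2).sum() * dx)
        lam = abs(df(0.0)) / np.sqrt((dfx**2).sum() * dx)
        return sigma, lam

    for w in (1.0, 2.0, 4.0):                 # spatial scalings of the same shape
        s = 1.0 * w                           # Gaussian support scales with w
        f = lambda x, w=w, s=s: -(x / w) * np.exp(-x**2 / (2 * s**2))
        df = lambda x, w=w, s=s: ((x**2 / s**2) - 1) / w * np.exp(-x**2 / (2 * s**2))
        sig, lam = criteria(f, df, 5 * s)
        print(f"w={w}: Sigma={sig:.3f}  Lambda={lam:.3f}  product={sig*lam:.3f}")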
We then employ the calculus of variations to find the function f which maximizes the product of these criteria. For simplicity, we consider a fixed width function, and assume that it is non-zero only in the range [-W, W]. Also, since the function is known to be antisymmetric, we consider only the range [0, W]. The function f over this range can be shown to have a simple exponential form involving two undetermined constants a and α from the optimization. It turns out that both Σ and Λ increase with α, and that the localization improves without bound. The function f tends to a difference of boxes as α tends to infinity. The third frame of Figure 1 shows this operator and the fourth frame is its response to the noisy edge.

The analysis of signal to noise ratio so far has been concerned only with the probability of marking edges when near the centre of an input edge, and has not attempted to reduce the probability of marking edges given that a neighbouring operator has marked an edge. We can reduce the probability of marking multiple edges by constraining the distance between adjacent maxima in the response of the operator to noise. This distance is given by

    x_max = 2π ( ∫_{-W}^{W} f′²(x) dx / ∫_{-W}^{W} f″²(x) dx )^{1/2}.

When this constraint is added to the original two criteria, the solution of the variational problem becomes (in the range [0, W])

    f(x) = a_1 exp(αx) cos(ωx + θ_1) + a_2 exp(-αx) cos(ωx + θ_2) - a.

When the values of the constants a_1, a_2, θ_1 and θ_2 are solved for, we find that the above function can be approximated by the first derivative of a Gaussian, shown in the fifth frame of figure 1. It closely resembles Macleod's (1970) operator, and at least in one dimension it is equivalent to the second derivative zero-crossing operator of Marr and Hildreth (1980). However, in two dimensions the similarity to the zero-crossing operator ends.

Figure 1. Responses of difference of boxes and first derivative of Gaussian operators to a noisy step.

3. Two or More Dimensions

The step edge is the one dimensional projection of the boundary between two image regions of differing intensity. In two dimensions it is a directional entity, and the direction of the edge contour will be normal to the direction of principal slope at the edge. The direction of principal slope gives us a coordinate in which to project the two dimensional edge into a one dimensional step. In principle the projection can be done from any number of dimensions. The projection should use a smooth projection function, since this minimizes the rate at which the detected edge "zigzags" about its mean value.

The simplest implementation of the detector uses a projection function that is a Gaussian of the same width as the detection function. This greatly simplifies the computation for the operator, and enables it to be expressed as the composition of convolution with a symmetric two-dimensional Gaussian followed by the application of the non-linear differential predicate

    ∂²(G∗I)/∂n² = 0,   where n = ∇(G∗I) / |∇(G∗I)|,

G is a symmetric two-dimensional Gaussian, and I is the image. This operator actually locates either maxima or minima, by locating the zero-crossings of the second derivative in the edge direction; this is the derivative of the output of the operator in the projected coordinate. In practice only maxima in magnitude are used, since only these correspond to step edges. This form is readily extensible to higher dimensions, and the convolution with an n-dimensional Gaussian is highly efficient because the Gaussian is decomposable into n linear filters. Second directional derivative zero-crossings have been proposed by Haralick (1982).
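In discrete form, the predicate above amounts to suppressing any point whose gradient magnitude is not maximal along the gradient direction n. The following sketch is our illustration (a nearest-neighbour comparison rather than proper interpolation, with scipy's Gaussian filter standing in for the smoothing convolution), not the paper's implementation:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def directional_edges(image, width=2.0):
        smoothed = gaussian_filter(image.astype(float), width)   # G * I
        gy, gx = np.gradient(smoothed)                           # gradient of G * I
        mag = np.hypot(gx, gy)
        edges = np.zeros(mag.shape, dtype=bool)
        rows, cols = mag.shape
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                g = mag[i, j]
                if g == 0.0:
                    continue
                dj, di = gx[i, j] / g, gy[i, j] / g              # unit vector n
                ahead = mag[i + int(round(di)), j + int(round(dj))]
                behind = mag[i - int(round(di)), j - int(round(dj))]
                edges[i, j] = g >= ahead and g >= behind         # maximum along n
        return edges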
It is worthwhile here to compare the performance of this kind of directional second derivative operator with the Laplacian. First we note that the two-dimensional Laplacian can be decomposed into components of second derivative in two arbitrary orthogonal directions. If we choose to take one of the derivatives in the direction of principal gradient, we find that the operator output will contain one contribution that is essentially the same as the operator described above, and also one from a detector that is aligned along the direction of the edge contour. This second component contributes nothing to localization or detection, but increases the output noise by 60%. At best we can expect the Laplacian of Gaussian to be worse by 60% in localization and detection than the simplest form of the detector. This has been verified in experiments.

We can gain considerable improvements in Σ and Λ by extending the projection function. However, a simple decomposition into a convolution and a differential operator is no longer possible. Instead several operators of fixed orientations are needed. The more the operator is extended along the contour direction, the more directional it becomes, and the more operators are needed at each point to cover all possible edge orientations.

The detector has been implemented using masks with 6 and 8 discrete orientations. The results are very promising. The implementation still uses a decomposition strategy to obtain computational efficiency. The image is first convolved with a symmetric Gaussian, and x and y components of gradient are found. Then the various oriented masks are obtained by convolving the slope components with sparse linear masks in the mask direction. Typically there are only 7 multiplications per mask per point.

The problem of combining the different operator widths and orientations is approached in an analogous manner to the operator derivation. We begin with the same set of criteria and try to choose the operator that gives good signal to noise ratio and best localization. We set a minimum acceptable error rate and then choose the smallest operator with greater signal to noise than the threshold determined by the error rate. In this way the global error rate is fixed, while the localization of a particular edge will depend on the local image signal to noise ratio. The problem of choosing the best operator from a set of directional operators is simpler, since only one or two will respond to an edge of a particular orientation. The problem of choosing between a long directional operator and a less directional one is theoretically simple but difficult in practice. Highly directional operators are clearly preferable, but they cannot be used for locally curved edges. It is necessary to associate a goodness of fit measure with each operator that indicates how well the image fits the model of a linearly extended step. When the edge is good enough, the directional operator output is used and the output of less directional neighbours is suppressed.

A complete implementation of the detector using masks of differing widths and lengths exists, and the heuristics for combining operator outputs are being refined. One of the most interesting questions involves the combination of the outputs of operators of different widths, since these will respond to intensity changes at different scales. While in most cases it is desirable to choose one operator to represent the intensity change at a particular point in an image, there are cases where changes are superimposed, and two or more edges need to be marked at the point. In particular, this is necessary whenever shading edges occur over surface detail.

4. A Simple Demonstration

The similarity of the detection scheme described above to the human visual system was never a motivating factor in the design of the algorithm. But once the algorithm has been formulated, it is interesting to inquire whether there are similarities. One of the results of the analysis is that small operators are always to be preferred when their signal to noise ratio is high enough, and that larger operators are progressively chosen as the signal to noise ratio in the image decreases. We might ask if a similar switching occurs in the human visual system.

A well-known perceptual anomaly involves a coarsely quantized picture of a human face. Since the image is sampled very coarsely, the relevant perceptual information is at the low spatial frequencies. However, the quantization adds sharp edges at high spatial frequencies which tend to override the coarse information. The high frequency detail can be reduced by low-pass filtering, i.e. blurring of the image. In the present scheme the transition from narrow to broad filters can also be accomplished by reducing the image signal to noise ratio. Figure 2 shows a series of coarsely quantized images of a human face.
The only difference between the images is that they contain increasing amounts of Gaussian noise. Remarkably, the addition of noise makes the later images easier to identify as a human face. Figure 3 shows a series of lines with sinusoidally varying direction. The latter image (which is the same set of lines with noise added) is perceived by many as containing straight lines.

The second effect is a less convincing test of the hypothesis that the human visual system gives preference to highly directional operators. The lines are closely spaced, so that there will be improvement in signal to noise ratio with operator width only while the operator width is smaller than the projected distance between the lines on the retina. So in this context, improvements in signal to noise ratio can only be had by using more directional operators. The lines are approximately straight, so the directional operators will have sufficient "quality". Thus we would expect the highly directional masks (which are less sensitive to rapid changes in edge orientation) to be preferred, giving the impression of straight lines. An example of the output from two operators of different size on a textured image is shown in figure 4. The image contains both sharp intensity changes due to material boundaries and slow changes due to reflectance variation. The two operators differ by a factor of four in width, and each one has equal width and length.

5. Conclusions

This paper has outlined the design of an edge detector from an initial set of goals related to its performance. The goals were carefully chosen with minimal assumptions about the form of an optimal edge operator. The constraints imposed were that we would mark edges at the maxima in the output of a linear shift-invariant operator. By expressing the criteria as functionals on the impulse response of the edge detection operator, we were able to optimize over a large solution space, without imposing constraint on the form of the solution.

Using this technique with an initial model of a step edge in white Gaussian noise, we found that there was a fundamental limit to the simultaneous detection and localization of step edges. This led to a natural uncertainty relationship between the localizing and detecting abilities of the edge detector. This relationship in turn led to a powerful constraint on the solution, i.e. that there is a class of optimal operators, all of which can be obtained from a single operator by spatial scaling. By varying the width of this operator it is possible to vary the trade-off in signal to noise ratio versus localization, at the same time ensuring that for any value of one of the quantities, the other will be maximized.

The technique has also been applied to the derivation of optimal detectors for other types of feature, specifically for "roof" and "ridge" edges (Herskovits and Binford 1970). A ridge detector has been implemented and is being tested on images of printed text. It is also possible to apply the technique to non-white Gaussian noise models (Canny 1983). This can be done by deriving a "whitening" filter for the noise and designing an optimal detector for the feature which results from the application of the whitening filter to the actual feature.

6. References

Canny J. F. "Finding Edges and Lines in Images," S.M. Thesis, Dept. of Electrical Engineering and Computer Science, M.I.T., Cambridge Mass., 1983.

Haralick R. M. "Zero-crossing of Second Directional Derivative Edge Operator," S.P.I.E.
Proceedings on Robot Vision, Arlington Virginia, 1982.

Herskovits A. and Binford T. "On Boundary Detection," M.I.T. Artificial Intelligence Laboratory, Cambridge Mass., AI Memo 183, 1970.

Hueckel M. H. "An Operator Which Locates Edges in Digitized Pictures," JACM 18, No. 1 (1971), 113-125.

Macleod I. D. G. "On Finding Structure in Pictures," Picture Language Machines, S. Kaneff ed., Academic Press, New York, pp 231, 1970.

Marr D. C. "Early Processing of Visual Information," Phil. Trans. R. Soc. Lond. B 275 (1976), 483-524.

Marr D. C. and Hildreth E. "Theory of Edge Detection," Proc. R. Soc. Lond. B 207 (1980), 187-217.

Modestino J. W. and Fries R. W. "Edge Detection in Noisy Images Using Recursive Digital Filtering," Computer Graphics and Image Processing 6 (1977), 409-433.

Prewitt J. M. S. "Object Enhancement and Extraction," Picture Processing and Psychopictorics, B. Lipkin & A. Rosenfeld Eds, Academic Press, New York, pp 75-149, 1970.

Roberts L. G. "Machine Perception of 3-Dimensional Solids," Optical and Electro-Optical Information Processing, J. Tippett, D. Berkowitz, L. Clapp, C. Koester, A. Vanderbergh Eds, M.I.T. Press, Cambridge, pp 159-197, 1965.

Rosenfeld A. and Thurston M. "Edge and Curve Detection for Visual Scene Analysis," IEEE Trans. Computers C-20, No. 5 (1971), 562-569.

Figure 2. Coarsely sampled image of a human face with varying amounts of additive Gaussian noise.

Figure 3. Parallel lines with sinusoidal variation in direction and additive Gaussian noise.

Figure 4. Outputs from two operators on a textured image.
A DESIGN METHOD FOR RELAXATION LABELING APPLICATIONS

Robert A. Hummel
Courant Institute, New York University

ABSTRACT

A summary of mathematical results developing a theory of consistency in ambiguous labelings is presented. This theory allows the relaxation labeling algorithm, introduced in [Rosenfeld, Hummel, Zucker, 1976], to be interpreted as a method for finding consistent labelings, and allows specific applications to be tailored in accordance with intended design goals. We discuss, with a couple of examples, a design methodology for using this theory for practical applications.

I FOUNDATIONS OF RELAXATION LABELING

We begin by presenting a succinct summary of the theory developed in [Hummel and Zucker, 1983]. For details and mathematical proofs, the reader is referred to the references.

Let a_1, ..., a_n denote distinct objects, and λ_1, ..., λ_m be a set of possible labels. The goal is to assign one label to each object. In most practical systems, measurements are made to describe each object a_i, and a most probable label is assigned independent of the labels assigned to neighboring objects. Relaxation labeling is an iterative method for incorporating context and local consistency in weighted labeling assignments. In this model, a nonnegative weight p_i(λ) is assigned for each label λ at every object a_i, normalized by the n conditions Σ_λ p_i(λ) = 1, i = 1, ..., n. The weighted labeling assignment p_i(λ) denotes a confidence level for the assignment of label λ at object a_i. The concatenation of the weighted assignment values comprises the assignment vector p = (p_1, ..., p_n), with p_i = (p_i(λ_1), ..., p_i(λ_m)). The space of possible assignment vectors p is the assignment space K. An unambiguous assignment is an assignment vector satisfying p_i(λ) = 0 or 1 for all i and λ. Because of the normalization condition, an unambiguous assignment gives a weight of 1 to exactly one label at each object. We denote the set of all unambiguous assignments by K*, and note that K is the simplex formed by the convex hull of K*.

To apply the relaxation labeling method, constraints between neighboring labels are specified by a set of weighted preferences, given by support functions s_i(λ; p), which are functions of the labeling assignment p. A separate function is given for each label at each object. The support s_i(λ; p) can be positive or negative, and denotes the support which the current mix of weighted assignments lends to the proposition that object a_i has label λ. In nearly all applications to date, the support functions are linear in p:

    s_i(λ; p) = Σ_j Σ_λ' r_ij(λ, λ') p_j(λ'),

and depend principally on the assignments p_j(λ') at objects a_j near object a_i. It is of course essential to know where the support functions come from. In first stating the theory, it is enough to suppose that the s_i(λ; p)'s are "God-given". However, the principal task confronting the designer of a relaxation labeling application is the definition of formulae for computing support functions. In Section III, we show how the distinction between consistent and inconsistent labelings can be used to constrain the choice of support functions.

We must first make the distinction precise. By using support functions, we can extend the usual notion of consistency (as defined, for example, in [Mackworth, 1977] and [Haralick and Shapiro, 1979]) to reflect a quantitative system of preferences. We state here the definition for unambiguous labelings, and refer the reader to [Hummel and Zucker, 1983] for a discussion and the weighted labeling assignment version. Suppose p ∈ K*, which is to say that p assigns to each object a_i a single label λ_i, signified by p_i(λ_i) = 1. The unambiguous labeling p is consistent iff

    s_i(λ_i; p) ≥ s_i(λ; p)   for all λ and i.

That is, at each object the maximum support value is attained for the label actually assigned to the object.

The relaxation labeling algorithm, defined precisely in [Hummel and Zucker, 1983], is an iterative process for updating the initial weighted labeling assignments in the assignment vector p^0 to achieve a consistent assignment. Heuristically, the idea is to continuously increase p_i(λ) if s_i(λ; p) is positive, and to
vector satisfying p,(R) = 0 or 1 for Because of th; normalization condition, an unambigous assignment gives a weight of 1 to exactly one label at each object. We The relaxation labeling algorithm, defined precisely in [Hummel and Zucker, 19831, is an iterative process for updating initial weighted labeling assignments in the assignment vector p" to achieve a consis tent assignment. Heuristically, the idea is to continuously increase pi(R) if ~~(2;;) is positve, and to denote the set of all unambiguous assignments by K*, and note that K is the simplex formed convex hull of K*. by the To apply the relaxation labeling method, constraints between neighboring labels are From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. decrease pi(g) if si(R;f;) is negative, subject t-0 the constraint that i; space K. On the kth remain in the assignment iteration of the algorithm, the current assignment vector Fk is updated by first computing the updating vector ck E whose components oqmm, -k are given by q;(X) = si(R;p ), then projecting q onto the set to the space K at the point uk Fk tangent to yield a tangent direction and direction iik' by setting i; f ina 1 lg+l st;gpyfzk, izhe rL"," is a sufficiently small positive constant. This process is repeated until convergence. The projection operator required to obtain Ek from qk is described in [Mohammed, Hummel, Zucker, 19831. The formulation of relaxation labeling outlined above leads to a number of theorems, proved in [Hummei and Zucker, 19831. Two results of particular relevance here: I. If the relaxation labeling process stops at p, then '; is a consistent labeling. II. A strictly consistent unambiguous labeling (Si(Ri;i;) > si(!Z;i;) for R # Ri , all i) is a local attractor of the relaxation labeling algorithm. II DESIGNING SUPPORT FUNCTIONS Suppose that objects and label sets have been identified, and that formulae for support functions are required. The following method is suggested. Certain patterns of unambiguous labelings can be identified as consistent labelings. Suppose, for example, that an unambiguous assignment i -t R. , denoted succinctly by R' = (Rl...R,), is to be vtewed as consistent. Let F be the corresponding unambiguous assignment vector. Then we want the inequalities Si(Ri;Ij) 2 Si(R;p) t0 be satisfied. If the support functions are to be designed as linear functions of I;, these conditions can be written as z rij(Ri,R j) 2 1 rij(R,Rj), all 89 i. j j Suppose that Q1,...,rN are N distinct patterns of labelings which are deemed to be consistent. Then we want z ri j(%y,Rr) a z rij(R ,R!j) . to be satisfied for all R, ill i, for k = l,...,N. These conditions may constitute a large number of linear inequalities in the set of variables rij(E ,R '1. The system of inequalities may have no nontrivial solution, in which case it is impossible to design linear support functions with Q1 P".S IcN as consistent labeling patterns. However, if the system has a nonempty nontrivial solution set, then any assignment of values to the rij(R 3% '1 's satisfying the inequalities is called a feasible solution. In this case, linear programming methods (such as the simplex method) can be used to find feasible solutions. 
If the coefficients are chosen from the interior of the set of feasible solutions, then the solution is called strictly feasible, and Result II from the previous section can be used to show that the given patterns Λ^1,...,Λ^N will then correspond to unambiguous labelings which are local attractors of the relaxation labeling process.

It may well happen that a strictly feasible solution for the r_ij(λ, λ')'s will yield other consistent unambiguous labelings not represented in the design pattern set. This may be undesirable, and will require a search for a feasible solution which minimizes the problem of spurious consistent labelings. The second example given below illustrates a method for accomplishing this search.

IV EXAMPLES

Suppose that the graph of objects is given by a hexagonal grid, so that each object is equidistant from its six neighbors. Consider the simplest case of two labels. We suppose that the following local patterns are consistent:

     1 1      2 2      1 2      2 1
    1 1 1    2 2 2    1 1 2    2 2 1
     1 1      2 2      1 1      2 2

Further, we assume that the relationship of consistency is "isotropic," that is, a rotation of a consistent labeling is consistent. Since the labels 1 and 2 are treated symmetrically, we can also assume that r_ij(1,1) = r_ij(2,2) and r_ij(1,2) = r_ij(2,1). Finally, we assume that the r_ii's are zero, and that the r_ij's are independent of i and j as long as i and j are distinct neighboring objects. Thus only two parameters are sought: a = r_ij(1,1) and b = r_ij(1,2), for i and j distinct neighbors. Applying the conditions for strict consistency to each of the patterns listed above leads to the single condition a > b.

What other unambiguous local patterns are consistent in this scheme? It is not hard to show that (for a > b) a local pattern with a central object labeled "1" is strictly consistent if the number of the six neighbors with a "1" label, n_1, is greater than the number of neighbors with a "2" label, n_2. Similarly, "2" is consistent for the central object if n_2 > n_1. A global unambiguous labeling will be consistent if every object is labeled with the majority label as voted by its six neighbors. Since this condition holds at every object, strictly consistent labelings consist of strips of 1's and 2's with straight parallel interfaces between the regions.

We next present a slightly more complicated example. However, practical situations will generally be much more complex than either of our examples. This time, consider a hexagonal array of objects with three labels. The labels "1" and "2" are regarded as "region types", and label "3" denotes "edge between 1's and 2's." A pattern of constant 1's, a pattern of constant 2's, and a region of 1's separated from 2's by a line of 3's are consistent labelings:

     1 1      2 2      3 2
    1 1 1    2 2 2    1 3 2
     1 1      2 2      1 3

We must also regard the remaining boundary pattern, a central "1" abutting the line of 3's (four "1" neighbors and two "3" neighbors), as consistent. As before we treat the labels 1 and 2 symmetrically, and assume isotropy. Five parameters arise: a = r(1,1) = r(2,2), b = r(1,2) = r(2,1), c = r(3,3), d = r(1,3) = r(2,3), and e = r(3,1) = r(3,2). Here i and j are suppressed, since objects i and j can only be distinct pairs of neighbors. Note also that the r's are not necessarily symmetric: if d ≠ e, then regions can influence borders differently than borders influence regions. Each consistent pattern yields two inequalities to constrain the five parameters. From the constant patterns, we deduce that a > b and a > e. From the pattern with a central line of 3's, 4e + 2c > 2a + 2b + 2d, and from the remaining pattern we obtain 2a + d > 2e + c. Combining, we have

    a > b,   a > e,   and   (a−e) + (b−e) < c−d < 2(a−e).
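The consistency checks in these examples are just support comparisons at a single cell. The following sketch (our code; parameter names follow the text) verifies strict consistency of the central-3 boundary pattern under the values a = 1, b = −1, e = 0, c − d = 1 that are adopted just below.

    def support(center_label, neighbor_labels, r):
        """Support for center_label at a hexagonal cell with the given six
        neighbor labels, under isotropic compatibilities r[k][k']."""
        return sum(r[center_label][k] for k in neighbor_labels)

    # A feasible choice per the construction below: a=1, b=-1, e=0, and
    # c-d = 1 (say c=1, d=0).
    a, b, c, d, e = 1.0, -1.0, 1.0, 0.0, 0.0
    r = {1: {1: a, 2: b, 3: d},
         2: {1: b, 2: a, 3: d},
         3: {1: e, 2: e, 3: c}}

    line_of_3s = [3, 2, 1, 2, 1, 3]       # central "3" on the 1/2 boundary
    print(support(3, line_of_3s, r),      # 2c + 4e = 2.0 ...
          support(1, line_of_3s, r))      # ... beats 2a + 2b + 2d = 0.0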
It is easy to see that feasible solutions of these inequalities exist and can be readily constructed. In particular, choose any positive value for a', then choose b' < a', and e arbitrary. Set a = a' + e and b = b' + e, and finally choose a value for c − d between a' + b' and 2a'. The values for c and d can then be selected so that their difference is the specified value. The designated patterns, and all their rotations, will be strictly consistent under the compatibilities of any feasible solution.

We will now try to select a feasible solution which gives rise to as few other consistent labeling patterns as possible. Let n_k denote the number of neighbors of a central point of a hexagonal cell having label k, for k = 1, 2, or 3. The label "1" at the central object is part of a consistent labeling if its support, a·n_1 + b·n_2 + d·n_3, is greater than the support for the labels "2" and "3", i.e., b·n_1 + a·n_2 + d·n_3 and e·n_1 + e·n_2 + c·n_3. From the two inequalities, and the fact that n_1 + n_2 + n_3 = 6, we deduce that "1" is consistent if

    n_1 > n_2   and   [(c−d) + (b−e)]·n_3 < (a−b)·n_1 + 6(b−e).

Similarly, "3" is a consistent label for the central object of a local pattern if

    [(c−d) + (b−e)]·n_3 > (a−b)·max(n_1, n_2) + 6(b−e).

Let us arbitrarily choose a = 1 and b = −1. Then e = 0 makes sense, since "3" and "1" can co-occur. Having chosen a, b, and e, then 0 < c−d < 2. Suppose we choose c−d = 1. Then, applying the conditions above, a central "1" is consistent if n_1 > n_2 and n_1 > 3; similarly for "2". For a "3" to be consistent, it suffices to have max(n_1, n_2) < 3. Thus a pattern with a "1" or "2" in the center is consistent if there are four or more of the same label type in the neighborhood. A central "3" is consistent as long as there are two or fewer "1"'s and two or fewer "2"'s in the neighborhood. This seems reasonable. However, note that the pattern

     1 2
    1 x 2
     1 1

is consistent with x = 1 under the above choice of values. We would prefer the support for x = 3 to be higher than the support for x = 1, to justify the interpretation of label "3" as "edge". Thus we would like 6e > 4a + 2b. Having chosen a = 1, b = −1, we now see that e > 1/3 is desirable. Let e = 2/3, whence −4/3 < c−d < 2/3. By varying the value of the one parameter c−d, different behavior of the patterns of consistency can be selected. For example, suppose we select c−d = 1/2. Then a case analysis yields: a label "1" is consistent only if there are four or more "1" labels among the six neighbors, and no "2" labels; a "3" label is consistent if there are three or more "3" labels among the neighbors, or if there are four of label type "1" or "2" and at least one of the other type.

This example illustrates how an initial assignment of values to the r_ij(λ, λ')'s, obtained as a feasible solution, can be refined to give more desirable behavior by the addition of a constraint. In this case, the additional inequality arises when we decide to reject a spurious consistent pattern, from the statement that a particular label should have greater support than the otherwise consistent label.

REFERENCES

[1] Haralick, R. M., and L. Shapiro, "The consistent labeling problem: part I," IEEE Trans. PAMI 1 (1979), p. 173.

[2] Hummel, R. A., and S. W. Zucker, "On the foundations of relaxation labeling processes," IEEE Trans. PAMI 5 (1983), p. 267.

[3] Mohammed, J. L., R. A. Hummel, and S. W. Zucker, "A gradient projection algorithm for relaxation methods," IEEE Trans. PAMI 5 (1983), p. 330.

[4] Mackworth, A. K., "Consistency in networks of relations," Artificial Intelligence 8 (1977), p. 99.

[5] Rosenfeld, A., R. A. Hummel, and S. W. Zucker, "Scene labeling by relaxation operations," IEEE Trans. on Systems, Man, and Cybernetics 6 (1976), p. 420.
John R. Kender
Department of Computer Science, Columbia University
New York, NY 10027

Abstract

This paper demonstrates how image features of linear extent (lengths and spacings) generate nearly image-independent constraints on underlying surface orientations. General constraints are derived from the shape-from-texture paradigm; then certain special cases are shown to be especially useful. Under orthography, the assumption that two extents are equal is shown to be identical to the assumption that an image angle is a skewed symmetry (i.e. the scene angle is a right angle), or, if the image extents are assumed equal and parallel, extent again degenerates into some form of slope. In the general perspective case, the shape constraints are usually complex fourth-order equations, but they often simplify, even to graphic constructions in the image space itself. If image extents are assumed equal and colinear, the constraint equations reduce to second order, with several graphic analogs. If the extents are adjacent as well, the equations are first order and the derived construction (the "jack-knife method") is particularly straightforward and general. This method works not only on measures of extent per texel, but also on reciprocal measures: texels per extent. Several examples and discussion indicate that the methods are robust, deriving surface information cheaply, without search, where other methods must fail.

1 Introduction

Linear extents are measurements along a straight image line of either objects (in which case they are lengths) or virtual objects (in which case they are spacings). The exact form of the input to these analyses can vary; a prior edge-detection and linking step, or a segmentation-like step, is assumed. Lengths are then linear measures of image tokens such as elongated blobs, and spacings are linear measures of the virtual lines between image tokens. Spacing behaves the same way as length does; often it is more conveniently available.

In general, this paper follows the image understanding conventions presented in, among other places, [Kender 80a]. That is, the image coordinate system considers the z axis to be positive in the direction of view; the image itself is the plane z = 1, which has been rotated in front of the lens at the origin; and the unit of length in the system equals the focal length of the lens. Surfaces in the scene are locally represented by planes z = px + qy + c; the surface gradient is represented by the point (p,q) in the gradient space.

The problem of deriving surface information from textural and regularity assumptions proceeds in two steps. First, the textural element (in this case an image extent) is backprojected onto all possible surface patches, and a map (the "normalized texture property map", or NTPM) of the scenic measure of the component is recorded as a function of the surface's parameters. The recovered scene extents are usually a function of the image extent's position and the surface's gradient. In the second step, two or more nearby textural elements are assumed to be equal in measure in the scene. Mathematically, this means that the maps can be intersected to find those surface patch parameters that generate for each texel the same measure (that is, the same texture).

2 Extents under Orthography
For the cases of spatial extent under orthography, suppose that an image contains two extents arising from parallel extents in the scene; this parallelism is carried over into the image. Under orthography, either extent can be translated into superimposition on the other. Thus, if they are of equal extent, they and their NTPMs exactly coincide and no further information about the scene is obtained. If they are unequal in extent, they will superimpose with unequal overlap; there can be no surface that makes them equal. Thus parallelism of equal extents under orthography provides no information about surfaces, except in this weak, negative fashion.

Now suppose the extents arise from non-parallel equal extents in the scene. This situation is more interesting: the image extents can be translated so that a pair of their ends meet and form an angle. The resulting constraint equation is a messy one in terms of their lengths, their joint angle, and first and second powers of p and q. However, it is not difficult to prove that the constraint on surface orientation that it induces can be graphed as a hyperbola in the gradient space. The following construction shows that the hyperbola is the Kanade hyperbola [Kender 80b], which usually arises under the assumption that a given image angle is caused by a scene right angle. Consider Figure 1, in which the two image extents forming their angle have been closed off with the addition of a line. It is well known that orthography preserves midpoints of lines; thus the image figure, with yet another line connecting the vertex to this midpoint, can be seen as a scene isosceles triangle in projection. Given this, the angle formed by the altitude to the base in the scene must be a right angle: this is the Kanade assumption. The surface constraint then is identically derived. Therefore equal extents in the image under orthography either give trivial results, or reduce to already known cases of image slope and angle.

Figure 1: Equal extent is skewed symmetry.

3 Extents under Perspective

The length of the induced surface extent is calculated in the scene by the usual Euclidean metric, yielding a complex NTPM. Theoretically, this function is usable in its raw form. That is, given two extents in an image under central perspective, it is possible to generate the appropriate NTPMs for both (subject to their position and orientation), and to intersect their graphs, as if they were Hough accumulator arrays. The result would be a small set of surface orientations which would simultaneously normalize the two induced surface extents to equal measure. However, in nearly all cases, this involves the solution of constraint equations that are of fourth order in p and q. Only a few image configurations generate simpler surface constraints. The ones that do simplify have the added benefit that they appear to be relatively common.

3.1 Equal and Parallel

First assume that the image extents arose from scene components that were not only equal in measure, but were parallel on the scene surface. A simple construction will show that once again the image configuration can be handled solely by considerations of image slope. Two equal and parallel scene lines form a parallelogram; in the image, their pairs of sides can be extended to derive two vanishing points. Each vanishing point implies a linear constraint in the gradient space: if an image point (x,y) is a vanishing point of a surface, then the surface must have a gradient (p,q) which satisfies px + qy = 1 ([Shafer 83]). Two such linear constraints uniquely define a vanishing line, and in turn uniquely define the surface orientation.
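A minimal sketch of this last step (our code; the constraint px + qy = 1 is the paper's):

    import numpy as np

    def gradient_from_vanishing_points(v1, v2):
        """Surface gradient (p, q) from two vanishing points (x, y) of a
        parallelogram's extended sides; each gives p*x + q*y = 1."""
        A = np.array([v1, v2], dtype=float)
        return np.linalg.solve(A, np.ones(2))

    print(gradient_from_vanishing_points((2.0, 0.0), (0.0, 4.0)))  # [0.5, 0.25]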
3.2 Equal and Colinear

Assume now that the image extents did not arise from parallel scene extents. There seems to be only one other simplifying set of cases: those in which the scene components are colinear. Interestingly, these cases do not reduce the problem to one of image slopes again, as colinear extents have only one slope held in common. The images of colinear scene components are also colinear. The reverse is not true, though the heuristic positing of that truth often is most useful. It would be yet another preference heuristic, similar to those used in other contexts in image understanding: nearby image pixels arise from actual scene patch neighbors (shape from shading), near-right angles arise from scene right angles (skewed symmetry), near-parallels arise from parallels (one form of shape from texture), etc.

The image configuration in the most general case reduces to the following. Four points lie on a horizontal image line at height y; they are A = (a,y), with B, C, and D defined similarly. These four points define two image extents, L = (b−a) and R = (d−c), respectively. The assumption of colinearity allows the NTPMs of the extents to be put into correspondence easily: they are already in the proper orientation, due to the one shared image slope. Since the NTPMs also share identical terms in (p,q), equating them yields a surface constraint that reduces to second order in p and q:

    (1−pa−qy)(1−pb−qy)/L = (1−pc−qy)(1−pd−qy)/R

Although this equation can be exactly solved, it has a simplifying graphic construction that can be drawn in the image space itself, directly yielding the vanishing point(s). Rewrite it in the following form:

    (X−a)(X−b)/L = (X−c)(X−d)/R,   where X = (1−qy)/p

If X satisfies the constraint equation, then the scene extents are equal, as desired. Further, this is a very desirable X: it also satisfies the formal definition pX + qy = 1, that is, the point (X,y) is a vanishing point. Note that (X,y) lies on the line of colinearity; all that must be calculated is the value of X itself. Formally, the equation is of the form of the intersection of two parabolae. The left parabola has value 0 at both a and b, and a minimum value of −L/4 midway between them. The right parabola is exactly of the same shape, except for scaling (its midpoint minimum is −R/4). Thus, the value of X can be graphically determined by drawing the parabolae on the image, and finding their intersection. (Notice that the mathematics, as well as the construction, finds a vanishing point between b and c, where the image lengths are on opposite sides of any vanishing line.)
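The exact solution is just the roots of a quadratic; a sketch (our code, the paper's equation):

    import numpy as np

    def colinear_vanishing_points(a, b, c, d):
        """Solve (X-a)(X-b)/L = (X-c)(X-d)/R with L = b-a, R = d-c;
        each real root X places a candidate vanishing point at (X, y)."""
        L, R = b - a, d - c
        # R(X-a)(X-b) - L(X-c)(X-d) = 0, expanded into quadratic coefficients.
        coeffs = [R - L,
                  L * (c + d) - R * (a + b),
                  R * a * b - L * c * d]
        return [z.real for z in np.roots(coeffs) if abs(z.imag) < 1e-9]

    # Adjacent case [0,2], [2,3]: roots are X = 6 (the vanishing point)
    # and X = 2 (the degenerate shared endpoint).
    print(colinear_vanishing_points(0.0, 2.0, 2.0, 3.0))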
The parabola method can be refined in the following way. The parabolae are only constrained to pass through the point pairs; their exact shape is not critical, as long as the parabolae are similar (i.e. mutually scaled). Further, since the value of X is a purely formal one, the parabolae can be imagined to be drawn off of the image plane: that is, either parabola can be thought of as extending into the −z axis direction. More appropriately, the value of X on either parabola can be considered as an image feature in its own right. The calculation is really a type of local feature assignment, with each position on the line of colinearity being assigned two simultaneous features. That position where the features are identical is the vanishing point.

Parabolae grow very quickly, however. This can be compensated for formally by taking the square root of the image feature. The assignment of values is now via hyperbolae of similar shape, which grow approximately linearly. They also have the aesthetic advantage of being undefined within the image extents themselves, the interior of which is one place where a vanishing point ought not be. In a pinch, the hyperbolae can also be approximated by their asymptotes, which, being strictly linear, are easier to compute. For example, the left hyperbola is sqrt((X−a)(X−b)/L); its asymptotes originate at the left texel's midpoint, and have slopes of ±sqrt(1/L) (see Figure 2). Still other modifications and approximations of this formal equation are possible; they would have to be analyzed for accuracy and computational efficiency.

Figure 2: The hyperbola and asymptote methods.

3.3 Equal, Colinear, and Adjacent

The last special case is the simplest, but perhaps the most powerful. Suppose that two colinear and adjacent image extents are derived from two colinear, adjacent, and equal scene components. That is, as in Figure 3, the points B and C have merged. Then the constraint given for the general four-point colinear case simplifies even further, since B = C, to a linear constraint in p and q:

    (1−pa−qy)/L = (1−pd−qy)/R

By the same formal method as above, it can be rewritten as:

    (X−a)/L = (X−d)/R,   where X = (1−qy)/p

Either side is the equation of a line. With exactly the same flexibilities as the parabola scheme above, these lines can be plotted in the image space (see Figure 3). That is, they can extend out of the image in the −z direction; they can be mutually scaled; X can again be considered an image feature, labeling each position on the line of colinearity with a two-tuple of features. As before, the vanishing point occurs where the features are equal; this occurs at X = (Ld − Ra)/(L − R).

Figure 3: "Jack-knife" method for vanishing points.

The X so assigned really does implement an image feature: it is scaled inverse depth. These methods are formal; as with the parabola method, other modifications of the constraint equation are possible as well. It should be noted that the jack-knife equation can also be derived from the application of methods of projective geometry: either through the cross-ratio, or through the appropriate nine-point geometric construction. The parabola method apparently cannot, however, as it deals with five points at a time.
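A sketch of the jack-knife computation (our code; the closed form is the paper's):

    def jackknife_x(a, d, L, R):
        """Solve (X - a)/L = (X - d)/R for the vanishing point coordinate X
        along the line of colinearity."""
        if abs(L - R) < 1e-12:
            return float('inf')        # equal image extents: frontal case
        return (L * d - R * a) / (L - R)

    # Adjacent extents [0,2] and [2,3]: the farther extent is foreshortened,
    # so the vanishing point lies beyond it, at X = (2*3 - 1*0)/(2 - 1) = 6.
    print(jackknife_x(0.0, 3.0, 2.0, 1.0))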
3.4 A Reciprocal Method

The jack-knife method has an interesting extension. The primary heuristic assumption required for its use is only that image extents arise from equal surface extents; however, what is meant by extent can be defined in many ways. In particular, a series of N extents laid colinearly end to end on a surface can be considered either as one extent of length N or as N extents of length one (or many other combinations). Runs of extents can be obtained by looking for multiple repeated distinguishing events along an arbitrary line through the image (strong edges of the same polarity, say). The jack-knife method, as given, would try to normalize the extent of the entire run. But under the assumption that the events form a texture, the method can be extended to normalize each event as well. It does this by simply dividing the normalized run extent by the event count, to get the "average" extent of a unit event. Thus, given a run of events, the extended method divides:

    ((1−pa−qy)/L)/l = ((1−pd−qy)/R)/r,   where l and r are event counts.

Note that the run can be split in many places, and that the modified equation can be solved by any of the graphic techniques given under the jack-knife method (with L and R appropriately changed to L/l and R/r, respectively). The optimal ways to split the run would have to be analyzed. The jack-knife method is based on a measure of extent-per-texel; this reciprocal method uses texels-per-extent. The reciprocal method has many advantages. Detecting the events can be done by detectors of fixed image size and location. Within each detector, event counts can be recovered by simple pattern recognition techniques. The final computation is simple. In effect, the shape constraints under this method come from simple feature detectors.
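The reciprocal variant only rescales the jack-knife inputs; continuing the sketch above (our code):

    # Runs [0,2] and [2,3] holding l and r repeated texture events each:
    # normalizing the average event extent is the jack-knife on L/l, R/r.
    a, b, d, l, r = 0.0, 2.0, 3.0, 2, 1
    print(jackknife_x(a, d, (b - a) / l, (d - b) / r))
    # inf: the average event extents are equal, so the method reports a
    # frontal (infinite) vanishing point for this split.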
justified !$ere.isollted scenes L well-defined necessary Exce~-$C;: surfaces sophistication is in apportioning belief’ amongst the results of several such methods’ simultaneously calculated results. Inte rating the algorithms’ computed vanishing points--crrorfu and often 7 contradictor --into a single surface synthesis appears to be a major cha lenge. i; Acknowledgements I thank Kcrny Calawa for her Moerdler implemented the raphic skills. hIark a gorithm s rown 7 5 in Figure 4. References L Bajjcsy 761 Rajcs ‘radient as a Dept E R., and Lieberman L. “Texture ‘Cue.” Computer C?rayhics and I~l~ge Processing 5, 5 (March 19r6), 52-67. L Kender 8Oa] Kender J.R. Shnye fro??a Textuf-e. h.D. Thesis, Carnegie-&lellon IJnivcrsity Computer Science Department, Nov. 1980. f Kender 8Qbl Kender J.R., and Kanade, T. Mapping mage Properties into Shape Constrain&: Skewed S d mmetry, Affine-Transformable Patterns, and the rape-from-Texture Paradigm. Proceedings of the First Annual National Conference on Artificial Intelligence, American AssociXion for ArtGficial Intelligence, Aug., 2980, pp. 4-6. L Stevens 791 Stevens K A oral n9ldysis 0 c Tez;ure ;,~~-;gpg$yyp$ h/TIT Artificial In elligence Lab., Feb. 1979. Available’& AI-TR-5 12 190
MODEL-BASED INTERPRETATION OF RANGE IMAGERY

Darwin T. Kuan    Robert J. Drazovich
Advanced Information & Decision Systems
201 San Antonio Circle, Suite 286
Mountain View, CA 94040

ABSTRACT

This paper describes a model-based approach to interpreting laser range imagery. It discusses the object modeling, model-driven prediction, and feature-to-model matching aspects of the problem. The model objects are represented by a viewpoint-independent volumetric model based on generalized cylinders. Predictions of 3-D image features and their relations are generated from object models on multiple levels. These predictions give guidance for goal-directed shape extraction from low level image features. Interpretation proceeds by comparing the extracted image features with object models in a coarse to fine hierarchy. Since 3-D information is available from the range image, the actual measurements are used for feature-to-model matching. A limited prototype system has been developed, preliminary results on prediction and interpretation are shown, and future research directions are discussed.

I INTRODUCTION

Range images offer significant advantages over passive reflectance images because they preserve the 3-D information of the scene viewed from the sensor. While 3-D information can be obtained from 2-D images only with extensive inference (due to the ambiguities introduced by the 2-D projection of the 3-D scene and the additional effects of illumination, surface reflectivity, and geometric shape), it can be easily calculated from 3-D range images. Therefore, range data is becoming an increasingly important source of information for a variety of applications including automatic 3-D target classification, autonomous vehicles, robot vision, and automatic inspection. In this paper we discuss 3-D object recognition for vehicle objects in air-to-ground laser range imagery.

Much of the past work on range data analysis has emphasized a data-driven approach [5], [7]. The ACRONYM system [2] is a powerful model-based vision system capable of doing symbolic reasoning among 3-D models and 2-D images. In this research, we have attempted to extend the principles of the ACRONYM approach to the analysis of 3-D imagery.

*Research sponsored by the Air Force Office of Scientific Research (AFSC), under Contract F49620-82-C-0071. The United States Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation herein.

This paper describes a model-based approach to interpreting laser range imagery. The full system includes 3-D image feature extraction, geometric object modeling, model-driven prediction, and image feature-to-model matching. The overall structure of the system is shown in Figure 1. Further discussion of this system is provided in [4]. The extraction of low level 3-D image features from the range data was previously reported in [3]. This paper emphasizes the object modeling, model-driven prediction, and image feature-to-model matching aspects of the system.

In the geometric modeling system (see Figure 1), object models are represented by a single, viewpoint-independent representation based on generalized cylinders. Model priority index information and attachment relations are explicitly specified in the models to facilitate 3-D image feature prediction and extraction.
The prediction process predicts physical edge types (occluding, convex, or concave edges), surface properties (planar or curved), cylinder contour, cylinder obscuration (visible, occluded), and invariant shape properties (parallel, collinear, connectivity). These knowledge-based predictions are made on multiple levels and are very powerful for directing the feature extraction algorithms' search for particular features in a limited region. The object classification task proceeds from coarse to fine by first comparing gross object features (e.g., object length, height, extreme points, etc.) and then finer component features (e.g., cylinder volume, position, orientation, etc.) extracted from laser imagery with a model, using a set of rules that produces a likelihood value to indicate the goodness of match. Since 3-D information is available from the range image, the actual measurements (e.g., length, width, volume) are used for matching.

II MODEL REPRESENTATION

The system uses a viewpoint-independent volumetric representation for 3-D object modeling. The volume primitives we use are generalized cylinders [1]. A generalized cylinder is defined by a space curve, called the axis, and planar cross-section functions defined on the axis. A complex object is represented in terms of a set of individual cylinders and their spatial relations.

Figure 1: Major Components of a 3-D Object Classification System (geometric modeling system, model-driven prediction, feature extraction from the laser range image, and target classification).

The 3-D object model has a hierarchical representation with coarse to fine details that enables successful refinement of analysis and also provides a prediction generation mechanism at multiple levels. These levels might include the scene, object, component and sub-component levels. Figure 2 shows the models of a missile launcher and a missile launcher decoy at the component detail level.

The components (cylinders) in the same model detail level may vary in importance for recognizing the object. For example, the gun barrel of a tank is unique in vehicle models and provides sufficient evidence to distinguish a tank from a truck or other vehicles. Therefore, to recognize a tank, we may first look for the gun barrel in the image. This kind of knowledge for goal-directed feature extraction is explicitly represented in our object models by using a model priority index. Another aspect of the model priority index is determined by the geometric properties of each cylinder. For example, elongated cylinders and large cylinders show distinct cylinder properties that are easy to distinguish from other cylinders. These distinguished pieces can be used for fast model access and selection.

Figure 2: 3-D Models of a Missile Launcher and a Decoy (component level models of the missile launcher platform and of the missile launcher decoy).

The model priority index is viewpoint-independent and provides a mechanism for efficient model access and selection. However, this index does not give us any information about the visibility of a cylinder or the ease of cylinder extraction. Shapes that are occluded are generally more difficult to extract from images. This knowledge can be used to direct the feature extraction algorithm to look for reliable and easy to obtain features first.
This kind of planning information is encoded in an obscuration priority index that indicates the occlusion and visibility relations among cylinders. The obscuration priority index is similar to the priority algorithm [6] used for hidden surface elimination. The idea is to arrange all cylinders in the scene in priority order based on their depth and obscuration relations. Cylinders closer to the viewpoint and not obscured by other cylinders will have higher priority indices. Cylinders that are totally obscured are explicitly indicated, and no effort will be spent trying to find them in the range image. This obscuration priority index is purely geometrical and is determined from the object orientation and viewpoint.

The combination of the model priority index and the obscuration priority index gives a new priority order that not only indicates the importance of a particular cylinder for object recognition, but also compares the ease of cylinder extraction in the range image. Due to the performance limitation on occluded cylinder extraction, we currently use the obscuration priority index as the base index for cylinder extraction and matching. If two cylinders have the same obscuration priorities (they don't obscure each other), then the model priority indices are compared to determine the order of the feature extraction sequence. In general, the priority index is valid for an interval of the viewing angle and thus can be used based on rough object orientation estimates.

Physical edges such as occluding, convex and concave edges can be distinguished in the range image. The prediction of the physical edge types of a cylinder contour strongly constrains the possible interpretations of each edge segment. In order to use this information, our models should have the capability to predict the types of physical edges. To do this, the attachment relations between cylinders in the object are explicitly specified. The specification of these attachment relations facilitates the prediction of physical edge type and will be discussed in more detail in the edge-level prediction section below.

III MODEL PREDICTION

Prediction is the process of making estimates about the image appearance of objects, using model knowledge and given some information about the objects' relative position, orientation, and shape in the image. Predictions first give guidance to image feature extraction processes for goal-directed shape extraction; then they provide mechanisms for feature-to-model matching and interpretation. The best features for prediction are those invariant features that will always be observable in the image, independent of the object's orientation and sensor position. Examples of these invariant features in range imagery are physical edge type (i.e., occluding, convex, concave), surface type (planar, curved), cylinder contour edge type (occluding or concave), collinear and parallel relations, and connectivity.

The prediction process proceeds hierarchically in a top-down fashion. Global object features and spatial relations among object components viewed from the sensor are first predicted. These include the object side-view characteristics and the obscuration priority index. From the obscuration and model priority indices, the system chooses the cylinder with the highest priority. The occluding contour of this cylinder is identified and the physical edge types of the cylinder contour boundaries are predicted. At the cylinder contour level, there are only two types of edges, occluding and concave.
The convex edge only occurs as the internal edge of a cylinder, hence it is irrelevant in cylinder-level feature prediction and extraction. Once a cylinder is extracted, its volume properties and relative position and orientation in the object coordinate system are used for interpretation. If this information is not sufficient for target classification, the cylinder contour of the next highest priority cylinder is predicted. The same procedure continues until the object recognition task is achieved or all the object components have been examined. Details of these predictions, developed for different levels, are discussed in the following paragraphs.

A. Object-Level Prediction

The object-level predictions provide global object features and spatial relations among object components viewed from the sensor. Examples are the dimensions and side-view characteristics of the object in the image, the spatial relationships between object components, and the occluding relations among object components. The structural relationships (object-level prediction) can be used for global structure matching after we extract and analyze the individual cylinders of an object, or they can be used, similarly to a junction dictionary [8], to guide the search for cylinder features. For objects with self-occlusion, the second approach may be more appropriate because structural knowledge is actively used for both occluded cylinder feature extraction and interpretation.

B. Cylinder-Level Prediction

Cylinder-level predictions provide goal-directed guidance for cylinder extraction from low level image features. This is the most important prediction level in the system because cylinders are the basic symbolic primitives used to perform image feature-to-model matching.

A generalized cylinder is hierarchically characterized by first defining its axis and then the cross-sections along the axis. For totally visible cylinders (cylinders with a high obscuration priority index), cylinder-level prediction is accomplished by using this hierarchy. The properties of the two major cylinder boundaries along the cylinder axis are first predicted. These predictions include parallel relations (relative angles in general), physical edge types (concave, occluding), length, the distance between the two segments, and the extent of overlap. These predictions are sufficient to guide the coarse extraction of cylinders. After extracting the two major cylinder boundaries along the axis, other boundaries on the cylinder contour can be predicted in limited regions relative to the two major boundaries. Heuristic rules utilizing convexity, neighboring relations, the goal distance, and relative position information are used to find the complete cylinder contour from incomplete edge segments. A complete example is given in the processing example section.

Due to the internal structure of the vehicle objects, most cylinders are partially obscured. The prediction of occluded cylinder contours in the image can be generated by polygon clipping algorithms [9] according to the obscuration priority index. Again, the same hierarchical cylinder extraction process can be used. Additional relations on the two major boundaries, such as missing segments due to occlusion and collinear relations between disjoint segments, can also be identified. In addition, the relative structural relations of the extracted cylinders (with higher obscuration index than the current cylinder) constrain the search region.
The common boundaries between the current cylinder and the previously extracted cylinders can be used as landmarks for registra- tion. We have not implemented this partially oc- cluded cylinder extraction a lgor i thm and it re- quires further research efforts. C. Surface-Level Prediction Surface-level predictions describe the cylinder surface appearances and their spatial re- lations in the image. The convex edges are useful at this level for grouping edges into the boun- daries of a surface patch. These surface primi- tives can be grouped together to form a cylinder according to surface-level predictions from the model. Due to the low resolution nature of air- to-ground laser imagery, surface properties are not easy to extract and to use (sometimes a single sur- face patch only has a few points). However, for industrial applications where high resolution range images are available, surface-level predictions im- pose strong constraints on cylinder extraction. D. Edge Level Prediction Edge-level predictions assign physical edge types to each edge segment and thus strongly con- strain the possible interpretation of each edge segment for cylinder contour extraction. Prediction of the physical edge type of a cylinder contour is made possible by explicitly specifying the attachment relations between cylinders in the model. For examp le, if cylinder A is supported by cylinder B, the two touching faces of the cylinders are explicitly labeled in the ob- ject model. The occluding edge type is predicted for those cylinder contour segments that do not be- long to a labeled face. The concave edge type is predicted for those segments that belong to a la- beled face, and are inside the other surfaces with the same label. The convex edges correspond to the internal edges of cylinders, and thus are not use- ful for cylinder extraction. The physical edge type limits the search space for cylinder grouping, but more importantly, it can be used to verify the correctness of the extracted cylinder. INTERPRETATION Interpretation proceeds by comparing the image features on multiple levels to the object models according to a set of if-then rules. Each rule for comparison produces a “goodness” measure of the system’s confidence in how well the two features match. If a single object model has a much larger likelihood than others, the target in the range im- age is classified as an instance of that object. Besides the classif ication of the object, object position and orientation information are also available and can be used for higher level scene interpretation. The set of rules for interpreting image features in terms of models can be divided into two classes according to the level of detail they com- pare. The first class of rules looks for general features and global characteristics; i.e., the object-level features such as the object length, width, and height. Since the 3-D surface data can be obtained from the range image through a coordi- nate transformation, the actual length measurements are available and can be compared directly with the true model parameters. A typical rule for object length matching is shown in the first rule of Fig- ure 3. The rule assigns a negative likelihood to models that exceed the tolerance interval and prunes these objects from further consideration. For those object models within the tolerance inter- val, the rule returns a likelihood value as the goodness measure. Another set of general features is the extreme positions of the image object. 
The second class of rules compares finer object details at the cylinder level. The system first tries to extract a single cylinder from the range image by heuristic rules and predictions of invariants (e.g., antiparallels, angles). Once a cylinder is extracted, its 3-D properties (length, width, and volume) and its relative position and orientation in the object coordinate system are compared with the model. These cylinder-level features not only provide finer detail for feature-to-model matching, but also put strong constraints on the internal structure of the object. These constraints are often sufficient to make a unique interpretation of the image object.

V PROCESSING EXAMPLE

To assess the feasibility and capability of rule-based interpretation for classifying vehicle targets from extracted 3-D features, a sample set of rules was developed and tested on extracted 3-D image feature information obtained by using the techniques described in [3]. A synthetic laser range image of a missile launcher decoy is shown in Figure 4. This image has size 64x64 and was provided by The Analytic Sciences Corporation. The sensor viewing angle is known, but nothing is assumed about the decoy's orientation.

The approach used for 3-D feature extraction is to first transform the range image (in a sensor-centered coordinate system) to surface data (in a world coordinate system) from knowledge of the sensor position. The object is then separated from the background by an object-ground segmentation algorithm. Once the object segment is extracted from the image, the ground projection and side-view projection images of the object segment are generated. These projection images are useful for extracting gross object features and major object structures. The object orientation can be estimated from the orientation of the ground projection image, since vehicle targets are usually elongated. The side-view projection image can be used to locate major object structure positions, such as the wheel and missile positions of a missile launcher. After extracting those global features, a 3-D edge detection algorithm is used to extract physical edge segments for fine feature-to-model matching. Our 3-D edge feature extraction algorithm directly calculates the physical angle of the object surface from the surface data. Convex and concave edges can be distinguished according to the value of the physical edge angle. This physical edge angle image is not only useful for physical edge detection, but also provides relative surface orientation information for extracting planar and curved surfaces. The extracted occluding edge segments (in solid line) and concave edge segments (in dashed line) are shown in Figure 5.
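A hedged sketch of such a convex/concave decision from local surface data (our code and thresholds, not the paper's exact physical-edge-angle operator):

    import numpy as np

    def edge_type(p1, n1, p2, n2, flat_tol=0.05):
        """Classify the physical edge between two adjacent surface patches
        (centers p1, p2; outward unit normals n1, n2).  On a convex fold
        each normal tilts away from the other patch; on a concave fold it
        tilts toward it."""
        if np.dot(n1, n2) > 1.0 - flat_tol:
            return "planar"
        v = p2 - p1                         # from patch 1 toward patch 2
        return "convex" if np.dot(n1, v) < 0 else "concave"

    # A 90-degree roof ridge: the normals tilt apart, so the edge is convex.
    print(edge_type(np.array([-1.0, 0, 0]), np.array([-0.7071, 0, 0.7071]),
                    np.array([ 1.0, 0, 0]), np.array([ 0.7071, 0, 0.7071])))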
Figure 4: Synthetic Range Imagery of a Missile Launcher Decoy

Figure 5: Line Segments of Occluding and Concave Edges of Figure 4

The rules were then applied to compare this extracted feature information with two vehicle models: a missile launcher and a decoy. These were chosen because their similarities in size and structure make the selection and correct classification a non-trivial task. The results are shown in Figure 6. The first three rules check the object's length measurements, and the system prefers the decoy slightly; no classification can be made at this stage. The second set of rules checks the general features of each model and tries to resolve the object's front and rear ambiguity, since we are not sure which end of the image feature is the vehicle front. Reasonable classification is achieved at this level. These seemingly simple rules are able to classify similar objects as a result of the rich 3-D information provided by the range data.

    RULE                           MISSILE LAUNCHER       DECOY
                                   FRONT     REAR     FRONT     REAR
    OBJECT LENGTH                      .443               .489
    OBJECT WIDTH                       .418               .499
    OBJECT HEIGHT                      .400               .489
    SUBTOTAL                          1.261              1.477
    MINIMUM HEIGHT LOCATION         .5       -.5       -.5      -.5
    OBJECT FRONT EXTREME HEIGHT     .711     -.239      .5      -.5
    OBJECT REAR EXTREME HEIGHT      .48       .48       .46      .46
    TOTAL                          2.917              1.917

Figure 6: Likelihood Weights Associating Rules and Object Models

If the gross object features and major structures do not provide sufficient evidence for classification, the system tries to extract finer details at the cylinder level based on model predictions. The cylinder extraction algorithm finds the major cylinder boundaries along the cylinder axis by using the prediction that these two segments will appear parallel in the image and that one of them is a concave edge. This prediction constrains the possible edge segments for the major cylinder boundaries. The cylinder extraction algorithm successfully finds that two edge segments (with labels A and B in Figure 5) satisfy the cylinder prediction and that they have a significant amount of overlap between them. Using these two edge segments as two sides, the cylinder extraction algorithm tries to find the complete cylinder contour. It first determines the direction of edge segment B (a concave edge can have two edge directions) from the direction of edge segment A (an occluding edge), and decides that B should be used as an occluded edge for the cylinder under consideration. It then labels one end point of segment A as the starting point and the corresponding end point of segment B as the goal point (in the correct case, the goal point is closer to the end point of segment B). The algorithm starts from the starting point and searches for nearby edge segments with the correct direction. If a nearby edge segment has the correct direction and is of sufficient length, heuristic rules are used to decide whether to include the segment in the cylinder contour. Examples of these heuristics are: 1) the convexity rule, for checking the convexity of the cylinder contour angle, and 2) the goal distance rule, for avoiding a new starting point that is too far from the goal point, compared to the distance between the previous starting and goal points. The algorithm proceeds until the new starting point is the goal point. Then the algorithm defines the other end point of segment A as the starting point and the other end point of segment B as the goal point, and starts over again. This cylinder extraction algorithm successfully finds a complete cylinder contour from the edge segments.
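The search loop just described has a simple greedy skeleton; a sketch (all names ours, with the heuristic tests abstracted into one callback):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Seg:
        p: tuple      # one endpoint of an edge segment
        q: tuple      # the other endpoint

    def trace_contour(start, goal, segments, pick_next):
        """Walk from `start` toward `goal`, greedily appending nearby edge
        segments; `pick_next(point, goal, segments)` applies the paper's
        heuristics (contour-angle convexity, goal-distance rule) and
        returns the chosen segment, or None if no consistent closure
        exists."""
        contour, point = [], start
        while point != goal:
            seg = pick_next(point, goal, segments)
            if seg is None:
                return None
            contour.append(seg)
            point = seg.q if point == seg.p else seg.p   # advance to far end
        return contour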
The length, width, and height of this cylinder are then extracted by the same techniques used for gross object feature extraction at the object level. These features strongly restrict the possible object models for matching. Future research efforts are required to extend this hierarchical cylinder extraction algorithm to deal with partially occluded cylinders, as discussed in the cylinder-level prediction section.

VI CONCLUSIONS

A limited prototype system for range image interpretation has been developed. Important issues in 3-D image feature extraction, model representation, prediction, and interpretation have been discussed, and directions for future research on occluded cylinder extraction and prediction have been proposed. Because the approach is domain-independent and somewhat independent of the type of range sensor used, it can be applied directly to other applications such as robot vision and autonomous vehicles.

REFERENCES

[1] Agin, G.J. and T.O. Binford, "Computer Description of Curved Objects," IEEE Trans. Computers C-25 (1976), 439-449.

[2] Brooks, R.A., "Symbolic Reasoning Among 3-D Models and 2-D Images," Artificial Intelligence, Vol. 17 (1981), 285-348.

[3] Kuan, D.T., "Three-dimensional Feature Extraction," IEEE Computer Vision and Pattern Recognition Conference, Arlington, VA, June 1983.

[4] Kuan, D.T. and R.J. Drazovich, "Intelligent Interpretation of 3-D Imagery," AI&DS Tech. Report 1027-1, Mountain View, CA, February 1983.

[5] Nevatia, R. and T.O. Binford, "Description and Recognition of Curved Objects," Artificial Intelligence, Vol. 8, pp. 77-98, February 1977.

[6] Newman, W.M. and R.F. Sproull, Principles of Interactive Computer Graphics. New York: McGraw-Hill, 1973.

[7] Oshima, M. and Y. Shirai, "A Scene Description Method Using Three-Dimensional Information," Pattern Recognition, Vol. 11, pp. 9-17, 1979.

[8] Sugihara, K., "Range Data Analysis Guided by a Junction Dictionary," Artificial Intelligence, Vol. 12, pp. 41-69, 1979.

[9] Weiler, K. and P. Atherton, "Hidden Surface Removal Using Polygon Area Sorting," Computer Graphics, Vol. 11, p. 214, Summer 1977.
An Iterative Method for Reconstructing Convex Polyhedra from Extended Gaussian Images

James J. Little*
Department of Computer Science
University of British Columbia

ABSTRACT

In computing a scene description from an image, a useful intermediate representation of a scene object is given by the orientation and area of the constituent surface facets, termed the Extended Gaussian Image (EGI) of the object. The EGI of a convex object uniquely represents that object. We are concerned with the computational task of reconstructing the shape of scene objects from their Extended Gaussian Images, where the objects are restricted to convex polyhedra. We present an iterative method for reconstructing convex polyhedra from their Extended Gaussian Images.

I INTRODUCTION

The representation of an object by the orientation of its surface arises in many computer vision problems. Specifically, needle maps [Horn, 1982], the "2½-D sketch" [Marr, 1976], and intrinsic images [Barrow and Tenenbaum, 1978] all represent the orientation of an object at the points of the image. Orientation is specified as a vector pointing in the direction of the surface normal. Orientation maps can form the output of stereo processing from several images [Grimson, 1981; Baker and Binford, 1981], photometric stereo [Woodham, 1980], or any of the so-called "shape from" methods, such as shape from shading [Horn, 1975; Ikeuchi and Horn, 1981], shape from contour [Marr, 1977], shape from texture [Kender, 1979; Witkin, 1981], and shape from edge interpretation [Mackworth, 1973; Kanade, 1981; Sugihara, 1982]. By translating the surface normals of an object to a common point of application, a representation of the distribution of surface orientation is formed, called the Extended Gaussian Image (EGI).

Ikeuchi [Ikeuchi, 1981] discussed the use of the EGI for recognizing objects in an industrial environment. For each unknown object, the EGI of its visible hemisphere is formed by a propagation of constraints method [Ikeuchi and Horn, 1981]. The EGI is then compared to the EGIs of objects stored in a library. The best-matched prototype identifies the object.

Since it can be shown that the EGI does not uniquely identify a concave object, the EGI representation applies only to convex objects. In this discussion we will consider only convex polyhedra, which are formed by the intersection of a finite number of half-spaces. A bounded convex polyhedron will be termed a polytope. The EGI of a polytope P can be interpreted as a set of vectors, one for each face in the polytope. The length of each vector is the area of the corresponding face in the polytope. Minkowski [Minkowski, 1897] showed that the EGI of a convex object uniquely specifies the object up to a translation. Further, he proved that any set of vectors whose sum is zero represents the EGI of a convex object. A natural question then arises: can one describe an algorithm for reconstructing a convex polytope from its EGI? Minkowski's proof of existence and uniqueness is not strictly constructive; it only provides an indirect route to the solution.

*This research was supported in part by a UBC University Graduate Fellowship and an NSERC Postgraduate Scholarship.

Ikeuchi proposed an algorithm for generating the polytope corresponding to a given EGI with n faces, as follows. The solution is found by determining L = (l_1, l_2, ..., l_n), the n-vector of distances of the faces of the polytope from the origin. The vector L, together with the orientations of the faces, defines the locations in three-space of the half-spaces forming the polytope P(L). The areas of the faces of P(L), its volume, and its centre of gravity can be computed. In the following discussion, we will consider that any polytope is translated so that its centre of gravity coincides with the origin.

In Ikeuchi's algorithm for solving the reconstruction problem, the process is subdivided into n distinct cases; in the i-th case, face i is the farthest from the origin. When face i is chosen as maximum, l_i is set to 1.0; all other l_j vary between 0.0 and 1.0. The (n−1)-dimensional space of distances is quantized (at spacing d). Each of the d^(1−n) locations in this space specifies the locations of the n faces in three-space. The polytope can be constructed, and the areas of its faces determined. These areas are scaled and compared with the objective. No analysis of the accuracy of the algorithm is supplied. Ikeuchi's method minimizes the sum of the squared differences between the calculated areas of the polytope and the given areas in the EGI. It is not clear that the polytope which results from this minimization (after normalizing) will have the same structure as the desired polytope. In addition, the method is very expensive: to double the resolution of the algorithm, one must increase the number of evaluation points exponentially.

II CONSTRUCTIVE METHODS

To find a constructive solution, first consider the two-dimensional case. The EGI of a polygon is a system of vectors emanating from the origin. If the system sums to zero, then it represents a convex polygon. Figure 1 shows a two-dimensional EGI; the reconstructed polygon is rotated by −π/2.
The vector L, together with the orientations of the faces, defines the locations in three-space of the half-spaces forming the polytope P(L). The areas of the faces of P(L), its volume and its centre of gravity can be computed. In the following discus- sion, we will consider that any polytope will be translated so that its centre of gravity coincides with the origin. In Ikeuchi’s algorithm for solving the reconstruction prob- lem, the process is subdivided into n distinct cases; in the it* case, face i is the farthest from the origin. When face i is chosen as maximum, I, is set to 1.0; all other 1, vary between 0.0 and 1.0. The n-l dimensional space of distances is quan- tized (at spacing d) . Each of the d’-” locations in this space specifies the locations of the n faces in three space. The polytope can be constructed, and the areas of its faces deter- mined. These areas are scaled and compared with the objec- tive. No analysis of the accuracy of the algorithm is supplied. Ikeuchi’s method minimizes the sum of the square differences between the calculated areas of the polytope and the given areas in the EGI. It is not clear that the polytope which results from this minimization (after normalizing) will have the same structure as the desired polytope. In addition, the method is very expensive. To double the resolution of the algorithm, one must increase the number of evaluation points exponentially. II CONSTRWCTIVE METHODS To find a constructive solution, first consider the two- dimensional case. The EGI of a polygon is a system of vectors emanating from the origin. If the system sums to zero, then it represents a convex polygon. Figure 1 shows a two- dimensional EGI; the reconstructed polygon is rotated by -;. 247 From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. Figure 1 The EGI of a Convex Polygon and Its Reconstruction To construct the polygon, given the system of vectors, one proceeds as follows: Assume the vectors { v, } are given from 1 to n in anti- clockwise order. Take vl, rotate it by a and place its tail at some point in the plane. For the remaining vectors, in order, rotate v, by $ and place its tail at the head of v,-~. Because the system sums to zero, the head of v, will close with the tail of vl. By definition, the length of each vector is the length of the corresponding edge in the polygon, and its orientation is normal to that of the edge. Hence each edge in the reconstructed polygon will be the correct length and at the proper orientation. The two-dimensional method does not directlv extend to higher dimensions. In two dimensions, the adjaceicies among the facial elements, the edges, is clear from the EGI. In three dimensions the adjacency relationships are not given by the EGI and must form part of the solution. In that case, how can the solution be formulated? A result of !Tutte.1962! states that the number of different adjacency ‘relations for polytopes with n triangular faces is asymptotically exponential in n. The number of gen- era! polytopes (with faces having any number of sides) is larger. Hence any method which examines al! possible adja- cency relations will take exponential time. III MINKOWSKI’S PROOF Minkowski’s proof provides clues for finding a reconstruc- tion method. The original proof considers polytopes in any dimension d; we will describe the proof in 3-space for clarity. 
The two-dimensional method does not directly extend to higher dimensions. In two dimensions, the adjacencies among the facial elements, the edges, are clear from the EGI. In three dimensions the adjacency relationships are not given by the EGI and must form part of the solution. In that case, how can the solution be formulated? A result of [Tutte, 1962] states that the number of different adjacency relations for polytopes with n triangular faces is asymptotically exponential in n. The number of general polytopes (with faces having any number of sides) is larger. Hence any method which examines all possible adjacency relations will take exponential time.

III MINKOWSKI'S PROOF

Minkowski's proof provides clues for finding a reconstruction method. The original proof considers polytopes in any dimension d; we will describe the proof in 3-space for clarity.

For a polytope P in R³, the following set of vectors is formed:

    U(P) = { u_i | 1 ≤ i ≤ n }

where each u_i is a non-zero vector emanating from the origin parallel to the outward normal of face i of P. The length of each u_i is the area of face i, A_i. This set of vectors corresponds to the EGI given above. A set of vectors U is equilibriated if and only if the vectors sum to zero and no two vectors are positively proportional, i.e., no two are linear multiples of a common unit vector. An equilibriated set of vectors U is fully equilibriated if and only if it spans R³.

Minkowski's polytope reconstruction theorem shows that 1) if P is a polytope in R³ not contained in any plane, then U(P) is fully equilibriated, and 2) if U is a fully equilibriated system of vectors, then there exists a polytope P, unique within a translation, such that U is the EGI of P.

Let L be the n-vector of distances from the origin of the faces of the polytope P(L). In the proof of condition (2), Minkowski shows that L minimizes

    f(L) = Σ_i A_i l_i    (1)

where A_i is the area of face i given by the EGI and l_i is the distance of face i from the origin, subject to the constraint that the volume of P(L), V(L), is greater than or equal to one. By the Brunn-Minkowski theorem [Grunbaum, 1967], the subset of R^n given by { L | V(L) ≥ 1 } is convex. Convexity of the constraint set implies that the minimum of the objective function f(L), since it is linear, will lie on the boundary of the convex set, where V(L) = 1, and that a local minimum of f(L) is the global minimum. Reconstructing a polytope from its EGI can be accomplished by solving a suitably formulated constrained minimization problem.

IV THE ITERATIVE METHOD

A. Constructing P(L)

To construct a polytope P(L), we form the intersection of the n half-spaces specified by the vector L. Brown [1978] describes a method for transforming the problem of intersecting n half-spaces into a convex hull problem. Brown uses the dual transform, described in the vision literature by [Huffman, 1971; Mackworth, 1973; Draper, 1981]. The dual transform takes a plane with equation

    Ax + By + Cz + 1 = 0    (2)

into the point (A, B, C) in R³ (see Figure 2). The planes of P do not pass through the origin, so equation (2) is defined for all faces. The n planes forming P correspond to n points in R³, for which the algorithm of Preparata and Hong [1977] determines the convex hull in O(n log n) time. Any face of the convex hull of the dual points corresponds to a vertex of P. Any two points incident on an edge in dual(P) correspond to a pair of faces of P which share an edge. In sum, the adjacency information in the dual provides the adjacency information for P. Hence we can construct the vertices and edges of P. The centroid of P must coincide with the origin, so its centre of gravity must be computed; each l_i is augmented by the scalar product of the centre of gravity, a point in R³, and the normal vector of face i.

Figure 2 A polytope and its dual (faces: 1 = ACB, 2 = AEDC, 3 = DEF, 4 = ABFE, 5 = BCDF). This description is taken from [Grunbaum, 1967, p. 332].
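As a concrete illustration of the dual construction in section A, the following Python sketch maps the face planes of P(L) to dual points and reads the face adjacencies off the convex hull of those points. The sign convention for the dual point and the use of scipy's hull routine are our assumptions; the original implementation predates such libraries.

import numpy as np
from scipy.spatial import ConvexHull

def dual_points(normals, L):
    # Face i lies on the plane u_i . x = l_i (origin inside P, so l_i > 0).
    # Rewriting the plane as Ax + By + Cz + 1 = 0 gives the dual point
    # (A, B, C) = -u_i / l_i; the sign convention is an assumption here and
    # does not affect the adjacency structure.
    return -np.asarray(normals, dtype=float) / np.asarray(L, dtype=float)[:, None]

def face_adjacency(normals, L):
    # Facets of the dual hull correspond to vertices of P(L); dual points
    # sharing a hull facet correspond to faces of P(L) sharing an edge.
    # (ConvexHull triangulates its facets, so polytope vertices of degree
    # greater than 3 would need their coplanar facets merged first.)
    hull = ConvexHull(dual_points(normals, L))
    adjacent = {i: set() for i in range(len(L))}
    for facet in hull.simplices:
        for a in facet:
            for b in facet:
                if a != b:
                    adjacent[int(a)].add(int(b))
    return adjacent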
B. Restoring Feasibility

Once P(L) has been constructed, it is straightforward to determine a corresponding point L' which is feasible. The volume V(L) of a 3-d polytope P(L) is a homogeneous polynomial in L of degree 3. The formula for the gradient of V(L) can be derived from this polynomial. The gradient is used in computing the minimizing step. From a given P(L) we can compute the volume V(L) and scale L by V(L)^(-1/3), yielding a polytope P(L') with unit volume.

C. Determining a Minimizing Step

Constrained optimization is a well-studied problem, so many methods are available for determining the step direction and magnitude [Gill et al., 1981]. The reduced gradient method is a simple method which was chosen for implementation. By taking a step in R^n in the hyperplane perpendicular to G(L), the gradient of the volume, we will remain close to the constraint surface V(L) = 1. The step is in the direction which minimizes f(L), that is, in the direction of the projection of the vector A, the n-vector of areas of the faces given by the EGI, onto the hyperplane perpendicular to G(L). This step is a multiple of:

    <A, G(L)> G(L) - A    (3)

where <x, y> is the inner product.

D. The Method

The iterative method for reconstructing a convex polyhedron from its EGI combines the procedures described above. The procedure is formulated as follows:

1) Set L to (1, 1, ..., 1).

2) Construct P(L):
   a) Transform the n planes given by L into M, a set of n points in R³, using the dual transform.
   b) Compute the convex hull of M, call it CH(M).
   c) Determine the adjacency relations of P(L) from CH(M). Calculate the locations of the vertices of P(L).

3) Compute the centroid of P(L). Translate the centroid of P(L) to the origin. Compute V(L) and the gradient of V, G(L). Scale L by V(L)^(-1/3) to make its volume unity.

4) Evaluate f(L); if the decrease in f is less than a pre-specified value, terminate. Otherwise, compute a step using equation (3), update L, and repeat, starting at step 2.
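The method of section D might be sketched as the following loop. Here volume_and_gradient is a hypothetical helper standing in for steps 2 and 3 (the dual-space construction, centroid translation, and volume/gradient evaluation), and the fixed step size is a simplification; a practical reduced-gradient implementation would control the step magnitude adaptively.

import numpy as np

def reconstruct_from_egi(normals, areas, step=0.05, tol=1e-6, max_iter=100):
    # `volume_and_gradient` (hypothetical) builds P(L) via the dual
    # construction, translates its centroid to the origin (adjusting L
    # accordingly), and returns the adjusted L, V(L), and G(L).
    A = np.asarray(areas, dtype=float)
    L = np.ones(len(A))                         # step 1
    f_prev = np.inf
    for _ in range(max_iter):
        L, V, G = volume_and_gradient(normals, L)
        L = L / np.cbrt(V)                      # V is degree-3 homogeneous in L,
        f = float(A @ L)                        #   so this restores V(L) = 1
        if f_prev - f < tol:                    # step 4: stop on a small decrease
            break
        f_prev = f
        G_hat = G / np.linalg.norm(G)
        direction = (A @ G_hat) * G_hat - A     # equation (3): stay within the
        L = L + step * direction                #   hyperplane perpendicular to G
    return L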
V PERFORMANCE

An example polytope has been reconstructed from its EGI (Figure 3). The polytope to which the EGI corresponds is shown in Figure 4. The faces of the polytope are parallel to those of a regular octahedron, while the distances of the faces from the origin have been altered. The polytope constructed initially is shown (in stereo) in Figure 5.

Figure 3 Stereo View of the EGI of a Distorted Octahedron

Figure 4 Stereo View of the Original Polytope

Figure 5 Stereo View of Initial Polytope

The initial polytope is an octahedron, in which each face is adjacent to three others. In the course of the minimization, intermediate polytopes exhibit changing adjacency structures. The adjacency structure at an early stage becomes identical to that of the target polytope. The final reconstructed polytope is shown in Figure 6; the value of L for this polytope is

    (0.336, 0.699, 0.519, 1.137, 1.222, 0.517, 0.460, 0.443)

and its adjacency structure is:

    FACE : ADJACENT FACES
    1 : 2 3 4 8 5 6
    2 : 1 6 3
    3 : 1 2 6 7 8 4
    4 : 1 3 8
    5 : 1 8 6
    6 : 1 5 8 7 3 2
    7 : 3 6 8
    8 : 1 4 3 7 6 5

Figure 6 Stereo View of the Reconstructed Polytope

The reconstructed polytope has the same adjacency structure as the original polytope. An advantage of this minimization formulation is its indifference to the adjacency relations in the polytope. A correct adjacency structure is guaranteed by Minkowski's original argument. The iterative reconstruction method terminated on the fourteenth step, when the value of the objective function f(L) had decreased by less than 0.002% on successive steps. The distances of the planes vary on average less than 0.9% from the original; the maximum difference is 4.2%.

The requirements of the reconstruction procedure can be factored into two components: the number of iterations required to find an acceptable solution and the number of operations per iteration. Each iteration requires O(n log n) operations to compute the convex hull of the n dual points. In addition, O(n) operations are necessary to evaluate the volume. Each iteration thus requires O(n log n) computations. The number of iterations depends on the constrained minimization method used. The convergence rate of an iterative method is said to be linear if the error at step i, ε_i, satisfies

    lim (i → ∞) |ε_{i+1}| / |ε_i|^r = γ    (4)

where γ < 1 and r = 1. A reduced gradient method [Gill et al., 1981] was implemented; its convergence rate is linear. When the exponent r in equation (4) is 2, the convergence rate is said to be quadratic. To achieve quadratic convergence, the Hessian matrix of V(L) or an approximation to the Hessian must be used, which requires O(n²) operations. Thus reducing the number of steps by improving the convergence rate requires expending more resources per step.

ACKNOWLEDGMENTS

Thanks go to William Firey for helpful suggestions, to David Kirkpatrick and Alan Mackworth for discussions, and to Robert Woodham for his valuable advice.

REFERENCES

1. H.H. Baker and T.O. Binford, "Depth From Edge and Intensity Based Stereo," Proc. Seventh International Joint Conference on Artificial Intelligence, pp. 631-636 (1981).
2. H.G. Barrow and J.M. Tenenbaum, "Recovering Intrinsic Scene Characteristics From Images," pp. 3-26 in Computer Vision Systems, ed. A.R. Hanson and E.M. Riseman, Academic Press, New York (1978).
3. K.Q. Brown, "Fast Intersection of Half Spaces," CMU Technical Report CMU-CS-78-129 (1978).
4. S.W. Draper, "The use of gradient and dual space in line-drawing interpretation," Artificial Intelligence 17, pp. 461-508 (1981).
5. P.E. Gill, W. Murray, and M.H. Wright, Practical Optimization, Academic Press, New York (1981).
6. W.E.L. Grimson, From Images to Surfaces: A Computational Study of the Human Early Visual System, MIT Press, Cambridge, Mass. (1981).
7. Branko Grunbaum, Convex Polytopes, John Wiley and Sons, Ltd., London and New York (1967).
8. B.K.P. Horn, "Obtaining Shape From Shading Information," pp. 115-155 in The Psychology of Computer Vision, ed. P.H. Winston, McGraw-Hill, New York (1975).
9. B.K.P. Horn, "Sequins and Quills -- a representation for surface topography," in Representation of 3-Dimensional Objects, ed. R. Bajcsy, Springer-Verlag, Berlin and New York (1982).
10. D.A. Huffman, "A duality concept for the analysis of polyhedral scenes," in Machine Intelligence, ed. B. Meltzer and D. Michie, Edinburgh Univ. Press, Edinburgh, U.K. (1971).
11. K.I. Ikeuchi, "Recognition of 3-D Objects Using the Extended Gaussian Image," Proceedings of the Seventh IJCAI, pp. 595-600 (1981).
12. K.I. Ikeuchi and B.K.P. Horn, "Numerical Shape from Shading and Occluding Boundaries," Artificial Intelligence 17 (1981).
13. T. Kanade, "Recovery of the Three Dimensional Shape of an Object from a Single View," Artificial Intelligence 17, pp. 409-461 (1981).
14. J.R. Kender, "Shape From Texture: an Aggregation Transform That Maps a Class of Textures Into Surface Orientation," Proceedings of the Sixth International Joint Conference on Artificial Intelligence, pp. 475-480 (1979).
15. A.K. Mackworth, "Interpreting Pictures of Polyhedral Scenes," Artificial Intelligence 4(2), pp. 121-137 (1973).
16. D. Marr, "Early Processing of Visual Information," Phil. Trans. Royal Society of London 275B(942), pp. 483-524 (1976).
17. David Marr, "Analysis of Occluding Contour," Proc. Royal Soc. London B(197), pp. 441-475 (1977).
18. Herman Minkowski, "Allgemeine Lehrsätze über die konvexen Polyeder," pp. 198-219 in Nachr. Ges. Wiss. Göttingen (1897).
19. F.P. Preparata and S.J. Hong, "Convex Hulls of Finite Sets of Points in Two and Three Dimensions," CACM 20, pp. 87-93 (1977).
20. K. Sugihara, "Mathematical Structures of Line Drawings of Polyhedrons -- Toward Man-Machine Communication by Means of Line Drawings," Pattern Analysis and Machine Intelligence 4, pp. 458-468 (1982).
21. W.T. Tutte, "A Census of Planar Triangulations," Canadian Journal of Math. 14, pp. 21-38 (1962).
22. A.P. Witkin, "Recovering Surface Shape and Orientation from Texture," Artificial Intelligence 17, pp. 17-47 (1981).
23. R.J. Woodham, "Photometric Method for Determining Surface Orientation from Multiple Images," Optical Engineering 19, pp. 139-144 (1980).
1983
74
271
PERCEPTUAL ORGANIZATION AS A BASIS FOR VISUAL RECOGNITION

David G. Lowe and Thomas O. Binford
Computer Science Department
Stanford University, Stanford, California 94305

Abstract

Evidence is presented showing that bottom-up grouping of image features is usually prerequisite to the recognition and interpretation of images. We describe three functions of these groupings: 1) segmentation, 2) three-dimensional interpretation, and 3) stable descriptions for accessing object models. Several principles are hypothesized for determining which image relations should be formed: relations are significant to the extent that they are unlikely to have arisen by accident from the surrounding distribution of features, relations can only be formed where there are few alternatives within the same proximity, and relations must be based on properties which are invariant over a range of imaging conditions. Using these principles we develop an algorithm for curve segmentation which detects significant structure at multiple resolutions, including the linking of segments on the basis of curvilinearity. The algorithm is able to detect structures which no single-resolution algorithm could detect. Its performance is demonstrated on synthetic and natural image data.

Introduction

A major goal of computer vision research is to relate visual images to prior knowledge of their constituents, and thereby label and interpret them. However, current model-based vision systems have been demonstrated only in tightly-constrained environments with a few well-specified models to compare to the image [2, 8, 11]. The difficulty in expanding performance to more general domains is not one of ambiguity -- it is very unlikely that two different models will fully fit the same image data. Rather, the problem is one of searching for potential correspondences between models and the image, since increasing the number and generality of the models results in an excessively large space of possible matches. Continued research into recovering three-dimensional shape from images -- using stereo, motion, shading, and texture -- promises to reduce the size of this search space considerably. However, the problem of matching is far from solved even when given full three-dimensional information, and these methods fail to explain the excellent level of human performance in such simple domains as line drawings.

In order to interpret images about which we have little prior knowledge, it is necessary to use effective bottom-up techniques to structure and describe the image in a form that can be used to selectively index into a large body of world knowledge. In this paper we will describe methods for detecting and evaluating the significance of relations between image elements in a way that can be applied uniformly to all images before we have any knowledge of their contents. Previous research on this and related topics has gone under such names as image segmentation, perceptual organization, figure/ground phenomena, texture description, and Gestalt perception. There have been many efforts to develop algorithms for specific segmentation problems, such as the detection of collinearity or connectivity, but these have not been integrated and have often lacked general applicability. Marr's initial primal sketch formulation [7] was intended to make some of these relations explicit, but this aspect of it was never fully developed.
Recently, Witkin and Tenenbaum [12] have argued for the importance of detecting regularities and imposing structure on the image for many of the same reasons given here. They describe a unified treatment of inference based on the assumption that regularities detected in the image are non-accidental. In this paper we will describe the role that this form of inference plays in model-based recognition, develop some underlying principles for this level of interpretation, and present new segmentation methods based upon these principles.

There are three valuable sources of information which the bottom-up organization of image features can provide, all of which simplify the problem of matching against world knowledge:

1) A major reduction in the search space is achieved by segmentation -- the division of the image into sets of related features. This has long been recognized as a crucial problem in image interpretation. We do not want to match models against all possible combinations of features in an image, so good segmentation is crucial for reducing the combinatorics of this search.

2) Two-dimensional relations lead to specific three-dimensional interpretations, as we have described in previous papers [1, 5]. For example, collinear lines in the image must be collinear in 3-space, barring an accident of viewpoint. A corollary of this is that these image relations are normally invariant with respect to viewpoint, which greatly simplifies the problem of matching to three-dimensional objects of unknown orientation.

3) To the extent that these relations are stable under different imaging conditions and viewpoints, they can be used as reliable index terms to access a body of world knowledge. Not only can the names of the relations be used, but in addition each relation will have several parameters of variation whose relative values in the image can be used. For example, collinear line segments can be characterized by the relative sizes of the segments and gaps, which provides a viewpoint-invariant description that can be used to select a model for attempted matching.

Note that all three of these points assume that the relations found in the image are a result of regularities in the objects being viewed. This means that any relations which happen to arise accidentally from independent features will only confuse the interpretations. This distinction between significant and accidental relations is a point to which we will return.

The importance of perceptual organization for recognition: A demonstration

The importance of these grouping operations as a stage in the processing of images by the human visual system can be demonstrated by a straightforward psychophysical experiment. In Figure 1(a) we have constructed a partial line drawing of a bicycle in such a way that most opportunities for bottom-up segmentation are eliminated (e.g., we have eliminated most cases of significant collinearity, endpoint proximity, parallelism, and symmetry). In informal experiments with 10 subjects who were told nothing about the identity of the object, this drawing proved to be remarkably difficult to recognize. Nine out of 10 subjects were unable to recognize the object within a 60 second time limit, and the tenth subject took 45 seconds.
Note that this is in spite of the fact that the object-level segmentation has already been performed -- the task would be even harder if the bicycle were embedded in a normal scene containing many surrounding features.

Figure 1(b) is the same drawing as in 1(a) with only a single segment added. The added segment was placed in a strategic location which would allow it to be combined with other segments in a curvilinear grouping. The center of this circular grouping would then be coincident with the termination of another segment, leading to further groupings. As might be expected if we assume that bottom-up groupings play an important role in recognition, the recognition times for this second figure were dramatically lower than for the first, with 3 out of 10 subjects recognizing it within 5 seconds and with 7 out of 10 subjects recognizing it within the 60 second time limit. Presumably, if the added segment had been placed at some location which did not lend itself to perceptual groupings, the change in recognition times would have been negligible.

Figure 1: When opportunities for bottom-up grouping of image features have been removed, as was done for the line drawing of a bicycle in (a), the drawing is remarkably difficult to recognize. The average recognition time for (a) was over one minute when the subjects had no prior knowledge of the object's identity. When a single line segment was added in (b), which provided local evidence for a curvilinear grouping, the recognition times were greatly reduced.

These figures can also be used to demonstrate the human capability to make use of top-down contextual information to limit the search space for forming a match. As was demonstrated in experiments performed as early as 1935 [3], verbal clues naming even vague non-visual object classes can greatly reduce the recognition time. Subjects can usually interpret Figure 1(a) immediately upon being told that it is a bicycle. Thus this figure is on an interesting borderline where either bottom-up or top-down information can suddenly reduce the search space and lead to recognition. One can imagine a series of experiments that would systematically explore this search space and the reduction in its size created by different bottom-up or top-down clues. These figures can also be used to demonstrate the human equivalent of a back-projection algorithm [4] followed by image-level matching, where certain hypothesized partial matches can be used to solve for the position, orientation, and internal parameters of the model, which in turn lead to accurate predictions for further matches at specific locations in the image.

Principles of segmentation

There are virtually an infinite number of relations that could be formed between the elements of any image. What general principles can we derive for selecting those relations which are worth forming and for measuring their significance? As was mentioned earlier, segmentations are useful only to the extent that they represent actual structure of the scene rather than accidental alignments. Therefore, a central function of the segmentation process must be to distinguish, as accurately as possible, significant structures from those which have arisen by accident. All of the relations we have considered can arise from accidents of viewpoint or random positioning as well as from structure in the image.
However, by examining the accuracy of each relation and the surrounding distribution of features in the image, it is possible to give probabilistic measures of the likelihood that any given relation is accidental. These nonrandomness measures can then be used as the basic test for significance during the segmentation process.

If there were a significant level of prior knowledge regarding the expected distributions of features and relations, this could be used for judging the significance of segmentations. However, the range of common images seems to be so wide that any prior knowledge at this level must be very weak. We have chosen to carry out our computation of significance with respect to the null hypothesis that features are independent with respect to orientation, location, and scale. Significance is then inversely proportional to the probability that the relation would have arisen from such a set of independent features. It is a matter for psychological experimentation to see whether the human visual system is biased in any direction from this independence assumption. But since a scene typically contains many independently positioned objects (leading to independence with respect to orientation, location, and scale in the image), the discrimination of relations with respect to this background seems like a reasonable criterion for judging significance.

A second major principle of segmentation is that each operation must have limited computational complexity. It is obviously impossible to test all combinations of features in an image, so the relations can only be formed over distances that do not include too many false candidates of the particular type being examined. Figure 2 shows an example in which a highly significant grouping of five equally-spaced collinear dots is not apparent to human vision when there are enough surrounding false targets. It would presumably be useful for the purposes of interpretation and recognition to detect such a statistically significant grouping, so this failure must be attributable to a lack of computational resources. This does not mean that groupings are diameter-limited in any absolute sense, since groupings can be attempted at many different scales; however, if there are more than a few false candidates at some scale, then no groupings can be formed at that scale of description.

The principles above describe which groupings will be formed and how they will be evaluated for a given class of relations, but they do not specify which classes of relations will be attempted. There are several factors which influence this choice.

Figure 2: The pattern of five equally-spaced collinear dots in (a) is not detected spontaneously by human vision if it is surrounded by enough competing candidates for grouping within the same proximity, as in (b). This occurs even though the relation remains highly significant in the statistical sense and would therefore likely be of use for recognition.

One important factor is the same imaging-invariance condition that was mentioned earlier -- it is only worth looking for image relations which do not depend on a specific viewpoint, light-source position, or other image-formation parameter. For example, collinearity is useful because collinearity in the scene is present in the image over all viewpoints.
But it would be pointless to detect lines at right angles in the image, since even if right angles are common in the scene, the angle in the image would change with almost any change in viewpoint. Witkin and Tenenbaum [12] argue that prior probabilities play a role in selecting which relations are the easiest to distinguish from accidentals, and should therefore be attempted. If some relation arises only very rarely from the structure of typical scenes, then it is more likely that some instance of the relation in an image is accidental (although it would still be possible to distinguish the relation from accidentals given accurate-enough image measurements). Of course, it is also less productive to devote resources searching for properties which seldom arise than for those which are common.

An algorithm for curve segmentation

A significant bottleneck in creating a computer program which can perform these bottom-up perceptual processes on natural images is the problem of creating appropriately segmented edge descriptions. The best current edge operators detect "edge points" which are then linked using nearest-neighbor algorithms into lists of points. Although there has been considerable research into the problem of fitting smooth curves to these lists of points [9, 10, 11], almost without exception these efforts have concentrated on a single pre-selected resolution of segmentation and have attempted merely to smooth out noise induced by the imaging process. (We use "resolution" in the context of curve segmentation to refer to the allowable transverse deviation from the smoothed curve description.) Although these smoothed results may appear reasonable to the naive human eye, that is because the human visual system can still perform the lower resolution groupings even though they have not been detected and described by the program. Figure 3 illustrates the problem, where the segmentation in Figure 3(b) is adequate to recognize one instance of collinearity, but other groupings are only apparent when lower resolution structures are recognized, as in Figure 3(c). We have developed a new algorithm, based on the principles of segmentation outlined earlier, which measures the degree of nonrandom structure in edge-point lists over a wide range of resolutions and selects the most significant structures for the curve description. In our implementation, we examine all groupings which are either linear or of constant curvature. These can be splined to represent arbitrary smooth curves, although it is possible that human vision includes the detection of more general primitive curve groupings, such as spirals.

Measuring the significance of a curve segmentation

The first task in developing a segmentation algorithm is to determine how we will measure the significance of each grouping. In this case, since the points were originally linked on the basis of proximity, we must be careful not to confuse nonrandomness in proximity with the measurement of nonrandomness in linearity. For example, if we start by looking at a set of only three points, we might measure the significance of their linearity by measuring the distance of one point from the line joining the other two. However, this would confuse the effects of proximity with those of linearity, since by being close to one of the other points the third point would automatically be close to the line on which they lie, as is shown in Figures 4(a) and 4(b).
Therefore, we have chosen to define nonrandomness in linearity to be how unlikely a point is to be as close as it is to a curve given its distance from the closest defining point of the curve. This is equal to 2θ/π, where θ is the angle between the curve and the vector from the closest endpoint, as shown in Figure 4(c). This can be extended to 4 or more points by recursively looking for the point which is farthest away from any of the points considered thus far and calculating the likelihood for that point in terms of its minimum proximity to these previously considered points. Since these likelihood values are independent, they can be multiplied together to produce an overall value for the curve.

Testing all possible groupings

Given this significance test for sets of points, we want to divide the initial linked list of points into segments which have the highest significance values. Previous methods of curve segmentation have usually attempted to search for corners (tangent discontinuities) on a curve, where the curve can be divided into different segments. Our approach is the dual -- we look for segments of the curve which exhibit significant nonrandomness, and tangent and curvature discontinuities are assigned to the junctions between neighboring segments. In contrast to the earlier approaches, our method will fail to assign any segmentation where the curve appears to wander randomly at all resolutions, and will assign multiple segmentations where it exhibits different structure at different resolutions.

Figure 3: The data in (a) can be segmented at at least two different resolutions of description, as shown in (b) and (c). One instance of collinearity can only be detected in segmentation (b), while the other instance of collinearity and the parallelism can only be detected straightforwardly in (c).

Figure 4: The middle dots in (a) and (b) are both the same distance from the dashed line joining the other two dots. Yet the three dots in (b) are much more significant in terms of their collinearity than those in (a), since the middle dot in (a) could be close to the line merely as a result of its proximity to the first endpoint. Therefore, we measure the probability of a point being within a given distance from a line in terms of its proximity to the closest endpoint defining the line, as shown in (c).
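The significance measure just described might be coded as follows; this is a sketch under our own assumptions (in particular, seeding the recursive extension with the segment's two endpoints), not a transcription of the authors' implementation.

import numpy as np

def point_likelihood(p, a, b):
    # 2*theta/pi: how likely p is to lie as close as it does to the line
    # through a and b, given only its proximity to the nearer endpoint.
    p, a, b = (np.asarray(x, dtype=float) for x in (p, a, b))
    e, other = (a, b) if np.linalg.norm(p - a) <= np.linalg.norm(p - b) else (b, a)
    u = (other - e) / np.linalg.norm(other - e)
    v = (p - e) / np.linalg.norm(p - e)
    theta = np.arccos(np.clip(abs(float(u @ v)), 0.0, 1.0))  # fold into [0, pi/2]
    return 2.0 * theta / np.pi

def grouping_likelihood(points):
    # Recursive extension to 4 or more points: repeatedly take the point
    # farthest from those already considered and multiply the independent
    # per-point likelihoods.
    pts = [np.asarray(q, dtype=float) for q in points]
    chosen, rest = [pts[0], pts[-1]], pts[1:-1]
    value = 1.0
    while rest:
        q = max(rest, key=lambda r: min(np.linalg.norm(r - c) for c in chosen))
        value *= point_likelihood(q, chosen[0], chosen[1])
        chosen.append(q)
        rest = [r for r in rest if r is not q]
    return value  # smaller = less likely to be accidental = more significant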
We examine groupings at all scales differing by factors of two, from groupings of only three adjacent points up to groupings the size of the full length of the curve (amounting to 6 scales for a curve of 100 points). At each scale, we examine groupings at all locations along the curve, with adjacent groupings overlapping by 50%. This means that any given segment of the curve will have at least one grouping attempted which covers 50% of its length but does not extend outside its borders. After measuring the significance of each grouping, a thinning procedure is executed which steps through the different resolutions at each location along the curve and selects only those segmentations which arc locally maximum in their signilicancc values. It is possible that there will be more than one local maximum if the curve exhibits different structures at different resolutions of grouping. There is also a t,hreshold at the 0.05 significance level, below which group- ings are not considered significant. The algoriLhm in action This algorithm have been implemented in MA~I,ISF’ on a KL- 10 computer and tested on synthetic data as well as edges derived from natural images. Figure 6(a) shows some hand- drawn curves which exhibit different structures at different resolutions, much as was shown in Figure 3. Figure G(b) gives the output of the curve segmentation algorithm when given this data, a,nd demonstrates the algorithm’s ability to dctcct significant structure at multiple resolutions--results which no single-resolution algorithm could have produced. Figure 5 shows the results of running the algorithm on a small 30 by 45 pixel region of an aerial photograph of an oil tank facility. The original digit,ized image is shown in 5(n). Figure 5(b) h s ows some linked edge data generated from this image by an edge detection program written by David Marimont (61, which dotccts edge points to subpixcl accuracy and links them into lists. Figure 5(c) shows all the groupings at all resolulions, although the widely differing significance values are not apparent. Figure 5(d) shows the results after the thinning process which sclccts local ma.xima with respect to resolution. Given these :;cgmcnts, it is rela- tively easy to form collinearity and curvilinearity relations between them as shown by the dotted lines in Figure 5(c). It would also he fairly straightforward to detect endpoint proximity, parallelism, constant intervals, and other perccp- tusl groupings. Summary We began this pa.per by demonstrating the importance of bottom-up perceptual organization for human vision. These image relations play a major role in limiting the size of the search space that must be considered when matching against world knowledge. The unifying principles of detecting non- random structure, avoiding combinatorial complexity, and looking for viewpoint-invariant relations were suggested. An algorithm for curve segmentation, based upon these prin- ciples, was dcvclopcd and demonstrated. There arc many other problems besides recognit,ion in which these groupings would be useful. An example is the stereo correspondence problem, since to the extent that these image relations rep- resent structure in the scene and arc invariant with respect to viewpoint, they will be detected in images taken from different viewpoints. 
The algorithm in action

This algorithm has been implemented in MACLISP on a KL-10 computer and tested on synthetic data as well as edges derived from natural images. Figure 6(a) shows some hand-drawn curves which exhibit different structures at different resolutions, much as was shown in Figure 3. Figure 6(b) gives the output of the curve segmentation algorithm when given this data, and demonstrates the algorithm's ability to detect significant structure at multiple resolutions -- results which no single-resolution algorithm could have produced.

Figure 6: The hand-input curves in (a) have been created to exhibit significant structure at multiple resolutions. When these are given as data to the curve segmentation algorithm, it produces the results shown in (b), which makes these multiple levels of structure explicit.

Figure 5 shows the results of running the algorithm on a small 30 by 45 pixel region of an aerial photograph of an oil tank facility. The original digitized image is shown in 5(a). Figure 5(b) shows some linked edge data generated from this image by an edge detection program written by David Marimont [6], which detects edge points to subpixel accuracy and links them into lists. Figure 5(c) shows all the groupings at all resolutions, although the widely differing significance values are not apparent. Figure 5(d) shows the results after the thinning process which selects local maxima with respect to resolution. Given these segments, it is relatively easy to form collinearity and curvilinearity relations between them as shown by the dotted lines in Figure 5(e). It would also be fairly straightforward to detect endpoint proximity, parallelism, constant intervals, and other perceptual groupings.

Figure 5: The small 30 by 45 pixel region of an aerial photograph shown in (a) was run through the Marimont edge detector to produce the linked edge points shown in (b). Figure (c) shows all the segments at different scales and locations which were tested for significance. After selecting only those segments which were locally maximum with respect to size of grouping, and thresholding out those which are not statistically significant, we are left with the segments shown in (d). It is then fairly simple to form collinearity and curvilinearity relations between these segments, as shown by dotted lines in (e).

Summary

We began this paper by demonstrating the importance of bottom-up perceptual organization for human vision. These image relations play a major role in limiting the size of the search space that must be considered when matching against world knowledge. The unifying principles of detecting nonrandom structure, avoiding combinatorial complexity, and looking for viewpoint-invariant relations were suggested. An algorithm for curve segmentation, based upon these principles, was developed and demonstrated. There are many other problems besides recognition in which these groupings would be useful. An example is the stereo correspondence problem, since to the extent that these image relations represent structure in the scene and are invariant with respect to viewpoint, they will be detected in images taken from different viewpoints.

The specific algorithms developed are preliminary implementations of the general methodology of segmenting perceptual data by looking at groupings over a wide range of scales and locations and retaining those which are the most unlikely to have arisen by accident from the background distribution. This same methodology could be applied to a wide range of other perceptual segmentation problems or signal analysis.

Acknowledgements

We would like to thank Peter Blicher, Chris Goad, David Marimont, Marty Tenenbaum, Brian Wandell, Andrew Witkin, and our other colleagues for stimulating discussions and suggestions. This work was supported under ARPA contract N00039-82-C-0250. The first author was also supported by a postgraduate scholarship from the Natural Sciences and Engineering Research Council of Canada.

References

[1] Binford, Thomas O., "Inferring surfaces from images," Artificial Intelligence, 17 (1981), 205-244.
[2] Brooks, Rodney A., "Symbolic reasoning among 3-D models and 2-D images," Artificial Intelligence, 16 (1981).
[3] Leeper, R., "A study of a neglected portion of learning -- the development of sensory organization," Journal of Genetic Psychology, 46 (1935), 41-75.
[4] Lowe, David G., "Solving for the parameters of object models from image descriptions," Proceedings ARPA Image Understanding Workshop (College Park, MD, April 1980), 121-127.
[5] Lowe, David G. and Thomas Binford, "The interpretation of three-dimensional structure from image curves," Proceedings IJCAI-7 (Vancouver, Canada, August 1981), 613-628.
[6] Marimont, David, "Segmentation in ACRONYM," Proceedings ARPA Image Understanding Workshop (Stanford, California, September 1982).
[7] Marr, David, "Early processing of visual information," Philosophical Transactions of the Royal Society of London, Series B, 275 (1976), 483-524.
[8] Nevatia, R., and T.O. Binford, "Description and recognition of curved objects," Artificial Intelligence, 9 (1977).
[9] Pavlidis, T., Structural Pattern Recognition (New York, NY: Springer-Verlag, 1977).
[10] Rutkowski, W.S., and Azriel Rosenfeld, "A comparison of corner-detection techniques for chain-coded curves," TR-623, Computer Science Center, University of Maryland, 1978.
[11] Shirai, Y., "Recognition of man-made objects using edge cues," Computer Vision Systems, A. Hanson, E. Riseman, eds. (New York: Academic Press, 1978).
[12] Witkin, Andrew P. and Jay M. Tenenbaum, "On the role of structure in vision." To appear in Human and Machine Vision, Rosenfeld and Beck, eds. (New York: Academic Press, 1983).
1983
75
272
T. E. Weymouth, J. S. Griffith, A. R. Hanson and E. M. Riseman
Department of Computer and Information Science, University of Massachusetts at Amherst¹

We present an interpretation system which utilizes world knowledge in the form of simple object hypothesis rules, and more complex interpretation strategies attached to object and scene schemata, to reduce the ambiguities in image measurements. These rules involve sets of partially redundant features, each of which defines an area of feature space which represents a "vote" for an object. Convergent evidence from multiple interpretation strategies is organized by top-down control mechanisms in the context of a partial interpretation. One such strategy extends a kernel interpretation derived through the selection of object exemplars, which represent the most reliable image-specific hypotheses of a general object class, resulting in the extension of partial interpretations from islands of reliability.

1. Introduction

The use of world knowledge, together with top-down control, is beneficial and probably essential in domains where uncertain data and intermediate results containing errors cannot be avoided. Ambiguity and uncertainty in image interpretation tasks arise from many sources, including the inherent variation of objects in the natural world (e.g., the size, shape, color and texture of trees), the ambiguities arising from the perspective projection of the 3D world onto a 2D image plane, occlusion, changes in lighting, changes in season, image artifacts introduced by the digitization process, etc. Nevertheless, even with marginal bottom-up information, in familiar situations human observers can infer the presence and location of objects. We present a system where convergent evidence from multiple interpretation strategies is organized by top-down control mechanisms in the context of a partial interpretation.

The extreme variations that occur across images can be compensated for somewhat by utilizing an adaptive strategy. This approach is based on the observation that the variation in the appearance of objects (region feature measures across images) is much greater than object variations within an image. The use of exemplar strategies using initial hypotheses and other top-down strategies results in the extension of partial interpretations from islands of reliability. Finally, a verification phase can be applied where relations between object hypotheses are examined for consistency.

In this paper the interpretation task examined is that of labeling an initial region segmentation of an image with object (and object part) labels when the image is known to be a member of a restricted class of scenes (e.g., suburban house scenes). The systems developed by Ohta [5] for understanding images of buildings in outdoor settings and by Nagao [4] for understanding aerial photographs bear some similarity to the techniques employed here. A review of these and other related work in image interpretation appears in [1].

1. This research was supported in part by the Defense Advanced Research Projects Agency under contract N00014-82-K-0464, the National Science Foundation under Grant MCS79-18209, and the Air Force Office of Scientific Research under Contract F49620-83-C-0099.

2. Knowledge Network and Representation

Descriptions of scenes, at various levels of detail, are captured in a set of schema hierarchies [3].
A schema graph is an organizational structure defining an expected collection of objects, such as a house scene, the expected visual attributes associated with the objects in the schema (each of which can have an associated schema), and the expected relations among them. For example, a house (in a house scene hierarchy) has roof and house wall as subparts, and the house wall has windows, shutters, and doors as subparts. The knowledge network shown in Figure 1 is a portion of a schema hierarchy as developed in [6]. Each schema node (e.g., house, house wall, and roof) has both a structural description appropriate to the level of detail and methods of access to a set of hypothesis and verification strategies called interpretation strategies. For example, the sky-object schema (associated with the outdoor-scene schema) has access to the exemplar selection and extension strategy discussed below.

Figure 1: Abstract representation of a portion of the knowledge network. Schemas are organized by the component descriptions subclass and subpart, and by spatial relations. Each schema node has access to a set of interpretation rules which form hypotheses on the basis of image measurements, and interpretation strategies which describe how these hypotheses are combined with information in the network to form a consistent interpretation.

Interpretation rules relate image events to knowledge events by providing evidence for or against part/subpart hypotheses. An interpretation strategy, associated with a schema node, specifies in procedural form how specific interpretation rules may be applied, and how combined results from multiple rules may be used to decide whether or not to "accept" (i.e., instantiate) an object hypothesis. An interpretation strategy thus represents both control local to the node and top-down control over the instantiation process. Note that the goal is not to have these interpretation rules and strategies extract exactly the correct set of regions. Our philosophy is to allow incorrect, but reasonable, hypotheses to be made and to bring to bear other knowledge (such as various similarity measures and spatial constraints) to filter the incorrect hypotheses. An example of such error detection and correction in the interpretation process will be given in Section 5.

3. Rule Form for Object Hypotheses Under Uncertainty

We will illustrate the form of a simple interpretation rule based on using the expectation that grass is green. The feature used is average "excess green" for the region, obtained by computing the mean of 2G-R-B for all pixels in this region. Histograms of this feature are shown in Figure 2, comparing all regions to all known grass regions across 8 samples of color outdoor scenes. An abstract version is shown in Figure 3. The basic idea is to form a mapping from a measured value of the feature obtained from an image region, say f1, into a "vote" for the object on the basis of this single feature. One approach to defining this mapping is based on the notion of prototype vectors and the distance from a given measurement to the prototype, a well-known pattern classification technique which extends to N-dimensional feature space [3]. In our case, rather than using this distance to "classify" objects in a pure Bayesian approach that is replete with difficulties, we translate it into a "vote".
Figure 2. Image histograms of an "excess green" feature (2G-R-B) computed across eight sample images. The unshaded histogram represents the global distribution of the feature. The darkest cross-hatched histogram is the distribution of this feature across regions known to be grass (from a hand labeling of the images) in one of three specific images. The intermediate cross-hatching represents all known grass regions across the entire sample. Note the shifting (with respect to the full histogram) of the histograms for the individual images.

Let d(fp, f1) be the distance between the prototype feature point fp and the measured feature value f1. The response R of the rule is then

    R(f1) = 1                               if d(fp, f1) ≤ θ1
          = (θ2 - d(fp, f1)) / (θ2 - θ1)    if θ1 < d(fp, f1) ≤ θ2
          = 0                               if θ2 < d(fp, f1) ≤ θ3
          = -∞                              if θ3 < d(fp, f1)

The thresholds θ1, θ2, and θ3 represent a gross mapping from the feature space to a score value that provides an interpretation of the distance measurements. θ3 allows strong negative votes if the measured feature value implies that the hypothesized object cannot be correct. For example, fairly negative values of the excess green feature imply a color which should veto the grass label. Thus, certain measurements can exclude object labels; this proves to be a very effective mechanism for filtering many spurious weak responses. Of course, there is the danger of excluding the proper label due to a single feature value, even in the face of strong support from many other features. In the actual implementation of this rule form, θ1, θ2, and θ3 are replaced with six values so that non-symmetric rules may be defined, as shown in Figure 4. There are many possibilities for combining the individual feature responses into a score; here we have used a simple weighted average.

Figure 3. Structure of a simple rule for mapping an image feature measurement f1 into support for a label hypothesis on the basis of a prototype feature value obtained from the combined histograms of labeled regions across image samples. The object-specific mapping is parameterized by four values, fp, θ1, θ2, θ3, and stored in the knowledge network. The use of six values will allow an asymmetric response function.

Figure 4. An example grass rule, showing an asymmetrical structure, superimposed on the histogram of Figure 2.
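The rule form of this section is simple enough to state directly in code. The following Python sketch implements the symmetric version of the response function and the weighted-average combination; the numbers in the example are hypothetical, not taken from the paper's grass rule.

def rule_response(f1, fp, t1, t2, t3):
    # Symmetric form of the rule: t1 <= t2 <= t3 bound the distance of the
    # measured feature f1 from the prototype fp.
    d = abs(f1 - fp)
    if d <= t1:
        return 1.0
    if d <= t2:
        return (t2 - d) / (t2 - t1)     # linear falloff from 1 to 0
    if d <= t3:
        return 0.0
    return float('-inf')                # veto: the label is excluded outright

def combined_score(responses, weights):
    # Per-feature votes are combined with a simple weighted average; any
    # single veto excludes the label regardless of other support.
    if any(r == float('-inf') for r in responses):
        return float('-inf')
    return sum(w * r for w, r in zip(weights, responses)) / sum(weights)

# Hypothetical numbers: an excess-green value of 12 scored against a grass
# prototype of 20 with thresholds 5, 15, 40.
print(rule_response(12.0, 20.0, 5.0, 15.0, 40.0))   # -> 0.7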
4. Exemplars and Islands of Reliability

The extreme variations that occur across images can be compensated for somewhat by utilizing an adaptive strategy. Variation in the appearance of objects (region feature measures across images) is much greater than object variations within an image (see Figure 2). In the initial stages, there are few if any image hypotheses, and development of a partial interpretation must rely primarily on general knowledge of expected object characteristics in the image and not on the relationship to other hypotheses. The most reliable object hypotheses, derived from the interpretation rules, can be considered object "exemplars" and form the basis of adaptation.

One strategy extends the kernel interpretation by using the features of labelled exemplar regions including color, texture, shape, size, image location, and relative location to other objects. This is similar to the method in [4], where "characteristic regions" were used to guide hypothesis formation in the early stages of interpretation. The exemplar region (or set of regions) forms an image-specific prototype which can be used with a similarity measure to select and label other regions of the same identity.

A verification phase can be applied where relations between object hypotheses are examined for consistency. Thus, the interpretation is extended through matching and processing of region characteristics as well as semantic inference.

Exemplars can be used more generally than we have presented them here. For example, a house wall showing through foliage can be matched to the unoccluded visible portion based upon color similarity and spatial constraints derived from inferences of house wall geometry. The shape and/or size of a region can be used to detect other instances of multiple objects, as in the case of finding one shutter or window of a house, one tire of a car, or one car on a road. Additional spatial and perspective constraints can also be employed in object recognition.

Exemplar hypothesis rules differ from general hypothesis rules in that they are more conservative; they should minimize the number of false hypotheses at the risk of missing true target regions by narrowing their range of acceptable responses. If all regions are vetoed, secondary strategies are invoked; for example, the veto ranges can be relaxed, admitting less reliable exemplars. Figure 5 compares the results of the grass exemplar rule with the general grass hypothesis rule. The strategy can also be used to generate lists of hypotheses ordered by reliability.

Figure 5. The exemplar hypothesis rule is more selective than the corresponding general interpretation rule (based on a less selective rule form). Figure 5a shows the general grass interpretation rule, while Figure 5b shows the exemplar rule. Note that the general form of the rule results in more incorrect region hypotheses (which could be filtered by constraints from the knowledge network). Although the exemplar rule misses some grass regions, those found have high confidence.

The advantages of using object exemplars include: 1) an effective means for extending reliable hypotheses to regions which are more ambiguous; this is similar to the notion of "islands of reliability" [2]; 2) a knowledge-directed technique for partially dealing with the unavoidable region fragmentation that occurs with any segmentation algorithm or low-level image transformation/grouping; regions that are "similar" to the exemplar can be both labelled and merged; 3) exemplars play a natural role in the implementation of a hypothesize-and-verify control strategy; hypotheses are formed based upon initial feature information and subsequently can be used in a verification process where the relationship between labelled regions provides consistency checks on the hypotheses and the evolving interpretation.
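The exemplar-extension strategy might be sketched as follows. The Euclidean distance on region feature vectors and the fixed similarity radius are our stand-ins for the paper's similarity measure, which also draws on shape, location, and spatial constraints.

import numpy as np

def extend_from_exemplars(features, exemplar_ids, max_distance):
    # `features` holds one feature vector per region (color, texture, size,
    # location, ...).  The exemplars' mean is the image-specific prototype;
    # regions within `max_distance` of it inherit the exemplar label.
    features = np.asarray(features, dtype=float)
    prototype = features[list(exemplar_ids)].mean(axis=0)
    distances = np.linalg.norm(features - prototype, axis=1)
    hypothesized = {i for i, d in enumerate(distances) if d <= max_distance}
    return hypothesized | set(exemplar_ids)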
Further work on segmentation is being carried out, as is the refinement of the exemplar selection and matching rules (that were shown in section 3). An extremely important capability for an interpretation system is feedback to lower level processes for a variety of Purposes. The interpretation processes should have focus-of-attention mechanisms for correction of segmentation 6. Example interpretations for three house scene images. The labeling is: SKY GRASS FOLIAGE HOUSE WALL HOUSE ROOF S~R/wINDOW UNLABELED cl 431 errors, extraction of finer image detail, and verification of semantic hypotheses. An example of the effectiveness of semantically directed feedback to segmentation processes is shown in Figure 7. There is a key missing boundary between the house walI and sky which leads to incorrect object hypotheses based upon local interpretation strategies. The region is hypothesized to be sky by the sky strategy, while appIication of the house wall strategy (using the roof and shutters as spatial constraints on the location of house wall) leads to a waII hypothesis. There is evidence avaiIabIe that some form of error has occurred in this example: 1) conflicting labels are produced for the same region by Iocal interpretation strategies; 2) the house wail label is associated with regions above the roof (note that while there are houses with a wall above a lower roof, the geometric consistency of the object shape is not satisfied in this example); and 3) the sky extends down close to the approximate horizon line in only a portion of the image (which is possible, but worthy of closer inspection). In this case resegmentation of the sky-housewall region, with segmentation parameters set to extract finer detail, produces the results shown in Figure 7a. Subsequent remerging of similar regions produces a usable segmentation of this region as shown in 7b. It should be pointed out that in this image there is a discemabIe boundary between the sky and house wall. Initially, the segmentation parameters may be set so that the initial segmentation misses this boundary. This may occur because of computational requirements (fast, coarse segmentations) or as an explicit control strategy. However, once it is resegmented with an intent of overfragmentation, this boundary can be detected. Remerging based on region means and variances of a set of features alIows much of the overfragmentation to be removed. Now, the same interpretation strategy used eariler produces quite acceptable results shown in Figure 8. The current development of interpretation strategies involves the utihxation of stored knowIedge and a partial model (IabeIIed II I Ftgan 7. Resegmentation of house/sky region from Figure 6c. Figure 7a is the original segmentation showing the region to be resegmentated; 7b shows the regions resulting from the selective application to the segmentation process to the cross-hatched area in 7a. Flgura 8. Final interpretation of the house scene in Figure ,6c, after inserting resegmented houes/sky regions and reinterpreting the image. regions) for hypothesis extension. In these strategies the knowledge network is examined for objects that can be inferred from identified objects, and for reIations that would differentiate them. For example, the bush regions can be differentiated from other foliage based on their spatiaI relations to the house, and front and side house walls can be differentiated using geometric knowledge of house structure (e.g., relations between roof and walls), as shown in Figure 9. 
In the fuII system, these rules would not work in isolation as shown here, and the errors made by this type of rule would be filtered by other constraints. Future work is directed towards refinement of the segmentation algorithms, object hypothesis rules, object verification ruIes, and interpretation strategies. System deveIopment is aimed towards more robust methods of control: automatic schema and strategy selection, interpretation of images under more than one genera1 class of schemata, and automatic focus of attention mechanisms and error-correcting strategies for resolving interpretation errors. [l] Binford, T., “Survey of Model Based Image A.naIysis Systems,” International Journal of Robotics Research, Volume 1, Number 1, Spring 1982, pp. 18-64. [2] Erman, L., HayesRoth, F., Lesser, V. and Reddy, D., “The Hearsay-B Speech-Understanding System: Integrating Knowledge to Resolve Wncertainty,” Computing Surveys, l2(2), June 1980, pp. 2l3-253. [3] Hanson, A. and Riseman, E., “VISIONS: A Computer System for Interpreting Scenes,” in Computer Vision Systems (A. Hanson and E. Riseman, eds.), Academic Press, 1978, pp. 303333. [4] Nagao, M. and Matsuyama, T., A Structuruf Analysis of Complex Aerial Photographs, PIenum Press, New York, 1980. [5j Ohta, Y., “A Region-Oriented Image-AnaIysis System by Computer,” Ph.D. Thesis, Kyoto University, Department of Information Science, Kyoto, Japan, 1980. [6] Parma, C., Hanson, A. and Riseman, E., “Experiments in Schema-Driven interpretation of a NaturaI Scene,” COINS Technical Report number 8@10, University of Massachusetts, 1980. Also in Nate Advanced Study Institute on Digital Image Processing (R. Haralick and J.C. Simon, eds.), Bonas, France, 1980. t( ) a iPQmre 9. An example of the use of spatial relations to filter and extend region labeling. The geometric relations between house and shrub (in 9a) and between between roof and house front wall (in 9b) are used to refine region hypotheses from the interpretation shown in Figure 6c. Note that there are stiIl ambiguities (the shrub label in the grass area, and the pants Iabeled as house wall) that require the use of other filters. 432
LEARNING OPERATOR SEMANTICS BY ANALOGY

Sarah A. Douglas
Stanford University and Xerox Palo Alto Research Center*

Thomas P. Moran
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, CA 94304

Abstract

This paper proposes a cognitive model for human procedural skill acquisition based on problem solving in problem spaces and the use of analogy for building the representation of operator semantics. Protocol data of computer-naive subjects learning the EMACS text editor suggests that they use their knowledge of typewriting to decide which commands to use in performing editing tasks. We propose a formal method of analysis that compares operators in two problem spaces (based on postcondition similarity) and generates misconceptions (based on pre- and postcondition differences). Comparing these predicted misconceptions with error data and verbal comments in problem solving episodes validates this analysis.

The Phenomena and the Question

Analysis of several experimental protocols of computer-naive people learning the EMACS text editor suggests that they were reasoning by analogy from their knowledge of typewriting. The context of text editing spontaneously evokes the analogy to typewriting, because of the similarity of the keyboards, the similarity of the computer screen to the typed page, and the similarity of the tasks in editing and typing. The use of the typewriter analogy was also prompted by the teachers, with remarks such as: "Yeah, go ahead. It works just like a regular typewriter." The learners' verbal data suggest that they were indeed taking this advice. For example:

The task is to move to the beginning of a word. The teacher says: "Hitting the space bar still doesn't move you ahead one space." Learner: "That's part of my problem, because on a regular typewriter, you can just zip right in there and do your thing."

Such verbal comments suggest that the learners were engaged in problem solving and were using the analogy to the typewriter to figure out what editing operations were appropriate. As scientists, we should be skeptical about taking these verbal statements at face value. The question is whether this apparent use of the typewriter analogy actually plays a significant role in the learning and performance. Our strategy to examine this question is: Given a general model of analogical learning consistent with the protocols, we develop a specific analysis of the misconceptions that should arise in trying to import knowledge from the typewriter domain to the editor domain. We then test whether these predicted misconceptions are supported by the learners' performance data.

* Beginning September 1983, address will be: Sarah A. Douglas; Department of Computer Science; University of Oregon; Eugene, OR 97403.

A Model of Learning by Analogy

The learner is trying to acquire the cognitive skill required for expert use of a text editor. Text editing skill can be represented as a problem space (Card, Moran, & Newell, 1983, Ch. 11). The initial learning task is to build such a problem space. This is done incrementally, not by some sort of pure induction, but rather by borrowing skills from other related domains, which we also consider to be represented as problem spaces. Learning begins with the learner putting together a rough problem space for text editing from the teacher's instructions. But, when confronted with editing tasks to do, the learner finds that the rough problem space is not yet good enough to support effective problem solving.
We propose that the hardest aspect of learning the problem spaces associated with computer systems is in understanding the operators (i.e., commands). The operator semantics, the detailed specifications of how the operators affect the system's conceptual entities, is intricate in computer systems. Thus, the difficulty with the learner's rough initial problem space of editing is due to incorrect knowledge of operator semantics. What the learner does is borrow operators from the typewriter space and apply them in the editing space, which causes unexpected results. This is the source of learners' misconceptions about the editor. These misconceptions appear in the learning data as errors, verbal questions, misunderstandings, and inabilities to perform certain types of tasks.

This view differs from other proposals about analogy in AI (e.g., Brown, 1977; Carbonell, 1982; VanLehn and Brown, 1980; Winston, 1981) and psychology (e.g., Gentner, 1983; Gick & Holyoak, 1983), which involve holistic mappings between domains. For example, Brown (1977) and Winston (1981) map methods from the known domain as entire plans in the new domain. Our model is essentially a scheme for the piecemeal borrowing of fragments of information from the known domain. The problem space formulation is useful, since it provides semi-independent, borrowable procedural units--the operators--which we propose is where most of the misconceptions arise.

Prediction of Specific Misconceptions

We have formalized problem space representations of the typewriter and of the EMACS text editor (Douglas, 1983); and we have developed a method of analysis for comparing the operators in these representations to predict specific learner misconceptions.

Problem space representation. For the typewriter, the primary entity is a spatial array of cells on a page, in which each cell is either BLANK or CONTAINS a character. Text entities (words, sentences, paragraphs, etc.) are delineated by blank space. On the other hand, for the EMACS text editor there are two distinct entities: a display screen and a text string, an internal, invisible sequence of characters. The screen is generated from the text string by the editor's DISPLAY operation. In addition to text characters, the text string also contains formatting characters (SPACE-CHAR, RETURN-CHAR, and TAB-CHAR) which control where text characters are displayed. The formatting characters also occupy spatial cells on the screen, but they are invisible. Thus, a blank cell on the screen can be either EMPTY or contain an INVISIBLE character. While the screen looks like a typed page to the learner, its behavior is very different. This behavior can be described by comparing operators in the two domains.

We represent operators by precondition and postcondition state predicates. For example, consider the typewriter operator for typing a new character on the page, ADD(Char-new):

    Preconditions
        POINTER-AT (Cell)
        BLANK (Cell)
    Postconditions
        POINTER-AT (NEXT-RIGHT (Cell))
        CONTAINS (Cell, Char-new)

The preconditions are that the typewriter POINTER must be AT a BLANK cell. The effect of the operator is that the cell CONTAINS the typed character and the POINTER is moved to the NEXT-RIGHT cell.
Next, consider the corresponding EMACS operator for typing a new character, INSERT(Char-new):

    Preconditions
        *SEQUENCE (Char-i, Char-j)
        *SELECTED (Char-j)
        POINTER-AT (Cell-j)
    Postconditions
        *SEQUENCE (Char-i, Char-new, Char-j)
        *SELECTED (Char-j)
        POINTER-AT (LOCATION-OF (Char-j))
        {DISPLAY (Cell-j, *TEXT-FOLLOWING (Char-i))}

(The predicates marked with asterisks refer to the underlying text string and hence are invisible to the user.) The preconditions for the EMACS operator are that the editor's POINTER be positioned AT the cell containing the character following the new character (i.e., an "insert before" operator). There is no precondition for blankness. The effect of the operator is to DISPLAY the new character and to re-DISPLAY all the TEXT-FOLLOWING it (which can have complex effects on the screen). We have in this manner formulated problem spaces for both the typewriter and for EMACS (see Douglas, 1983, for details).

Comparing similar operators. In the process of trying to understand EMACS operators, the learner borrows typewriter operators that are similar to the EMACS operators. We have developed a method of analysis that uses two criteria of similarity: (1) surface feature similarity, which in this case is whether the operators utilize the same keyboard key, and (2) similar effects (e.g., postcondition match). Figure 1 shows all the similarity links between operators. For example, consider link M8 between the ADD and INSERT operators (which are described above). The postcondition of ADD is that a cell CONTAINS the new character. The postcondition of INSERT has a DISPLAY operation which produces a set of CONTAINS predications, including one for the new character. Thus, the postconditions for ADD and INSERT match.

Once operators have been matched by similarity, their predicate descriptions can be compared for differences in order to generate a taxonomy of potential misconceptions. As both Halasz and Moran (1982) and Gentner (1983) have pointed out, the harmfulness of using analogy as a teaching device is the inability of novices to distinguish the differences from the similarities; that is, novices tend to overextend similarities, thus causing misconceptions. There are two sources of misconceptions when using one operator for another: (1) unknown preconditions and (2) unknown postconditions. Figure 2 summarizes these predicted misconceptions, grouped by the similarity links in Figure 1. These subtle differences are what makes it difficult to learn by analogy. For example, when the learner borrows the ADD operator for the INSERT operator, the BLANK cell precondition comes with the ADD. Thus, we would expect the learner to try to find or create a blank cell before inserting a character. This particular misconception, assuming the BLANK precondition for inserting, is listed as M8a in Figure 2.

An interesting feature of the similarity comparison between typewriter and EMACS operators in Figure 1 is that operators are grouped into two classes: locative (for moving the pointer around) and mutative (for changing text entities). The figure shows that three locative operators from the typewriter are similar to mutative text editor operators, which insert formatting characters. Some of the major misconceptions for the learners involve an "ontological shift" from locative to mutative operators.

Validation of Predictions with Empirical Data

Our data is taken from the experiments reported by Roberts and Moran (1983).
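Before turning to that data, note that the comparison just described is mechanical enough to sketch in code. In the following Python fragment--our illustration, not the authors' implementation--operators are reduced to sets of predicate strings; the matching rule (a shared postcondition) and the misconception rule (conditions of the borrowed operator that do not carry over, and vice versa) follow the text, while the flat string encoding of predicates is a deliberate simplification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    pre: frozenset   # precondition predicates
    post: frozenset  # postcondition predicates

ADD = Operator("ADD",
               pre=frozenset({"POINTER-AT(Cell)", "BLANK(Cell)"}),
               post=frozenset({"CONTAINS(Cell, Char-new)",
                               "POINTER-AT(NEXT-RIGHT(Cell))"}))

INSERT = Operator("INSERT",
                  pre=frozenset({"POINTER-AT(Cell)"}),
                  post=frozenset({"CONTAINS(Cell, Char-new)",
                                  "REDISPLAY(TEXT-FOLLOWING)"}))

def similar(a, b):
    """Operators are analogous when some postcondition effect matches."""
    return bool(a.post & b.post)

def predicted_misconceptions(old, new):
    """Borrowing `old` to stand for `new`: every condition that does
    not carry over is a predicted source of error."""
    errs = [f"assumes precondition {p} of {old.name}" for p in old.pre - new.pre]
    errs += [f"expects effect {p} of {old.name}" for p in old.post - new.post]
    errs += [f"ignorant of effect {p} of {new.name}" for p in new.post - old.post]
    return errs

if similar(ADD, INSERT):   # they share the CONTAINS(...) effect
    for m in predicted_misconceptions(ADD, INSERT):
        print(m)  # includes "assumes precondition BLANK(Cell) of ADD" (M8a)
```

Run on the ADD/INSERT pair, the sketch reproduces misconception M8a among its predictions. With such predictions in hand, we can return to the empirical data.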
We use detailed protocol records of the learning sessions from their EMACS learning experiments. The experimental teaching paradigm is tutorial, with a single teacher and a single learner. Interspersed in the paradigm are several quizzes, during which the learner performed typical editing tasks without the help of the teacher. We have two kinds of protocol data to work with--verbal data, which was transcribed from audio tape, and performance data, consisting of computer-collected time-stamped keystrokes--from which we can see the course of learning in detail.

Performance error data. Figure 2 presents the frequency of learner errors observed in the performance data for the first two quizzes of the learning sessions. The error data shows that semantic errors are a significant portion of early skill acquisition (75 out of 105 errors observed). It can be seen that the overall coverage by the above analysis is good (covering 62 of 75 semantic errors). Errors involving the "ontological shift" (misconceptions M5 through M7) are very frequent, accounting for 30 of the observed errors.

Verbal protocol example. Another use of these analytically-generated misconceptions is to explain some of the confusions found in verbal protocols, such as the example at the beginning of this paper. In this example, the misconception of inserting an invisible formatting character is evident. The novice uses the space bar to move across the screen without understanding that she is inserting a space character into the text string (misconception M5 in Figure 2). For some reason she forgets to use the text editor command (which she knows and has used many times before) to move over one horizontal unit. The effect of inserting invisible characters is very surprising to her, and she cannot explain why the cursor maintains the same distance from her target location while moving further to the right across the screen. She is frustrated in her attempts to "just zip right in there." This example also illustrates how the interactive nature of the text editor reduces the credit/blame assignment problem, since the novice can see the immediate effects of applying the wrong operator and can localize where to correct the problem.

Figure 1. Similarity links between typewriter and EMACS operators. (Solid lines indicate similar-action relations; dashed lines indicate similar-action and same-key relations.) [Diagram not reproduced; it links the locative typewriter operators NEXT-CELL, PREV-CELL, SKIP-TO-NEXT-TAB, and RETURN-CARRIAGE, and the mutative operators ADD, ERASE-CELL, and ERASE-PREV-CELL, to the EMACS operators enumerated in Figure 2.]

Conclusions

Our initial observation, that text editor novices rely heavily on their knowledge of typewriting to understand the semantics of text editor operators, accounts for learners' performance errors resulting from misconceptions and for many confusions found in the verbal protocols.
We have presented an analysis technique based on problem spaces for generating a taxonomy of analogical misconceptions. The analysis identifies analogous operators by similarity of postconditions (effects) and predicts misconceptions by overextending the similarities to other operator postconditions and to operator preconditions.

The use of analogy for teaching has both advantages and disadvantages. Individual situations must be analyzed to determine whether the net balance of using an analogy is for the good. In this paper we have attempted to develop an analysis technique to predict the specific effects of using a specific analogy. This operator mapping technique is a potentially useful design tool for elaborating the conceptual complexity resulting from the interaction of new knowledge of a system to be learned with previous knowledge of other systems.

Figure 2. Predicted misconceptions and frequency of observed performance errors for four EMACS learners (a dash indicates none observed).

    M1. NEXT-CELL ---- NEXT-CHAR
        a. Ignorance that NEXT-CHAR moves by characters of the text string, not cells.   6
    M2. PREV-CELL ---- PREV-CHAR
        a. Ignorance that PREV-CHAR moves by characters, not cells.   -
    M3. NEXT-CELL-BELOW ---- NEXT-LINE
        a. Ignorance that NEXT-LINE moves by characters and columns, not cells.   -
    M4. RETURN-CARRIAGE ---- NEXT-LINE
        a. Ignorance that NEXT-LINE moves by characters and columns, not cells.   1
    M5. NEXT-CELL ---- INSERT(SPACE-CHAR)
        a. Ignorance that cell contains an invisible SPACE-CHAR.   7
    M6. SKIP-TO-NEXT-TAB ---- INSERT(TAB-CHAR)
        a. Ignorance that TAB-CHAR is an invisible character, not a cell.   4
    M7. RETURN-CARRIAGE ---- INSERT(RETURN-CHAR)
        a. Ignorance that RETURN-CHAR is an invisible character, not a cell.   19
    M8. ADD(Character) ---- INSERT(Text-character)
        a. Assumes typewriter precondition BLANK(Cell).   9
        b. Ignorance that postcondition inserts character into string, not cell (i.e., no strikeover).   2
        c. Ignorance that postcondition inserts character in front of currently selected character.   4
        d. Ignorance that postcondition re-displays text to the right of the insertion point.   -
    M9. ERASE-CELL ---- DELETE-CHAR
        a. Assumes typewriter precondition CONTAINS(Cell, Character).   -
        b. Assumes typewriter postcondition BLANK(Cell) instead of re-displaying text string.   3
    M10. ERASE-CELL ---- DELETE-PREV-CHAR
        a. Assumes typewriter precondition CONTAINS(Cell, Character).   -
        b. Assumes typewriter postcondition BLANK(Cell).   -
        c. Ignorance that postcondition deletes character in front of currently selected character.   7
    M11. ERASE-PREV-CELL ---- DELETE-PREV-CHAR
        a. Assumes typewriter precondition CONTAINS(Cell, Character).   -
        b. Assumes typewriter postcondition BLANK(Cell).   -

    Observed errors covered by the predicted misconceptions:   62
    Other observed semantic errors not covered by the predicted misconceptions:   13
    Observed syntactic errors (not covered by the predicted misconceptions):   28
    Observed typing errors (not covered by the predicted misconceptions):   2

References

Brown, R. Use of analogy to achieve new expertise (AI-TR-403). Boston, MA: MIT Artificial Intelligence Lab, 1977.

Carbonell, J. Learning by analogy: Formulating and generalizing plans from past experience. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine Learning. Palo Alto, CA: Tioga Publishing Co., 1982.

Card, S. K., Moran, T. P., & Newell, A. The psychology of human-computer interaction. Hillsdale, NJ: Erlbaum, 1983.

Douglas, S. A. Learning to text edit: Semantics in procedural skill acquisition. Ph.D. dissertation, Stanford University, 1983.

Gentner, D. The structure of analogical models in science. In D. Gentner and A. L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983.

Gick, M. L., & Holyoak, K. J. Schema induction and analogical transfer. Cognitive Psychology, 1983, 15, 1-38.

Halasz, F., & Moran, T. P. Analogy considered harmful. Proceedings of the Human Factors in Computer Systems Conference, Gaithersburg, MD, March 15-17, 1982.

Roberts, T. L., & Moran, T. P. The evaluation of text editors: Methodology and empirical results. Communications of the ACM, 1983, 26, 265-283.

VanLehn, K., & Brown, J. S. Planning nets: A representation for formalizing analogies and semantic models of procedural skills. In R. E. Snow, P. Federico, & W. E. Montague (Eds.), Aptitude, learning, and instruction, Vol. 1. Hillsdale, NJ: Erlbaum, 1980.

Winston, P. Learning and reasoning by analogy. Communications of the ACM, 1981, 23, 689-703.
Three Dimensions of Design Development

Neil M. Goldman
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90291

Abstract

Formal specifications are difficult to understand for a number of reasons. When the developer of a large specification explains it to another person, he typically includes information in his explanation that is not present, even implicitly, in the specification itself. One useful form of information presents the specification in terms of an evolution from simpler specifications. Typically a specification was actually produced by a series of evolutionary steps reflected in the explanation. This paper suggests three dimensions of evolution that can be used to structure specification developments: structural granularity, temporal granularity, and coverage. Their use in a particular example is demonstrated.

1. Introduction

When we describe system behaviors to other people outside the confines of a formal language, it is common to find an evolutionary vein in the description. The final behavior can be viewed as an elaboration of some simpler behavior, itself the elaboration of a yet simpler behavior, etc., back to some behavior deemed sufficiently simple to be comprehended from a non-evolutionary description. Formal specifications can likewise be described in an evolutionary vein. More importantly, the evolutionary steps can be characterized in terms of the kind of change they make to the specification. Three orthogonal dimensions of evolution have been identified:

- structural granularity -- deals with the amount of detail the specification reveals about each individual state of the process.
- temporal granularity -- deals with the amount of change between successive states revealed by the specification.
- coverage -- deals with the range of possible behaviors permitted by a specification.

Development steps along these dimensions can be composed to produce desired changes to a specification. This gives some hope that the steps themselves can be formalized -- i.e., that a language of change can be developed that permits a formal specification to be viewed and analyzed from its evolutionary perspective.

This research was supported by Defense Advanced Research Projects Agency contract MDA903-81 C-0335. Views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official opinion or policy of DARPA, the U.S. Government, or any other person or agency connected with them.

2. Specifications and Processes

A "formal specification" is something which denotes, according to well-defined rules, a set of behaviors. This set will be termed the process denoted by the specification. A behavior is a sequence of states. Each state comprises a set of objects and a finite set of relations. A relation is a set of n-tuples over the objects. A state models an instantaneous snapshot of a situation occurring during a particular execution of the process. A behavior then models the temporally ordered sequence of situations that constitutes a particular execution of the denoted process.

2.1. Initial Decisions

In order to specify some desired or existing activity within this framework, it is necessary to make certain (tentative) decisions about what aspects of the activity are to be specified. These decisions need not encompass all aspects of the activity. The goal at this initial stage is to specify enough of an abstraction of the activity to distinguish some of the relevant objects and actions involved,
but not so much detail that the abstraction itself is difficult to write or comprehend.

Let us explore these considerations in the context of a particular example: a specification of the game of baseball. To structure our initial design, we consider three dimensions along which initial decisions must be made.

One decision to be made concerns an initial structural granularity for the specification. This amounts to deciding what information about a particular situation is to be encoded in a state -- i.e., what objects and relations are to be represented. The jargon of any particular domain generally gives many possibilities. In baseball, we talk about concrete objects like players, coaches, managers, umpires, bases, balls, bats, as well as more abstract concepts like teams, positions, and scores. Since baseball is a game, a minimally interesting formalization will have to capture the notion of the participants and scoring. So let us make our initial goal be to encode in each state only the score of each of the two teams in the game.

A second consideration concerns an initial temporal granularity for the specification. In this case, the jargon of baseball suggests several possibilities. One could take a snapshot after every pitch, or batter, or out, or half-inning, or inning, or just at the start and finish of the game. Our goal of keeping things simple suggests a rule of thumb: start with the coarsest temporal granularity that suggests itself. In this case, that corresponds to making our initial goal be to let each state transition in a behavior cover a single inning of the game.

The third consideration concerns the behavioral coverage of the specification. One might aim for all possible complete games of baseball, or might choose to include possible cancelled or suspended games as well. The simplicity goal suggests another rule of thumb: start with coverage of normal cases only. In this case, that corresponds to making our initial goal be to include in the denoted process only complete, nine-inning games.

2.2. Development Decisions

One's initial version of a specification (or program) is an approximation, often a very crude one, to what is desired or achievable [3]. Traditionally, one refines the denoted process by altering the specification -- inserting, deleting, and replacing textual units of specification. In recent years, researchers have argued that viewing the development of a specification as a flat sequence of textual changes hides structure that directs those changes. Waters [4] and Barstow [2] show that many seemingly complex alterations are implementation idioms of a given language. Wile [5] allows the alterations to be grouped within a hierarchical goal structure reflecting meaningful concerns of the implementer. Structured developments are easier to comprehend and, it is argued, raise the level of expression available to the designer in a way that leads to gains in productivity. This paper demonstrates that the same three dimensions of decision used to obtain an initial specification can be used to structure the development of that specification.

A decision to specify greater detail about each state of a process is a refinement in the structural granularity dimension. Such a refinement leads to a new process and a many-to-one mapping from the behaviors of the new process to those of the old.
A new behavior has states in 1-1 correspondence with the states of the corresponding old behavior, but containing more objects and/or relationships. Conversely, a decision to specify less ("hide") detail about each state is an abstraction in the structural granularity.

A decision to reveal more of the individual state transitions of a process is a refinement in the temporal granularity dimension. Such a refinement leads to a new process and a many-to-one mapping from the behaviors of the new process to those of the old. A new behavior contains its corresponding old behavior as a subsequence. Conversely, a decision to specify less ("hide") detail about state transitions is an abstraction in the temporal granularity.

A decision to add additional possible behaviors to a specification is an expansion in the coverage dimension. Such a refinement leads to a new process whose behaviors are a superset of those specified previously. Conversely, a decision to remove possible behaviors is a contraction in the coverage.

It will become apparent that, even in the simple example used in this paper, meaningful (in domain terms) changes to a specification do not constitute changes along a single one of these dimensions. However, it is demonstrated that meaningful changes can be represented as sequences of changes each along a single dimension, and that these representations could be useful both as explanations of specification development and as the means of specification development.

3. Baseball: an example

The initial specification of baseball appears in Figure 3-1. Paraphrased, this specification says: There are two teams, called home and visitor. Each team has exactly one Score, which is 0 or more. Each team starts with a score of 0. A game consists of nine sequential innings. An inning is the net effect of each team batting once. The effect of a team batting is to leave the score unchanged or to increment the team's Score by some (integral) amount. In this specification, the particular granularity decisions discussed above have been made explicit.

The notation used here to represent formal specifications is called Gist [1]. Understanding this notation in detail is not important. The point of the development that follows is that the textual changes to this notation effect, but do not clearly convey to a person, the nature of the change being made to the denoted process -- in this case, baseball. The changes to the process are described informally in terms of the three dimensions discussed above. This provides an alternative, and, it is claimed, preferable view of the development.

There are several directions in which one might choose to elaborate this definition of baseball. In particular, we might choose either to provide a more detailed definition of those baseball games which are included in the collection just defined, or we might try to define a more accurate set at the same level of detail. Let us start in the latter direction.

    Behaviors from
        STATE ==> visitor:Score = home:Score = 0
        ACTIVITY ==> PlayBall()

    agent TEAM(Score | non-negative-integer)
        definition {home, visitor}
    where
        action Bat()
            definition self:Score :+= a non-negative-integer;
        action PlayBall()
            definition 9 times do PlayInning();
        action PlayInning()
            definition begin atomic Bat() by visitor; Bat() by home end atomic

    Figure 3-1: Initial Specification -- 9-inning game

3.1. Rule Out Tie Games

One thing we want to convey is that baseball games do not end in a tie. In development terms, this is a contraction in coverage --
we wish to restrict our specification to a subset of the currently specified behaviors. In Gist, contraction is often accomplished by adding a constraint. In this case, a postcondition on the overall activity will suffice (see Figure 3-2). This might be paraphrased by: The game must end with the two teams having different scores.

    Behaviors from
        STATE ==> visitor:Score = home:Score = 0
        ACTIVITY ==> PlayBall()
            postcondition visitor:Score ≠ home:Score

    Figure 3-2: Rule Out Tie Games

3.2. Extra Inning Games

We now have a more accurate, though not yet "correct", specification of nine-inning games. As a next step we could further refine this set, or, what seems more natural, introduce the concept of extra-inning games. This is a coverage expansion step, which might be paraphrased by: Actually, a game consists of at least nine innings, but will continue after that until an inning terminates with the score not tied. One way to achieve this in Gist is with the change depicted in Figure 3-3.

    action PlayBall()
        definition until NormalTermination() do PlayInning();

    relation NormalTermination()
        definition Inning(*) ≥ 9 and visitor:Score ≠ home:Score;

    relation Inning(I | non-negative-integer)
        definition I = count start PlayInning()

    Figure 3-3: Extra Inning Games

3.3. One Team Bats at a Time

The behaviors defined now include all "normal" (barring war, riot, and natural disasters) complete baseball games but still constitute too large a set. Among games that must be excluded are those in which the home team scores in the ninth inning when it had enough runs to win after eight innings. For example, a home team cannot lead 2-1 after the eighth inning and end up winning the game by a 5-1 score. The desired coverage contraction on the currently defined process could be accomplished by strengthening the termination condition. An English paraphrase of the condition would be: ... and, if the home team wins, then either (a) it scored no runs in the ninth inning, or (b) its score at the start of the ninth inning was no greater than the visiting team's score at the end of the ninth inning.

It seems absurd that anyone would choose to refine the description this way. The reason is that this is not a rule (axiom) of baseball as most people understand the game; rather it is a property (theorem) that follows from rules that are far more simply stated. To state these rules, however, requires a major refinement in the description; it will no longer suffice to think of an inning as an atomic event.

Here, then, is a case of subgoaling in the development. The primary goal is to contract the specification to a subset of behaviors. The means for achieving this requires refining the temporal granularity of the description. As a first step, we make an inning a sequential event (see Figure 3-4), paraphrased as: Actually, an inning consists of first having the visiting team bat, and then having the home team bat.

    action PlayInning()
        definition begin sequential
            Bat() by visitor;
            Bat() by home
        end sequential

    Figure 3-4: Half Inning Granularity

Following this refinement, each behavior has two state transitions per inning rather than one. The key effect of this refinement is to provide half-inning updates of the score, making it simple to state the condition under which the home team does not bat, the key to achieving the needed contraction of the behavior. This can be accomplished in the specification text by conditionalizing the event (see Figure 3-5), paraphrased as: The home team's turn at bat is skipped in the ninth inning if it is ahead.

    action PlayInning()
        definition begin sequential
            Bat() by visitor;
            if Inning(9) and home:Score > visitor:Score
                then null
                else Bat() by home
        end sequential

    Figure 3-5: Skip Home Ninth

3.4. One Player Bats at a Time

Alas, the revised description still contains too many games. The problem is that, when the home team bats in the ninth inning or thereafter, it is not allowed to score an arbitrary number of runs, as it is in a normal inning. But now we are up against a problem we have just faced. The desired subset of behaviors is difficult to describe at the half-inning granularity; we would have to say something akin to: In the ninth inning and following innings, the home team may not score more runs than required to give it a margin of victory of four. As before, this property sounds absurd as stated. It is more naturally thought of as a theorem following from rules stated more simply at a finer temporal granularity. Half-innings are not atomic events to someone who understands baseball, but are composed of an iteration of smaller events. In each of these smaller events, the batting team's score is incremented by at most four. Within this finer temporal granularity, the desired behavior subset is achieved by adding a simple mandatory termination exception, depicted in Figure 3-6, to the home team's Bat event; an executable rendering of the resulting process is sketched below.
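The following Python sketch is our addition, not part of the paper's Gist development: it generates one behavior of the process as refined through Figures 3-3 to 3-5, together with the Figure 3-6 exception described next. The number of plays per half-inning is bounded arbitrarily (an assumption) so that the sketch terminates:

```python
import random

def play():
    """One scoring event: the batting team's score is incremented by
    0 to 4 runs (the 'repeatedly Play()' body of Figure 3-6)."""
    return random.choice([0, 1, 2, 3, 4])

def simulate_game():
    """Generate one behavior of the refined baseball process."""
    visitor = home = 0
    inning = 0
    while True:
        inning += 1
        # Visiting team bats: an unbounded run of plays in the spec,
        # truncated here so the sketch terminates.
        visitor += sum(play() for _ in range(random.randint(1, 8)))
        # Figure 3-5: the home half is skipped when the home team
        # already leads in the ninth (or a later) inning.
        if not (inning >= 9 and home > visitor):
            for _ in range(random.randint(1, 8)):
                home += play()
                # Figure 3-6's mandatory termination exception: the
                # half-inning ends as soon as the home team leads.
                if inning >= 9 and home > visitor:
                    break
        # Figure 3-3: NormalTermination -- nine or more innings
        # played and the score not tied.
        if inning >= 9 and visitor != home:
            return inning, visitor, home

print("innings=%d, visitor=%d, home=%d" % simulate_game())
```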
It can now be seen that the seemingly arbitrary four runs maximum per scoring play follows from the (equally arbitrary) three men on- base maximum and 1 + M score increment limit. These in turn follow from factors that could, but won’t, be defined by going to finer levels of structural and temporal granularity. 4. Conclusions We have identified three dimensions along which process specifications change as they are elaborated. The dimension of structural granularity determines the level of detail the specification reveals about each state of the process. The dimension of temporal granularity determines the time slices at which the specification models the process. If both temporal and structural granularity are fixed, a specification may be changed SO as to expand or contract the coverage of behaviors. It appears that the best way to achieve a change along one dimension may involve making a change along one of the other dimensions. A view of a specification in terms of its structured development differs significantly from a view of a specification that expresses only the net result of that development. In the example presented in section 3, each development step was formally expressed in terms of a change to the textual representation of the specification. But, as is almost always the case, there are numerous textual changes that would have the same semantic effect. The intent of the development step, however, is not to describe a change to the specification per se, but to change the process denoted by the specification. Although we have not yet done so, it seems plausible that the development steps stated in English in this paper could be formalized directly in terms of functions mapping processes to processes rather than as mappings from specifications to specifications. In that case, the initial specification and sequence of structured modifications would present a complete definition of the final process and would arguably be both easier to produce and easier (for a person) to comprehend. Acknowledgements I wish to thank Robert Balzer. Don Cohen. Martin Feather. Jack Mostow, Bill Swartout and Dave Wile for many discussions on the understandability (or lack thereof) of software and specifications and for comments on this paper. Ref e rences 1. Balzer. R., Goldman, N. & Wile. D. Operational specification as the basis for rapid prototyping. Proceedings of the Second Software Engineering Symposium: Workshop on Rapid Prototyping, ACM SIGSOFT, April, 1982. 2. Barstow, D.R.. “Knowledge-Based Program Construction”. Elsevier North-Holland. 1979. 3. Swartout, W. and Balzer, R. “On the inevitable Intertwining of Specification and Implementation.” Communications of the ACM 25, 7 (July 1982), 438:440. 4. Waters, R. C. “The Programmer’s Apprentice: Knowledge Based Program Editing.” IEEE Transactions on Software Engineering SE-B, 1 (1982), l-12. 5. Wile. D. S. Program Developments: Formal Explanations Implementations. Tech. Rept. RR-82-99, ISI, August, 1982. of 133
TALIB: An IC Layout Design Assistant

Jin Kim and John McDermott
Departments of Electrical Engineering and Computer Science
Carnegie-Mellon University
Pittsburgh, Pa. 15213

Abstract

This paper describes a knowledge-based system for automatically synthesizing integrated circuit layouts for NMOS cells. The desired cell layouts are specified in terms of their general structural and functional characteristics. From these initial specifications, the system produces correct and compact cell layouts. The system performs this task by generating plan steps at different levels of abstraction and opportunistically refining each plan step at one level to more specific steps at a lower level. Although the implementation of this system has focused on NMOS technology, the techniques used are not restricted to that technology.¹

1. Introduction

In recent years there has been a growing interest in developing systems that automatically synthesize integrated circuit layouts from high-level functional descriptions. The motivation for developing automatic design synthesis systems is three-fold: reduction of design costs through elimination of design errors, efficient exploration of the design space, and reduction in design time by allowing designers to work at a higher level of abstraction.

This paper describes a system, called TALIB, that assists with the part of the IC synthesis task known as cell layout. The program is implemented in OPS5, a general-purpose production system language [Forgy 77] [Forgy 81]. TALIB accepts a description, in netlist form, of the schematic of the NMOS digital circuit to be laid out as input and produces the description of the mask geometry as output.

A typical integrated circuit is implemented on a silicon wafer by creating, through a variety of fabrication techniques, layers of different substances in geometric patterns on the wafer surface. The superimposed layers of conducting, semiconducting, and insulating materials define the electronic components by interacting through different layers. The task of laying out an integrated circuit involves determining what patterns on each layer will create the desired components and interconnections. For example, in the NMOS technology that TALIB uses, a transistor is created whenever a region on the polysilicon layer overlaps a region on the diffusion layer. The goal of the layout task is to put as much circuitry as possible in as small an area as possible. Of course the resulting circuitry must work and satisfy the boundary constraints on the layout.

¹ This research was funded through a fellowship from the Xerox Corporation and through National Science Foundation grant ECS-8207709. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Xerox Corporation or the National Science Foundation.

In order to manage the complexity of large chip design problems, it has proven useful to take advantage of the hierarchical structure inherent in most digital circuit designs [Mead 79]. By utilizing the hierarchical structure, the complexity of the chip design problem can be managed by partitioning the original problem into less complex subproblems at lower levels in the design hierarchy. For example, a chip floor plan that partitions the layout into functional areas and constituent cells can be prepared at the top level.
With the chip plan as a guide, each functional area can then be constructed from cells and devices in a hierarchical manner. The functional areas are subsequently merged and fine adjustments made to both the cells and the chip floor plan until the cells are bound together tightly.

Figure 1-1: NMOS Digital Circuit Schematic

There are numerous problems associated with automating the hierarchical layout design methodology. The focus of this paper is on the problem of laying out the cells. The cell layout problem is formulated in terms of a schematic of the circuit to be laid out and the boundary constraints on the layout of the cell. For NMOS chip design, the circuit schematic typically contains a mix of logic gates and transistors as shown in Figure 1-1; the boundary constraints describe the size of the cell, its shape, the aspect ratio, and the ordering of i/o signals along the cell sides. The boundary constraints reflect the characteristics which the cell's layout must have in order to fit into the overall chip layout. Solving the cell layout problem involves generating geometric layout patterns for the circuit schematic which satisfy the boundary constraints as shown in Figure 1-2.

Figure 1-2: NMOS Layout

2. TALIB

The task of layout generation is that of placing box-like modules on a plane bounded on the X and Y axes, and connecting them by a set of wires according to a given set of rules. In the process of planning a layout, the designer must generate intermediate steps based on expectations. For example, in placing a subcircuit on a layout surface, the designer must make assumptions about how it will be laid out in order to analyze its impact on such global parameters as usage of metal channels and total area; however, the task of actual layout of the subcircuit cannot be started until its placement relative to other subcircuits is known. This facet of tentative reasoning introduces uncertainty into the design environment. Since such uncertainty is inevitable in bin-packing problems, TALIB is implemented as a planning system.

In addition to handling uncertainties in the design environment, plans are useful in distinguishing between important considerations and details. By utilizing knowledge at higher levels of abstraction, TALIB can eliminate, at an early stage, those search paths that are not very promising [Sacerdoti 77]. Further, by using plans to control focus, TALIB can introduce global information into the layout generation process. Without the plan structure, TALIB would tend to search in a depth-first manner by relying only on local information. Although TALIB develops and refines plans around objects at various levels of abstraction, most of its knowledge is based on empirical rules that human experts utilize. Because TALIB lacks any deep understanding of the task domain, it cannot explain why its rules are valid.

2.1. Input Description

The input to TALIB consists of two parts. The first part describes the circuit components and their interconnections. A circuit is made up of devices and signal nodes. A device is any of the components that are normally present in a digital circuit such as NOT-gates, NOR-gates, and transistors. Each device has two or more terminals by which it is connected to the rest of the circuit. Two device terminals are connected to each other through a common signal node.
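A minimal encoding of this netlist input might look like the following Python sketch. It is our illustration only--the paper does not give TALIB's actual OPS5 working-memory format--so the record fields and the device and node names (INV1, T3, N17) are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str                                      # e.g. "INV1", "T3"
    kind: str                                      # "not-gate", "nor-gate", "transistor", ...
    terminals: dict = field(default_factory=dict)  # terminal name -> signal node

def connected(d1: Device, d2: Device) -> bool:
    """Two devices are connected exactly when some terminal of each
    names a common signal node."""
    return bool(set(d1.terminals.values()) & set(d2.terminals.values()))

inverter = Device("INV1", "not-gate", {"in": "A", "out": "N17"})
pulldown = Device("T3", "transistor",
                  {"gate": "N17", "source": "GND", "drain": "OUT"})

assert connected(inverter, pulldown)   # they share signal node N17
```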
The transistor sizes in the circuit schematic can be specified explicitly or default values can be generated by TALIB. The second part of the input describes the topological and geometric requirements around the outside boundary of the circuit layout. The topological information includes the order in which external connections must appear on the cell boundary. This requirement can vary in detail from a specification of just the side of the cell on which a signal is to be made available, to a specification of the exact ordering which the signal node must have relative to other signal nodes on that side. The description of each i/o signal can also include the layer on which the signal must be implemented. The geometric information includes the specification of the physical parameters of the cell layout; this can be stated in terms of the upper bounds on the dimension of the cell layout or in terms of the cell's aspect ratio. The combination of the topological and geometric descriptions of the cell layout make up the boundary constraints of the cell layout problem. The output of TALIB is the description of the layout geometry in Caltech Intermediate Form (CIF) [Mead 79].

2.2. Layout Models

TALIB is capable of limited spatial reasoning. In the degenerate case when very little knowledge can be brought to bear on a particular region of the layout, TALIB expands the relevant transistors and their interconnections in terms of geometric primitives (e.g., rectangles) within the locality. Inside the expanded region, TALIB relies on a modified form of the Lee expansion routing algorithm [Lee 61] to continue the layout process. However, the search space associated with even a small circuit can be very large when it is handled at the geometry level. If all the details of the search space were attended to at this level, the combinatorics would enable TALIB to solve only rather simple problems. Therefore, TALIB represents the state of the layout in terms of constructs that deal with subcircuits rather than rectangles. By working with abstractions of the layout and dealing with geometric primitives only when necessary, TALIB is able to solve non-trivial problems efficiently.

The design constructs used by TALIB are built around subcircuits that are commonly found in digital NMOS circuits. These subcircuits are composed of two to six transistors, have a small number of satisfactory layouts², and are usually used many times in a design. For each subcircuit, TALIB creates a collection of objects to represent the characteristics of the subcircuit instances at different levels of abstraction. For example, a subcircuit is represented as a single object with its estimated layout area as one of its attributes at one level of abstraction, as a set of partially ordered objects that each represent a signal terminal of the subcircuit at a lower level of abstraction, and as a collection of geometric primitives at a still lower level. Associated with each of the objects describing a subcircuit is a set of rules for updating the attributes of the object.

² A suitable layout for a subcircuit will vary with the particular boundary constraints.

In addition to the subcircuit-based constructs, TALIB maintains a map of the layout surface in terms of unused areas. Initially, the partitioning of the layout surface corresponds to the number of subcircuit instances in the design. As the design task proceeds, additional unused areas will be generated to reflect opportunities for compacting the layout. The adjacency relationship between unused areas reflects a particular topological style TALIB selects to carry out the layout. The topological style selected depends on the initial boundary constraints and on the complexity of the circuit. The task of placing components corresponds to that of mapping subcircuits or geometric primitives to one of the unused areas.

One example of a subcircuit-based design construct is the circuit-layout model. For each circuit-layout model, there is a set of rules that fill in and update the attribute fields of the model. The attribute fields include the estimated X and Y dimensions of the layout, the geometric components required to realize the layout, the spatial relations among the geometric components, and the orientation of the components in the model. Some of the attributes of a circuit-layout model for an inverter are shown below.

    Inverter  {comment: this is a circuit-layout model for an inverter}
        Name
        Status
        X-dim
        Y-dim
        Input-1-type
        Input-1-signal
        Pullup-type
        Pullup-length
        Pullup-width
        Pullup-orientation
        Pointer-to-pullup
        Pullup-pulldown-y-spacing

In addition to the above objects for describing the state of the design locally, there are global descriptors that reflect the general state of the layout. These include objects that specify the estimated space for metal channels under the given set of boundary constraints, the number of unused channels, the maximum X and Y bounds, and the amount of space remaining in the X and Y dimension.

2.3. Knowledge Organization

TALIB performs the layout design task by utilizing a large knowledge base of design heuristics encoded as rules. The rules embodying the task-specific knowledge can be roughly classified into two categories. The bulk of TALIB's rules are organized around the concept of subtasks; this type of organization has proven to be convenient for capturing knowledge that has a relatively narrow scope of applicability [McDermott 82]. The rest of TALIB's rules are demons that select subtasks, detect their completion, and detect constraint violations. Demons have been useful in other task environments requiring the system to react quickly to changing situations [Haley 83].

The subtasks known to TALIB roughly correspond to the steps in the plan developed during the design process. The subtasks are invoked in a hierarchical fashion with those handling the more abstract goals being expanded in terms of the more specific, lower level, subtasks. At the lowest level, the primitives in the subtask hierarchy deal with generating the layouts for commonly occurring subcircuits and with removing unused space between adjacent subcircuit layouts. At a higher level of abstraction, subtasks deal with placement of subcircuits relative to one another and with exploring promising areas on the layout surface. Most of the subtasks are familiar to human experts, but a few, like those dealing with backtracking, are almost never mentioned. The version of TALIB reported in this paper knows how to perform about 100 subtasks.

TALIB's other rules, the demons, allow global knowledge to be brought to bear whenever it is relevant. These rules have four different functions:

- Classifying design situations. A typical rule with this function generates adjacency relationships among subcircuits based on the functional nature of each subcircuit and on the interconnection characteristics of the subcircuits.
- Creating, instantiating, and updating plan steps. A typical rule with this function sets up precedence requirements among plan steps.
- Propagating constraints from one subproblem to another. A typical rule with this function propagates distance relations between signal terminals of one subcircuit and those of another.
- Detecting the completion of tasks and constraint violations in the design state. A typical rule with this function detects the violation of a fabrication process spacing rule for geometric primitives.

Most of the knowledge at lower levels of the subtask hierarchy is reliable (i.e., if the current state is on a solution path, the state that results from applying such a rule is almost certain to be on the solution path). The bulk of the knowledge at higher levels, however, can only be applied with limited confidence that the result will lead to a solution (e.g., knowledge of how to cluster and place subcircuits). As a consequence, TALIB has to be able to backtrack.

2.4. Control Cycle

The reasoning process is quite basic, with the selection of subtasks being guided by a set of domain-specific control rules. The system reasons forward from the known facts and its knowledge about different layout models to develop a plan structure; as soon as a plan, reflecting the global interaction among subproblems, is developed at one level, the plan steps are expanded in terms of those at a lower level of abstraction in an opportunistic fashion [Hayes-Roth 79]. Since the subtask associated with a particular plan step is instantiated by TALIB whenever sufficient information is locally available, TALIB occasionally develops isolated planning islands that are not on the solution path. This type of thrashing is caused by premature introduction of default information and is minimized through the use of domain-specific control rules. The planning process continues until a solution or a violation of a boundary constraint is detected. If the latter occurs, the system backtracks to undo some design decision and attempts an alternate approach; this involves maintaining and analyzing dependency records to trace inconsistencies back to the appropriate inferential steps [Doyle 79].

TALIB's major design activities are:

- Selecting design constructs on the basis of boundary constraints.
- Partitioning the layout surface into zones and, based on a particular layout style, characterizing the spatial relationships among the zones.
- Partitioning the circuit netlist in terms of known topological groupings.
- Placing topological groupings of subcircuits in different zones of the layout surface.
- Refining design decisions within a particular locality and propagating design decisions to other parts of the partial design.

Although there is a partial time ordering among certain parts of the above design activities, it is not possible to generate an a priori sequencing plan for all the design activities. This is because most of the activities listed above are dependent on specific design situations.
A typical rule with this function sets up precedence requirements among plan steps. e Propagating constraints from one subproblem to another. A typical rule with this function propagates distance relations between signal terminals of one subcircuit and those of another. e Detecting the completion of tasks and constraint violation in the design-state. A typical rule with this function detects the violation of a fabrication process spacing rule for geometric primitives. 2 A suitable layout boundary constraints subcircuit will vary with the particular 199 Most of the knowledge at lower levels of the subtask hierarchy is reliable (i.e. if the current state is on a solution path, the state that results from applying such a rule is almost certain to be on the solution path). The bulk of the knowledge at higher levels, however, can only be applied with limited confidence that the result will lead to a solution (e.g. knowledge of how to cluster and place subcircuits). As a consequence, TALIB has to be able to backtrack. 2.4. Control Cycle The reasoning process is quite basic. with the selection of subtasks being guided by a set of domain-specific control rules. The system reasons forward from the known facts and its knowledge about different layout models to develop a plan structure; as soon as a plan, reflecting the global interaction among subproblems, is developed at one level, the plan steps are expanded in terms of those at a lower level of abstraction in an opportunistic fashion [Hayes-Roth 791. Since the subtask associated with a particular plan step is instantiated by TALlB whenever sufficient information is locally available, TALIB occasionally develops isolated planning islands that are not on the solution path. This type of thrashing is caused by premature introduction of default information and is minimized through the use of domain-specific control rules. The planning process continues until a solution or a violation of a boundary constraint is detected. If the later occurs, the system backtracks to undo some design decision and attempts an alternate approach; this involves maintaining and analyzing dependency records to trace inconsistencies back to the appropriate inferential steps [Doyle 791. TALlB’s major design activities are: o Selecting design constructs on the basis of boundary constraints. o Partitioning the layout surface into zones and, based on a particular layout style, characterizing the spatial relationships among the zones. o Partitioning the circuit netlist in terms of known topological groupings. e Placing topological groupings’ of subcircuits in different zones of the layout surface. u Refining design decisions within a particular locality and propagating design decisions to other parts of the partial design. Although, there is a partial time ordering among certain parts of the above design activities, it is not possible to generate an a priori sequencing plan for all the design activities. This is because rnost of the activities listed above are dependent on specific design situations. 2.5. Status of TALIB In large chip designs, the bulk of the area savings are due to the simplified global routing between cells rather than as a result of particularly space efficient cell layouts. Based on this observation, we elected to trade off reduction in design time (less search) against some inefficiency in cell layout space as a goal in developing TALIB. The version of TALlB reported in this paper reflects this goal by basing the bulk of its design activity around concepts. 
the subcircuit clusters, that are at a higher level of abstraction than the geometric primitives. As a result, though rALlB typically generates topological plans that are as good as those produced by human experts, the final area of the ceil layouts IS 10 to 35 percent greater than those of human designers. Subsequent versions of TAI-lB will improve upon this flgure,3 but we do not expect TALIB to ever produce cell layouts that are consistantly superior to those produced by hurnan experts. The current version of TALlB consists of about 1200 rules with about 940 of these associated with specific subtasks. It has been used to generate a variety of cell layouts for each of a dozen circuits. The most complex of these circuits consists of 36 transistors with an ordering specified for the signals at the cell boundary. The circuit shown in Figure 1-1 is part of a multiplier slice and represents one of the sirnpler circuits laid out by TALIB. Since the boundary constraints on the circuit in Figure l- 1 did not severely restrict the size of the layout, TALlB was able to produce the layout shown in Figure l-2 with fewer than 2000 rule firings. Figure 2-1 shows sorne of the subcircuit clusters generated by TALIB during its planning process. We are currently refmlng TALlB’s rule base and its user interface before releasing it for evaluation to a small community of lC designers at CMU. Eventually, TALlB is intended to be part of CMUDA, a hierarchical desibn automation system for generating complete IC chips from behavioral level specifications [Parker 791 [Joobbani 821. Figure 2- 1: Subciruit Groupings 3 Because a great deal of the domnm knowledge involved in chip design is closely coupled to the mdtvldual IC fnbricatlon IIWS, this knowledge tends to be very dynamic. In order to make the knowledge acqusillon task for the initial version of TALKS reasonably tractable, the layout domain was constrained to follow the Lambda-based design rules reported in [Mead 791. Although usmg these rules allows for effective decouplmg from any one particular fabncntmn process. it does so only through the use of a more conservative design rule set Ihat generally results in larger layout areas. 200 3. Conclusions A knowledge-based system for automatically synthesizing cell layouts has been described in the context of a hierarchical chip design environment. The cell layout task contains a number of NP-hard problems. Unless the layout is based on a tightly constrained topological style, such as PLAs or gate- matrices, the number of alternatives that must be explored before a candidate solution can be declared acceptable is ordinarily very large. Based on the observation that efficient chip layouts can be produced without locally optimizing the layouts of the constituent cells, a system for planning design steps around abstract layout concepts has been developed. TALIB has two basic strategies which help it limit the amount of search required: (1) It generates plans and then refines them at several levels of abstraction; such planning allows it to identify and tackle the least tractable problems first. (2) Its plans are generated on the basis of a variety of features of the circuit to be laid out; TALIB has a significant amount of knowledge which enables it both to quickly generate an abstract plan that is likely to be appropriate and to refine that plan to some extent before exploring alternatives. 
Acknowledgements

We wish to acknowledge Dan Siewiorek at Carnegie-Mellon University for the inspiration, ideas, and advice he has provided. Also, we would like to thank Rick Barth and Don Scharfetter of Xerox PARC and Art Krahmer, Bill Mills, Ed Snow, Doran Wilde, and Randy Young of Intel Corporation for their help in developing TALIB's knowledge-base.

References

[Doyle 79] J. Doyle. A truth maintenance system. Artificial Intelligence 12:231-272, 1979.
[Forgy 77] C. Forgy and J. McDermott. OPS, A Domain Independent Production System Language. In 5th International Joint Conference on Artificial Intelligence, 1977.
[Forgy 81] C.L. Forgy. OPS5 User's Manual. Technical Report CMU-CS-81-135, Carnegie-Mellon University, 1981.
[Haley 83] P. Haley, J. Kowalski, J. McDermott, R. McWhorter. PTRANS: A rule-based management assistant. Technical Report in preparation, Carnegie-Mellon University, 1983.
[Hayes-Roth 79] B. Hayes-Roth and F. Hayes-Roth. A Cognitive Model of Planning. Cognitive Science 3:275-310, 1979.
[Joobbani 82] R. Joobbani. Knowledge-Based Chip Planning System. PhD Thesis Proposal, Carnegie-Mellon University, 1982.
[Lee 61] C.Y. Lee. An algorithm for path connections and its application. IRE Trans. Electron. Comput., pp. 346-365, September, 1961.
[McDermott 82] J. McDermott. R1: A Rule-Based Configurer of Computer Systems. Artificial Intelligence 19:39-88, September, 1982.
[Mead 79] C. Mead and L. Conway. Introduction to VLSI Systems. Addison-Wesley, 1979.
[Parker 79] A. Parker, D. Thomas, D. Siewiorek, M. Barbacci, L. Hafer, G. Leive, and J. Kim. CMU Design Automation System: An Example of Automated Data Path Design. In 16th Design Automation Conference, IEEE, June, 1979.
[Sacerdoti 77] E.D. Sacerdoti. A Structure for Plans and Behavior. Elsevier, New York, 1977.
STRATEGIST: A Program that Models Strategy-Driven and Content-Driven Inference Behavior

Richard H. Granger, Kurt P. Eiselt, Jennifer K. Holbrook
Artificial Intelligence Project
Computer Science Department
University of California
Irvine, California 92717

1.0 Introduction

Artificial Intelligence models of human understanding have implicitly assumed a single strategy of inference behavior. The integrated understander's strategy usually goes:

1. While reading a sentence, make as many inferences as possible.
2. Connect inferences from the two sentences.

However, we have observed that not all readers make interpretations of text which conform to this strategy. For example, subjects in our recent experiments (Granger & Holbrook, 1983) read the following story:

[1] Nancy went to see a romantic movie. She was depressed.

Our experiments show that different individuals who read this story had two significantly different interpretations: (1) Nancy went to the movie to be entertained, and the movie depressed her (perhaps because it was romantic); vs. (2) something before the movie depressed Nancy, and she went to the movie to cheer up.

Our experiments have indicated that at least two different inference strategies for interpreting text exist. However, these strategies are so closely related that, most of the time, readers using different strategies will come up with the same interpretation of the events related in the text. We theorize that the same component inference processes which comprise each inference strategy are available to all readers. The difference in the strategies lies in the different rules used to apply the component processes. This paper presents these theorized processes and rules in a prototype model, called STRATEGIST, which exhibits the observed behavior of human readers.

Several researchers (e.g. Schank & Abelson, 1977) have hypothesized inference processes which allow the reader to interpret text. Psychological experiments such as those conducted by Graesser (1981), Rumelhart (1981), and Seifert, Robertson, and Black (1982) have determined when in the understanding process various types of inferences are made. However, these experiments were not designed to study the differences in processes which our experiments have discovered. The results of many of these studies can be reinterpreted in light of our results. The programs which were written to emulate human inference behavior (e.g. Wilensky, 1978; Granger, 1980; Wilensky, 1983) have also failed to model this particular aspect of inference decisions.

2.0 Background

Many story understanding systems have been written which can easily interpret simple text. Recall the example story:

[1] Nancy went to see a romantic movie. She was depressed.

PAM (Wilensky, 1978) would interpret this story by assuming that Nancy has the goal of entertaining herself. Going to the movie is Nancy's plan for satisfying this goal. PAM would try to fit Nancy's depressed state into the plan that was executed, probably by inferring that the movie was depressing, or that the movie was not entertaining, and that she was depressed because she could not fulfill her goal.

This research was supported in part by the National Science Foundation under grant IST-81-20685 and by the Naval Ocean Systems Center under contract N00123-81-C-1078.
BORIS (Dyer, 1980) would come up with the same interpretation for the same reasons, even though it employs more complex knowledge structures. MACARTHUR (Granger, 1981) would be able to come up with both interpretations, but would always generate the same initial interpretation as the other systems.

These systems have all worked from a basic set of premises which include two types of rules. Content-driven rules are rules which generate inferences on the basis of the understander's specific knowledge of the situation described in the text. Strategy-driven rules are rules which generate inferences or suppress content-driven inferences using extra-textual considerations. In other words, the strategy-driven inferences themselves define the specific context of the situation described in the text. Humans understand stories using an Inference Manager which applies the content-driven rules, as well as the strategy-driven rules specific to their behavior.

The kinds of inferences generated by both types of rules include explanatory inferences. These inferences explain why the stated events occurred. In other words, explanatory inferences add to the context. (For example, goals can be explanatory inferences with respect to intentional actions.) If explanatory inferences add enough context, they can give rise to predictive inferences, expectations about the events which will occur in the text. Plans are examples of predictions from goals. (Predictive inferences always 'look ahead' to account for some new input in the text; reciprocally, postdictive inferences are those plan inferences that look backward from an explanatory goal inference to account for previous events in the text.)

The following set of content-driven rules is used by all readers to understand text:

C1. As a sentence is parsed, try to fit new input/conceptualizations into existing context.

C2. If inferences conflict with specific statements in the text, the specific statements rule out the inferences, which are supplanted by interpretations which do not conflict with the specific statements.

All readers verify understanding by satisfying evaluation metrics, which the Inference Manager applies to the interpretation. There are at least two such evaluation metrics:

M1. All events should be causally driven and related to each other (Cohesion).

M2. Make the least complex interpretation of events possible (Parsimony -- Granger, 1980).

Our experiments have found that many subjects will indeed interpret story [1] as the systems described earlier do. We call people who come up with this interpretation Perseverers. Our data indicate that Perseverers will make inferences as soon as possible when reading text. These early inferences are the context in which further events are interpreted. Such readers persevere with an inference until a contradictory event or concept forces a change of interpretation. These are the Perseverer's strategy-driven rules:

PS1. If there is no previous context, make default inferences.

PS2. Inferences should always be as specific as possible (Wilensky, 1983).

The Perseverer's set of strategy-driven rules is used by all of the systems discussed above. Applying these rules, this is how we hypothesize that a Perseverer would go about interpreting story [1]:

INPUT: Nancy went to see a romantic movie.

Application of C1: This is the first sentence, so there is no previous context to constrain inferences. Several low-level inferences, basically an unstated part of the sentence, are made (e.g.
Nancy not only went to the movie theater, but saw the movie as well).

Application of PS1 and PS2: These default inferences which are made are as specific as possible. The most important for our purposes is the explanatory inference that generates a goal to explain why Nancy went to the movie. This default goal is that Nancy wanted to be entertained by the movie. This explanatory inference gives rise to several predictive inferences, among them, the expectation that Nancy was happier after she saw the movie.

INPUT: She was depressed.

Application of PS1: Because there is previous context, C1 applies. The reader tries to fit this new sentence into the existing context which the predictive inferences set up (Nancy was happier after she saw the movie).

Application of C2: The predictive inference that Nancy was happier must be supplanted with the specific knowledge that Nancy was depressed. The explanatory inference need not be supplanted, but the reader must realize that Nancy's goal of happiness was not fulfilled.

Application of M1 and M2: The most parsimonious explanation of the story is that Nancy's goal of happiness was not fulfilled because her goal of entertainment was not fulfilled. This explanation is also cohesive. Note that M1 is constantly being applied to the two sentences, as connections are searched for. Notice also that there are other interpretations which do not assume that the goal of entertainment wasn't fulfilled -- for example, Nancy may have enjoyed the movie so much that she was depressed because it ended. However, this is not the most parsimonious interpretation.

There is a different initial interpretation which subjects in our experiments made, which is as plausible as the interpretation made by the Perseverers and all of the systems mentioned above. This interpretation is that Nancy was depressed before she saw the movie, and went to the movie to cheer up. We call people who make this interpretation Recencies. Recencies are readers who delay making inferences until enough information is present. A basic rule which drives this strategy is: when more text is available, and the text is ambiguous, leave a loose end (Granger, 1980), because later text will explain earlier events. The most recent inference will then become the context in which the earlier text is interpreted.

To arrive at this alternate interpretation, a Recency must have a different set of strategy-driven rules from that of the Perseverer, which the Inference Manager applies:

RS1. If there is no active context, only low-level goals are to be inferred.

RS2. If there is no more text, inferences should be as specific as possible.

RS3. If there is more text, leave a loose end.

Applying these rules, this is how we hypothesize that a Recency would process story [1]:

INPUT: Nancy went to see a romantic movie.

Application of C1: This is the first sentence, so there is no context to direct inferencing.

Application of RS1 and RS3: Only low-level inferences are made (e.g. Nancy saw the movie). There is more text, so inferences are left as unspecified as possible, and loose ends are left rather than generating explanatory and predictive inferences.

INPUT: She was depressed.

Application of C1: The only existing context is low-level.

Application of RS2: There is no more text, so specific explanatory and predictive inferences must be made from the present concept.
The most important explanatory inference for our purposes is that Nancy has a goal of alleviating her depression, and the predictive inference which follows is that Nancy will do something to alleviate her depression.

Application of C2: The causal relation is ambiguous, so the goal-based predictive inference (that Nancy will do something to alleviate her depression) is maintained, and a search is conducted for something in the text which can serve as this plan. Going to the movie fulfills this predictive inference, and so the final interpretation of the text is that Nancy went to the movie to cheer up.

Application of M1 and M2: The events are related to each other (Causal Cohesion satisfied). The fewest number of inferences were used to relate the events to one another (Parsimony satisfied).

(Although the explanation of rule application appears to have applied the rules in a particular order, we do not have any theories about rule ordering. The rules are probably applied as they become appropriate, but there is not necessarily a linear order for application.)

These two different strategies may seem to describe different sets of component processes altogether. This is deceptive; the processes are strikingly similar. The evaluation metrics which both strategies use are the same; Causal Cohesion must be satisfied for both strategies, although the cause/effect chains are different. Furthermore, both interpretations are parsimonious. There is no evidence that either strategy cannot make particular types of inferences. Both strategies make necessary inferences, such as connecting referents, inferring that Nancy went inside the theater, bought the ticket, and saw the movie (see Seifert et al., 1982, for discussion). We theorize that both strategies also make use of the same knowledge representations. Both strategies generate explanatory inferences and predictive inferences. With both interpretations, the inferences that are made affect the interpretation of other events.

It is the strategy-driven mechanism that drives the ongoing decision to either apply or suppress particular content-based inferences during understanding. Thus, on a given text, the same (potential) content-based inferences will be available regardless of strategy, but, depending on the strategy used, some of those "available" inferences will be generated while others will not. STRATEGIST's Inference Manager can apply the text-interpretation rules of either strategy, and so can derive either interpretation of an ambiguous text.

Following is a brief summary of the steps Perseverers and Recencies take during the processing of both the Nancy text and its reverse:

Text 1F (Forwards): Nancy went to see a romantic movie. She was depressed.

Perseverer understanding steps:
1. Explanatory inference of goal (be entertained)
2. Predictive inferences from goal (see-movie plan will succeed in satisfying entertainment goal)
3. Unsuccessful search for connection between 'plan success' postdiction and 'depression' affect
4. Successful search for connection between alternate 'plan failure' postdiction and 'depression' affect

Recency understanding steps:
1. Leave loose end from first sentence
2. Explanatory inference of goal from 2nd sentence (alleviate depression)
3. Postdictive inference from goal (plan for alleviate-depression)
4. Successful search for connection between plan and event (see movie)

Text 1B (Backwards): Nancy was depressed. She went to see a romantic movie.

Perseverer understanding steps:
1. Explanatory inference of goal (alleviate depression)
2. Predictive inferences from goal (plan for alleviate-depression)
3. Successful search for connection between plan and event (see movie)

Recency understanding steps:
1. Explanatory inference of goal from 2nd sentence (be entertained)
2. Postdictive inferences from goal (see-movie plan will succeed in satisfying entertainment goal)
3. Unsuccessful search for connection between 'plan success' postdiction and 'depression' affect
4. Successful search for connection between alternate 'plan failure' postdiction and 'depression' affect

The two crucial things to note are: (1) precisely the same content inferences are made in the same circumstances by both Recencies and Perseverers; the only difference is when they make them. It is only that difference that leads to the differences in eventual interpretation. (2) Perseverer behavior on text 1F and Recency behavior on text 1B are almost identical to each other; reciprocally, Perseverer behavior on text 1B and Recency behavior on text 1F are almost identical.
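The summaries above suggest a simple factoring: one inference engine, two small strategy-rule sets that differ only in when inferences are licensed. The following is a minimal sketch of that factoring in Python; it is an illustration of the Perseverer/Recency distinction under invented names (understand, default_goal, connect), not the actual STRATEGIST code.

```python
# Hypothetical sketch: one Inference Manager, two strategy-rule sets.
# The content processes (default goal inference and connection search)
# are shared; the strategies differ only in *when* those processes run.

def understand(concepts, strategy, default_goal, connect):
    """concepts: conceptualizations in text order.
    default_goal(c) -> explanatory goal inference for concept c.
    connect(context, c) -> merged inference chain, or None."""
    context, loose_ends = [], []
    for i, concept in enumerate(concepts):
        at_end = (i == len(concepts) - 1)
        if strategy == "recency" and not at_end:
            loose_ends.append(concept)          # RS3: leave a loose end
            continue
        # A Perseverer always infers now (PS1/PS2); a Recency infers only
        # when the text runs out (RS2), working last-in-first-out.
        for c in reversed(loose_ends + [concept]):
            merged = connect(context, c) if context else None
            context.append(merged if merged is not None else default_goal(c))
        loose_ends = []
    return context
```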
3.0 Operation of the STRATEGIST prototype

The following represents actual annotated run-time output of the STRATEGIST program. First we examine STRATEGIST's behavior as a Perseverer. The input to the program is the Conceptual Dependency representation (Schank & Abelson, 1977) of the following story:

[2] Melissa began to cry. Tyler had just asked her to marry him.

:processing as perseverer
:the story is:
(EXPEL (ACTOR MELISSA) (OBJECT TEARS))
(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (TO MELISSA) (FROM TYLER))

:processing next concept:
(EXPEL (ACTOR MELISSA) (OBJECT TEARS))
:attempting inference generation
:inferring from: MELISSA
:no context found
:default inference selected:
(A-HAPPINESS (PLANNER MELISSA))
(DO-X (ACTOR MELISSA))
(MENT-ST (ACTOR MELISSA) (VALUE POS))
:inferring from: TEARS
:no context found
:default inference selected:
(DO-X (ACTOR ?ACTOR0))
(MENT-ST (ACTOR MELISSA) (VALUE NEG))
(EXPEL (ACTOR MELISSA) (OBJECT TEARS))

STRATEGIST searches for existing inferences (i.e. the context which constrains the current inference generation) which might connect with the inferences to be generated. No existing, applicable context is found, so STRATEGIST searches for any inferences associated with TEARS in the context of an EXPEL. It finds a default inference that someone has done something which made Melissa unhappy and caused her to cry tears of sadness. (Note that if this inference later turns out to be incompatible with some subsequent inference, it may be supplanted [Granger, 1980] and an alternative inference used.)

:end of inference generation
:processing next concept:
(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (TO MELISSA) (FROM TYLER))

All inferencing has been completed for the first conceptualization, so STRATEGIST begins inference generation for the next conceptualization. The inferences generated from the first conceptualization provide the context in which the next conceptualization will be interpreted.
:inferring from: TYLER
:no context found
:default inference selected:
(A-HAPPINESS (PLANNER TYLER))
(DO-X (ACTOR TYLER))
(MENT-ST (ACTOR TYLER) (VALUE POS))
:inferring from: PROPOSE-MARRIAGE
:context found
:possible inferences are:
(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (FROM TYLER) (TO MELISSA))
(MTRANS (ACTOR MELISSA) (MOBJ ACCEPT) (FROM MELISSA) (TO TYLER))
(MENT-ST (ACTOR TYLER) (VALUE POS))
(MENT-ST (ACTOR MELISSA) (VALUE POS))

(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (FROM TYLER) (TO MELISSA))
(MTRANS (ACTOR MELISSA) (MOBJ REJECT) (FROM MELISSA) (TO TYLER))
(MENT-ST (ACTOR TYLER) (VALUE NEG))
(MENT-ST (ACTOR MELISSA) (VALUE NEG))

STRATEGIST finds that an applicable context does exist for PROPOSE-MARRIAGE, so it looks at the possible predictive inferences for PROPOSE-MARRIAGE: that Tyler proposes, Melissa accepts, and both are happy, or that Tyler proposes, Melissa rejects his offer, and both are unhappy.

:found matching inference
:resulting merged inference is:
(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (FROM TYLER) (TO MELISSA))
(MTRANS (ACTOR MELISSA) (MOBJ REJECT) (FROM MELISSA) (TO TYLER))
(MENT-ST (ACTOR TYLER) (VALUE NEG))
(MENT-ST (ACTOR MELISSA) (VALUE NEG))
(EXPEL (ACTOR MELISSA) (OBJECT TEARS))

The inference that Melissa rejected Tyler's proposal and she is unhappy connects with the previously made predictive inference that Melissa was crying because someone did something which made her unhappy. The new inference chain which results is then stored as a predictive inference which will be applied to future inference generation.

:end of inference generation
:end of processing
:final representation is:
(A-HAPPINESS (PLANNER MELISSA))
(DO-X (ACTOR MELISSA))
(MENT-ST (ACTOR MELISSA) (VALUE POS))

(A-HAPPINESS (PLANNER TYLER))
(DO-X (ACTOR TYLER))
(MENT-ST (ACTOR TYLER) (VALUE POS))

(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (FROM TYLER) (TO MELISSA))
(MTRANS (ACTOR MELISSA) (MOBJ REJECT) (FROM MELISSA) (TO TYLER))
(MENT-ST (ACTOR TYLER) (VALUE NEG))
(MENT-ST (ACTOR MELISSA) (VALUE NEG))
(EXPEL (ACTOR MELISSA) (OBJECT TEARS))

STRATEGIST finishes processing the story and prints the inferences it has made. The first two inference chains listed above show that both Tyler and Melissa had goals of achieving happiness, but their goals were unfulfilled. The last chain indicates the order of actual events as STRATEGIST inferred them: that Tyler proposed, Melissa said "no", both were unhappy, and Melissa cried.

STRATEGIST can be told to apply the Recency strategy by changing the value of a parameter. This in turn invokes a process which postpones the processing of input until the end of the input is detected. We now examine the operation of STRATEGIST as a Recency:

:processing as recency
:the story is:
(EXPEL (ACTOR MELISSA) (OBJECT TEARS))
(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (TO MELISSA) (FROM TYLER))

:processing next concept:
(EXPEL (ACTOR MELISSA) (OBJECT TEARS))
:leaving loose end
:processing next concept:
(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (TO MELISSA) (FROM TYLER))
:leaving loose end
:end of input story

Behaving as a Recency, STRATEGIST postpones high-level inference generation until no input remains to be processed. It then begins to generate inferences from the input conceptualizations in last-in-first-out order.
:processing previous loose end:
(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (TO MELISSA) (FROM TYLER))
:attempting inference generation
:inferring from: TYLER
:no context found
:default inference selected:
(A-HAPPINESS (PLANNER TYLER))
(DO-X (ACTOR TYLER))

The default inference for TYLER becomes the context for further inferencing.

:inferring from: PROPOSE-MARRIAGE
:context found
:possible inferences are:
(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (FROM TYLER) (TO MELISSA))
(MTRANS (ACTOR MELISSA) (MOBJ ACCEPT) (FROM MELISSA) (TO TYLER))
(MENT-ST (ACTOR TYLER) (VALUE POS))
(MENT-ST (ACTOR MELISSA) (VALUE POS))

(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (FROM TYLER) (TO MELISSA))
(MTRANS (ACTOR MELISSA) (MOBJ REJECT) (FROM MELISSA) (TO TYLER))
(MENT-ST (ACTOR TYLER) (VALUE NEG))
(MENT-ST (ACTOR MELISSA) (VALUE NEG))

:found matching inference
:resulting merged inference is:
(A-HAPPINESS (PLANNER TYLER))
(MTRANS (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (FROM TYLER) (TO MELISSA))
(MTRANS (ACTOR MELISSA) (MOBJ ACCEPT) (FROM MELISSA) (TO TYLER))
(MENT-ST (ACTOR TYLER) (VALUE POS))
(MENT-ST (ACTOR MELISSA) (VALUE POS))

STRATEGIST now finds the same two possible inferences for PROPOSE-MARRIAGE that it did while behaving as a Perseverer, but this time finds that the previously made explanatory inference that Tyler wanted to be happy connects with the possible inference that Melissa accepts Tyler's proposal and both actors are happy. The resulting connected inference chain is stored in memory and serves as the context for later inferencing. Contrast this with STRATEGIST's behavior as a Perseverer, in which it was inferred that Melissa rejected Tyler's proposal and was unhappy because of the existing context that Melissa was crying because of her unhappiness.

:end of inference generation
:processing previous loose end:
(EXPEL (ACTOR MELISSA) (OBJECT TEARS))
:attempting inference generation
:inferring from: MELISSA
:context found
:possible inferences are:
(A-HAPPINESS (PLANNER MELISSA))
(DO-X (ACTOR MELISSA))
(MENT-ST (ACTOR MELISSA) (VALUE POS))
:found matching inference
:end of inference generation :end of processing :final representation is: (A-HAPPINESS (PLANNER TYLER)) (MTRANs (ACTOR TYLER) (MOBJ PROPOSE-MARRIAGE) (FROM TYLER) (TO MELISSA)) (A-HAPPINESS (PLANNER MELISSA)) WrRANs (ACTOR MELISSA) (MOBJ ACCEPT) (FROM MELISSA) (TO TYLER)) (MENT-ST (ACTOR TYLER) (VALUE ~0s)) (MENT-ST (ACTOR MELISSA) (VALUE ~0s)) (EXPEL (ACTOR MELISSA) (OBJECT TEARS)) STRATEGIST ends processing, its final representation indicating that all inferences were connected. The inferred explanation for the events related to STRATEGIST is that both Tyler and Melissa wanted to be happy, Tyler asked Melissa to marry him, Melissa said “yes”, both actors were happy, and Melissa cried. 4.0 Interesting Observations Behavior of Recencies and Perseverers is notable because it supports the theory that the strategies use the same component processes to interpret text. For example, both strategies see a single interpretation of the text. In some cases, when the alternate interpretation is pointed out, subjects will protest that the alternate interpretation is implausible, based on the way events were presented, regardless of the strategy they employed. Our experiments also indicate that readers using the two different strategies will reverse their interpretation of events if the order of events in the text is reversed. Readers using either strategy can be forced to switch to the opposite strategy. For example, the typical experimental method for studying inference decisions presents a text to the subject a sentence at a time, and asks the subject what inferences were made after each sentence. If a Recency is given text one line at a time, so that no cues about the existence of further events can be used, his interpretation will be the same as a Per severer’s, even for those stories which would normally result in a different interpretation. Thus, the data collected will not reveal the different strategies. It is only when subjects are allowed to read a full text, and not forced to make inferences by the experimenter, that the different strategies can be observed. In fact, previous researchers (e.g. Rumelhart, 1981; Seifert, Robertson, & Black, 1982) have used a line-at-a-time methodology for studying when and which inferences will be made. In Rumelhart’s (1981) experiments, subjects read a text a sentence at a time. After each sentence, the subject was asked for his current interpretation of the text. When Rumelhart compared the interpretations of the texts by subjects who 145 read the text a sentence at a time to the interpretations of subjects who read the full text and were not asked for their interpretations until after completing the text, he found that the subjects who read the texts all at once "showed somewhat more variability in their interpretations", which he attributed to "more careless reading on the part of the subjects offering an interpretation only at the end" (Rumelhart, 1981). What happens when one's usual strategy cannot be used? It is possible that the Inference Manager has several sets of rules from which to choose, and that other sets of rules are invoked when the "default" set fails. For example, a Perseverer who doubted his initial interpretation would use the Recency strategy to discover a new interpretation. Our experimental evidence regarding new interpretations in response to requestioning (Granger & Holbrook, 1983) leads us to reject this hypothesis. 
Instead, we theorize that an individual's Inference Manager has only one set of rules, certainly more complicated than those which we have described, with many "if/then/else" alternatives.

One might suspect that the only difference between the two strategies is which inference is chosen as the default inference. The evidence does not support such a theory; if a Recency were making the original default inference after the first concept was presented, but also making default inferences for later concepts and simply choosing the later concepts when defaults conflict, then Recencies would presumably have little trouble recognizing the Perseverer's interpretation as an alternative interpretation. As discussed earlier, this is not the case, nor have reaction-time tests on false recognition items suggested otherwise (Granger & Holbrook, 1983).

Perseverers and Recencies are only two points in a range of strategies. An extreme Perseverer makes inferences based on a preconceived context; this strategy is a kind of paranoid understanding. An extreme Recency will not make inferences, and will not be able to understand text which requires any higher-level inferences. Still other readers exhibit behavior akin to both Recency and Perseverer behavior. We call these readers Deferrers; at present, little is understood about the strategies used by Deferrers.

5.0 Summary and Conclusions

Our prototype model is far from finished. One limitation is that it uses simplistic representations. For example, the representations do not include knowledge about which plans are appropriate for a goal, nor do they include knowledge about the possible conditional outcomes of plans. Questions re-presented to STRATEGIST will not result in another interpretation of events. These are all extensions to the system which are planned for the future.

STRATEGIST is primarily a model of human understanding. There are still many questions to be answered about how people interpret text. Our experiments have yet to reveal all of the different strategies used by readers. We have studied evaluation metrics and processes, as well as some of the rules which apply the processes. Future work will focus on specifying more rules, and more carefully defining and ordering those rules which we have described here. We also hope to study the application of these strategies to longer texts of many different genres. This work will not only involve observation of human subjects; the extended STRATEGIST will be a test-bed which will allow us to study inference processes and new rules which apply those processes.

We have presented evidence for processes of story comprehension which include the set of rules used by most story understanding programs, and an additional set of rules which accounts for interpretations which these programs would not be able to make.

REFERENCES

Graesser, A.C. A question answering method of exploring prose comprehension: an overview. Proceedings of the Third Annual Conference of the Cognitive Science Society, Berkeley, California, 1981.

Granger, R.H. When Expectation Fails: Towards a self-correcting inference system. Proceedings of the First National Conference on Artificial Intelligence, Stanford, California, 1980.

Granger, R.H. Directing and Re-Directing Inference Pursuit: Extra-Textual Influences on Text Interpretation. Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, British Columbia, 1981.

Granger, R.H., & Holbrook, J.K.
Perseverers, Recencies, and Deferrers: new experimental evidence for multiple inference strategies in understanding. Proceedings of the Fifth Annual Conference of the Cognitive Science Society, Rochester, New York, 1983.

Rumelhart, D.E. Understanding understanding (Report No. 100). San Diego, California: Center for Human Information Processing, University of California at San Diego, 1981.

Schank, R.C., & Abelson, R.P. Scripts, Plans, Goals, and Understanding. Hillsdale, New Jersey: Erlbaum, 1977.

Schank, R.C., Collins, G.C., Davis, E., Johnson, P.N., Lytinen, S., & Reiser, B.J. What's the Point? Cognitive Science, 1982, 6, 255-275.

Seifert, C.M., Robertson, S.P., & Black, J.B. On-Line processing of pragmatic inferences (Report No. 15). New Haven, Connecticut: Cognitive Science Program, Yale University, 1982.

Wilensky, R. Understanding goal-based stories (Report No. 140). New Haven, Connecticut: Department of Computer Science, Yale University, 1978.

Wilensky, R. Planning and understanding. Reading, Massachusetts: Addison-Wesley, 1983.
Planning and Goal Interaction: The use of past solutions in present situations

Kristian J. Hammond
Department of Computer Science
Yale University
New Haven, Connecticut 06520

Abstract

This paper presents WOK, a case-based planner that makes use of memory structures based on goal interactions. WOK generates original plans (which take the form of recipes) in the domain of Szechuan cooking, by modifying existing plans that are stored and then retrieved on the basis of the goal interactions that they deal with. The paper suggests an organization and indexing strategy that allows the retrieval and use of plans that overarch sets of goals rather than just individual goal situations. It also demonstrates how episodic knowledge can be used to guide planning and avoid past failures.*

1 Introduction - Anyone can plan

At this point in time, nearly anyone in Artificial Intelligence can write a planner. The basic ideas exist as part of the knowledge we have as a field. The ideas that goals have plans associated with them; that some plans are stored as sets of coordinated steps or sub-plans; and that the achievement of most goals is in fact the achievement of many sub-goals all exist as givens in the field. What does not exist is the knowledge of how to deal with planning situations in which multiple interacting goals have to be planned for. While much work has been done in the identification of goal interactions such as CONFLICT, COMPETITION and CONCORD [7], little to date has been done to incorporate these ideas into the knowledge system of a planner.

The following, then, is a description of WOK (Well Organized Knowledge), a planner that makes use of goal interaction knowledge to categorize and organize plans in terms of the interactions that they deal with. WOK generates original plans in the domain of Szechuan cooking, by modifying existing plans that are stored and then retrieved on the basis of goal interactions specific to the domain (e.g. CONFLICTING TASTES:ONE-DOMINATES and CONTRASTING TASTES:EQUAL-FOOTING) rather than on the basis of specific contextual goals that the plans meet (e.g. the goal to include chicken or the goal to avoid hot tastes). WOK's processing is guided by the top-down application of its memories of past successes and failures, stored in terms of the goal interactions that they deal with.

2 Planners and Plan Interaction

The problem of planning has been worked on since the earliest days of AI ([3], [1], [4]), but most of the stress has been on systems that plan for a single goal. Such systems are concerned primarily with means/ends analysis, reasoning about the sub-plans and sub-goals that must be accomplished in order for the top level goals to be satisfied. Some such systems have included critics or constraint mechanisms to deal with goal interactions such as goal conflict and goal subsumption [4] or include explicit references to particular interactions in their rule bases [6]. In the case of Sacerdoti's NOAH, the planner had to create a faulty plan before applying a special-case critic to correct the fault, while in the case of Shortliffe's MYCIN, each case of interaction had to be coded into a specific rule.

*This report describes work done at the Department of Computer Science at Yale University. It was supported in part by National Science Foundation grant IST 8120451.
More recently, models have been developed that deal not with single goals and plans, but with sets of goals that have to be planned for in terms of their possible interactions ([8], [2]). Bob Wilensky has proposed a model that expands a simple planner to deal with meta-level goals that arise when goal interactions lead to difficulties. When a problem having to do with some goal interaction is noticed, such as a conflict between the goal to get the newspaper on a rainy day and the goal to stay dry, the resolution of the conflict itself is made into a goal and the planner goes to work on trying to satisfy it. Hayes-Roth and Hayes-Roth, on the other hand, propose a system in which plan interaction is dealt with by starting with a plan and opportunistically performing plans for the satisfaction of active goals as they arise.

Wilensky's model, while it does try to integrate the knowledge of meta-plans with the lower level knowledge concerning simple plans for single goals, still assumes that the planner will first generate individual plans for each goal and then notice the interactions. The planner then deals with meta-level goal interactions (such as the conflict between the paper and the rain mentioned above) by performing a simulation of the various plans for each of the individual goals in the interaction, until it finds a combination that satisfies all of the goals in that interaction. The Hayes-Roths' model addresses positive plan interaction, such as the piggy-backing of plans when the outcome of one plan aids in the satisfaction of some other active goal (as in the performance of one errand when another brought the planner to an advantageous location). It is not capable, however, of drawing on top-down knowledge to deal with the negative interactions such as conflict and competition.

In the main, these systems make use of a search through the space of combinations of individual plan interactions to find a single complex plan that will cover all or most of the goals that the system is trying to satisfy. The final plan is found after either an examination of many of the possible plan combinations, or by simply stumbling over an appropriate plan in the course of execution. While Wilensky does make use of goal interaction information to spawn new meta-goals for his planner to work on, he does not make use of this information to search for plans to achieve his original goals.

3 WOK - A different sort of planner

3.1 Planning for rather than against interactions

The WOK project takes a somewhat different approach to planning. The program makes use of its knowledge of goal interactions to organize plans, and tries to make use of plans that overarch and satisfy a set of goals rather than find a plan for each individual goal and try to combine them. WOK is designed to be an interactive program that is given a set of constraints by the user and then provides a recipe that meets them. The constraints come in the form of requests for certain ingredients (i.e. chicken, water chestnuts, scallions), particular tastes (i.e. hot, spicy, bland), and textures (i.e. crunchy, gelatinous, chewy). The output is a natural language description of the dish and the recipe that has to be followed to make it. WOK functions by finding an existing plan or recipe that in some sense fits its current set of goals, and modifying it to provide a plan that precisely matches those goals. WOK does not ever try to build up a recipe from scratch. It instead is reminded of old recipes that deal with situations analogous to the one it is working on and alters them to fit its current needs.

3.2 Knowledge and the organization of knowledge

WOK begins with a knowledge base of recipes; descriptions of ingredients; descriptions of physical scenes that correspond to preparation and cooking steps such as CHOP or STIR-FRY; as well as a set of primitive alteration steps that allow it to ADD, REPLACE and REMOVE ingredients from a recipe. Recipes take the form of sequential orderings of preparation and cooking steps which include listings of all ingredients used in the plan. Beyond this, the initial recipes include information pointing out the interesting tastes and textures that result from the execution of the plan (this normally includes the major ingredients as well as any strong spices) and information about any interesting interactions of tastes that occur in the recipe (i.e. the contrast between the savory taste of pork and the sweet taste of hoisin in PORK SHREDS WITH HOISIN, or the conflict that is dealt with between the licorice taste of star anise and the salty taste of the cooked-down soy sauce in ANISE CHICKEN).

Recipes stored in memory are indexed in three different ways. Each is indexed under the important foods and tastes that are included in the recipe, making it possible to retrieve a recipe for an individual ingredient or taste, such as CHICKEN or GINGER. Recipes are also stored under the individual preparation and cooking steps that they make use of. This makes it possible to retrieve recipes on the basis of some particular step that might be important in the current situation, such as finding a recipe that includes the MIX and FORM steps necessary to any ground meat dish.
These two means of indexing, while useful in the case of single-goal requests such as those for just chicken or just ground beef, are of little value when a set of taste and ingredient goals are to be satisfied. Once the system has to deal with goals in interaction, it is important that it be able to retrieve plans on the basis of something other than a single ingredient or step. Given a request for a dish that includes chicken and oranges, for example, it is less important to look at all of the chicken dishes or all of the recipes that have oranges, than it is to find a recipe that has already confronted the problem of combining a savory and sweet taste of equal strength. Because of this, all recipes are indexed under the taste interactions that they deal with. Thus PORK SHREDS WITH HOISIN is stored under a structure that contains knowledge about contrasting tastes where both tastes are of equal strength (CONTRASTING TASTE:EQUAL-FOOTING), because it deals with a situation in which that interaction has been incorporated into a successful plan, in that it includes both the savory PORK and the sweet HOISIN. ANISE CHICKEN is stored under a similar structure for conflicting tastes where one taste overpowers the other (CONFLICTING TASTE:ONE-DOMINATES), because it deals with this interaction by including both the salty taste of SOY-SAUCE and the licorice taste of STAR-ANISE. This indexing is by far the most important of the three when dealing with interacting goals, in that it allows the use of recipes that solve problems which are analogous to a current processing situation rather than just those that include the same ingredients or tastes.
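The three-way index just described can be sketched directly as data structures. This is an illustration under assumptions: the dictionary layout and the field names are invented for the example, not taken from WOK's implementation.

```python
# Hypothetical sketch of WOK's three-way recipe index.

from collections import defaultdict

by_ingredient_or_taste = defaultdict(list)   # e.g. "chicken", "ginger"
by_step = defaultdict(list)                  # e.g. "MIX", "FORM", "STIR-FRY"
by_interaction = defaultdict(list)           # the most important index

def index_recipe(recipe):
    for item in recipe["important_tastes"] + recipe["ingredients"]:
        by_ingredient_or_taste[item].append(recipe["name"])
    for step in recipe["steps"]:
        by_step[step].append(recipe["name"])
    for interaction in recipe["interactions"]:
        by_interaction[interaction].append(recipe["name"])

index_recipe({
    "name": "PORK SHREDS WITH HOISIN",
    "ingredients": ["pork", "scallion", "soy sauce", "hoisin"],
    "important_tastes": ["savory", "sweet"],
    "steps": ["CHOP", "STIR-FRY"],
    "interactions": [("contrasting-taste", "equal-footing")],
})
```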
3.3 The structure of interactions

WOK uses the categories defined by taste interactions as knowledge structures under which are stored plans that have been developed to cope with the interactions themselves. Rather than starting with a set of goals and planning for each until some goal interaction blocks normal processing, WOK begins by trying to identify the interactions between the goals it is planning for and searching for plans designed to deal with those interactions. The knowledge structures used are domain-specific versions of Schank's Thematic Organization Packets or TOPs [5], and owe a great deal to the categorization proposed by Wilensky [7]. The categories themselves are made up of two components. First there is the relationship between the tastes alone, which includes interactions such as CONTRASTING-TASTE, CONFLICTING-TASTE and AGREEING-TASTE. There also is the effect of the interaction, which includes ONE-DOMINATES, EQUAL-FOOTING and BLENDING.

WOK makes use of nine separate categories of interaction and effect. The more important of these include:

o CONTRASTING TASTE : ONE DOMINATES (ex. hot and savory)
o CONTRASTING TASTE : EQUAL FOOTING (ex. sweet and sour)
o CONFLICTING TASTES : ONE DOMINATES (ex. soy sauce and hoisin)
o CONFLICTING TASTES : EQUAL FOOTING (ex. garlic and hoisin)
o DIFFERENT TASTES : BLEND (ex. garlic and nuts)
o DIFFERENT TASTES : BALANCED TASTES (ex. pork and tree ears)

Each of these structures organizes four sorts of planning information relevant to the particular interaction:

o Specific plans that deal with the interaction (i.e. particular recipes that handle taste conflicts or instances of an overabundance of agreeing tastes).

o General strategies for dealing with the interaction (i.e. add a contrasting taste to undercut a conflict between two others, spice up a basically homogeneous dish that is weak, or add a dull-tasting buffer to a dish that has become too strong).

o Indexing information about what aspects of the plan are important to the interaction and should thus be used in storage and retrieval (i.e. index by the dominating taste in the interaction or the presence of other interactions).

o Instances of failures of plans and strategies that are used to avoid similar mistakes (i.e. recipes that simply didn't taste particularly good).
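One possible rendering of such a structure in code follows. It is a sketch under assumptions: the class, its field names, and the two sample entries are invented for illustration, though the recipe and plan names are taken from the examples in this paper.

```python
# Hypothetical sketch of a WOK-style TOP: a goal-interaction category
# organizing the four sorts of planning information listed above.

from dataclasses import dataclass, field

@dataclass
class InteractionTOP:
    name: str                                       # the interaction category
    recipes: list = field(default_factory=list)     # specific plans (recipes)
    strategies: list = field(default_factory=list)  # general strategies
    index_keys: list = field(default_factory=list)  # what to store/retrieve by
    failures: list = field(default_factory=list)    # failed episodes to avoid

tops = {
    ("contrasting-taste", "equal-footing"): InteractionTOP(
        name="CONTRASTING-TASTE:EQUAL-FOOTING",
        recipes=["PORK SHREDS WITH HOISIN SAUCE"],
        index_keys=["both tastes"],       # both tastes matter for indexing
    ),
    ("contrasting-tastes", "one-dominates"): InteractionTOP(
        name="CONTRASTING-TASTES:ONE-DOMINATES",
        strategies=["STAND-ALONE", "REPLACE-OB1"],
        index_keys=["dominant taste"],    # the dominant taste must match
        failures=["ORANGE AND OLIVE SALAD"],
    ),
}
```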
4 WOK - An example

4.1 The basic algorithm

The processing in WOK can be broken down into four major steps. First, the system must sort the goals given to it by the user and identify the taste interactions that will be useful to it in finding a plan to deal with that combination of goals. Second, it uses the abstract interaction and the particulars of the goals to find a plan that overarches a major segment of the goals as well as the effects of their interaction. Third, the plan is modified to fit all of the current user goals. Fourth, the interactions of new tastes that have resulted from the modifications are checked for their similarity to past failures, and are changed if necessary. Once this is done, the new recipe is indexed into the existing data base in terms of the new interactions that are dealt with by the plan.

What each of these steps actually means can best be seen through a look at excerpts from one example of the system trying to build a recipe for the user request of a dish with CHICKEN and MANDARIN ORANGES. In the rest of this paper, boldfaced type will be actual output from the program itself.

4.2 Sorting goals and identifying interactions

The goals that the system is given are sorted by their relative importance in the present data base. Given that each recipe begins with knowledge of which tastes and ingredients are important, as well as which participate in the interactions that the recipe handles, it is easy to assess the relative importance of two ingredients or tastes by a comparison of the extent to which they have been important in the past. The importance of a taste or ingredient then is a dynamic value that is a function of how many recipes it is included in, how often it is considered important to those recipes by itself, and how often it is important when interacting with other tastes in those recipes. This last aspect of a taste or ingredient's importance tends to be the most significant, in that most of the indexing is done in terms of interactions, and a taste that has many interesting interactions with others will provide a richer set of indexing possibilities. For example, the hot taste of RED-PEPPER interacts with many more tastes in the recipes that the system begins with than the taste of CHICKEN. A user goal to have RED PEPPER then would be considered more important than one to have CHICKEN.

Once the goals are sorted, the most important ones are compared in a pairwise fashion, and the interaction between them is found. This is done by a simple table look-up of the combination of tastes associated with the goals. In the example of the chicken and oranges, the interaction is CONTRASTING-TASTE:EQUAL-FOOTING, which is to say, the tastes contrast, and they are of the same intensity. This memory structure is then searched for a possible plan, because it is under here that plans dealing with the sort of interaction exemplified by the relationship between the chicken and orange are stored.

The goals are ranked as follows:
Goal to include chicken
Goal to include mandarin orange
Searching for recipes including direct relations between ingredients and tastes.
Searching CONTRASTING-TASTE:EQUAL-FOOTING
I have found PORK SHREDS WITH HOISIN SAUCE.
The recipe includes: Pork, scallion, soy sauce and hoisin.
The recipe was chosen because the relation in which the savory taste of the pork contrasts with the sweet taste of hoisin sauce is similar to the relation between the two goals to include chicken and to include mandarin orange.

4.3 Indexing and search

Once an interaction is identified, it guides the processing that follows. In the case of CONTRASTING-TASTE:EQUAL-FOOTING, the structure itself knows that both of the tastes are important to use in indexing, and that acceptable recipes include those that match both tastes in the current interaction or those that match one ingredient of the interaction but not both tastes. This contrasts with the structure CONTRASTING-TASTE:ONE-DOMINATES, which knows that the dominant taste is more important than the dominated and that this must be matched for a recipe to be used. In the case of the CHICKEN and ORANGE example, the recipe that is found, PORK SHREDS WITH HOISIN SAUCE, is indexed by the tastes alone, savory and sweet, under the interaction CONTRASTING-TASTE:EQUAL-FOOTING.

4.4 Modification of plans

Once a recipe is found, it has to be modified to cover the goals of the original request. As in the case of this example, the recipe may not include any of the ingredients requested, though it does handle the effect of the interaction between them. Given that the system knows how it found the recipe, it also knows what the mapping between the original goals and the existing plan is. The fact that it found the recipe because of the similarity between the interaction of the PORK and HOISIN and that of the CHICKEN and ORANGE gives it the information that the CHICKEN and ORANGE map directly onto the PORK and HOISIN. This information allows it to know to do a simple replacement in this case.
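The retrieve-then-map step just described can be sketched as follows. Again, this is an illustrative reconstruction with invented names (Recipe, build_recipe, and the parameter functions), not WOK's actual procedure.

```python
# Hypothetical sketch: identify the interaction between the two most
# important goals, fetch a recipe stored under that interaction, and
# map each goal onto the stored ingredient playing the same role
# (chicken -> pork, mandarin orange -> hoisin).

from dataclasses import dataclass

@dataclass
class Recipe:
    name: str
    ingredients: list
    interacting: list            # the ingredients carrying the interaction

def build_recipe(goals, importance, taste_of, interaction_of, tops):
    """goals: requested ingredients; importance(g) -> dynamic rank;
    interaction_of(t1, t2) -> interaction key; tops: key -> recipes."""
    g1, g2 = sorted(goals, key=importance, reverse=True)[:2]
    key = interaction_of(taste_of[g1], taste_of[g2])
    old = tops[key][0]                    # recipe indexed under the interaction
    new_ingredients = list(old.ingredients)
    # The retrieval itself justifies the mapping: the goals replace the
    # ingredients playing the same roles in the interaction.
    for goal, stored in zip((g1, g2), old.interacting):
        new_ingredients[new_ingredients.index(stored)] = goal
    return Recipe(f"{g1} with {g2}", new_ingredients, [g1, g2])
```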
In cases where a new ingredient does not satisfy all of the important goals of the one being removed, the system has to add a further ingredient that will satisfy the now-active goal.

The following goals still have to be satisfied:
Goal to include chicken
Goal to include mandarin orange
Mapping chicken to pork...Replacing
Mapping mandarin orange to hoisin sauce...Replacing.

4.5 New interactions and avoiding past failures

Once a new recipe is built, the system examines the new taste interactions that have developed, in an effort to avoid any mistakes that it has made in the past. It does this by looking at the interactions between the important tastes in what remains of the original recipe (as stated before, these are explicitly noted in the recipe itself) and the important tastes (in the global sense of having been important in past recipes) that have been added. The purpose of this step is twofold. First, the new interactions have to be understood by the system so that it can index the new recipe in terms of them. More importantly for the recipe itself, the system is also making sure that no new interaction has come about that is similar to any past failure. Failures are actual recipes that do not succeed in dealing with the interactions they include. If such a failure is found (by going through the same sort of search that was used to find the original recipe, but with the interactions in the current recipe rather than the user goals), then the interaction associated with it is again used to provide a new plan to deal with the problem. In this example, a problem is discovered with the new ORANGE and the old SCALLION, and the resulting interaction, CONTRASTING-TASTES:ONE-DOMINATES, provides an alternate plan of finding a new dominating taste for the ORANGE. The new taste is found by looking for a contrasting taste in any recipe, but the search is guided by the context of the current situation.

Thinking about the relations between the following tastes:
The savory taste of the chicken.
The sweet taste of the mandarin orange.
The fresh, vegetable taste of the scallion.
There is a problem with the relationship between the Mandarin orange and the Scallion.
I am reminded of the failure in the ORANGE AND OLIVE SALAD.
In the ORANGE AND OLIVE SALAD the relationship between the Onion and the Mandarin orange was considered to be a failure.
In this recipe - The fresh, vegetable taste of the onion contrasts with and dominates the sweet taste of the mandarin orange.
The plan used in this recipe - STAND-ALONE - is the same as the plan in the failed recipe.
Trying general plans
From TOP CONTRASTING-TASTES:ONE-DOMINATES
TOP has 2 plans: STAND-ALONE and REPLACE-OB1
Plan STAND-ALONE has already failed in the recipe for ORANGE AND OLIVE SALAD.
Avoiding past failure - Trying REPLACE-OB1 - This plan is to replace the dominating taste with another. The relation is preserved, with a new taste.
Plan is to replace the Scallion.
Looking for item in TOP CONTRASTING-TASTES:ONE-DOMINATES
Found a possible contrast - Red pepper.
In CHICKEN WITH PEANUTS - The hot and spicy taste of the red pepper contrasts with and dominates the savory taste of the chicken.
Replacing the Scallion with Red pepper.
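A sketch of the failure check itself follows, under the same caveats: the function and the attributes of the pair and failure objects are simplified assumptions, not WOK's data structures.

```python
# Hypothetical sketch of step four: compare each new taste interaction
# in the modified recipe against stored failure episodes, and switch
# to an untried plan from the relevant TOP when a match is found.

def check_new_interactions(recipe, interactions_in, failures, tops):
    """interactions_in(recipe) -> new taste-interaction pairs, each with
    an .interaction key, a .plan in use, and an apply(plan) method."""
    for pair in interactions_in(recipe):          # e.g. (orange, scallion)
        key = pair.interaction                    # e.g. CONTRASTING-TASTES:
        reminded = [f for f in failures           #      ONE-DOMINATES
                    if f.interaction == key and f.plan == pair.plan]
        if not reminded:
            continue                              # no similar past failure
        for plan in tops[key].strategies:         # try the TOP's other plans
            if all(f.plan != plan for f in reminded):
                pair.apply(plan)                  # e.g. REPLACE-OB1: swap the
                break                             # dominating taste
    return recipe
```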
Once all new interactions have been validated, the new recipe is indexed in the data base in terms of those interactions.

5 Conclusions

By organizing plans by the goal interactions that they deal with, the WOK planner is able to access and use complex plans that overarch a set of goals, rather than just meet single goals. The memory structures related to these interactions are able to guide the planning process in many ways. They provide instances of both successful plans and failures, general strategies for dealing with the related goal interaction, and indexing information which guides search and the later mapping between current problems and past solutions. This information allows the system to find appropriate plans on the basis of goal interactions, modify them to meet current constraints, avoid repeating past failures, and make use of a number of general planning strategies that are organized as applicable to the specific interaction. The system avoids the use of special-purpose critics and time-consuming simulations by anticipating the interactions between goals and finding an overarching plan, rather than waiting for the problems of interactions to interrupt processing.

References

[1] Fikes, R., and Nilsson, N.J. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence 2, 1971.
[2] Hayes-Roth, B., and Hayes-Roth, F. A Cognitive Model of Planning. Cognitive Science 3(4), 1979.
[3] Newell, A., Shaw, J.C., and Simon, H.A. Report on a general problem-solving program. In Proceedings of the International Conference on Information Processing. UNESCO, UNESCO House, Paris, 1959.
[4] Sacerdoti, E.D. A structure for plans and behavior. Technical Report 109, SRI Artificial Intelligence Center, 1975.
[5] Schank, R.C. Dynamic Memory: A theory of learning in computers and people. Cambridge University Press, 1982.
[6] Shortliffe, E.H. Computer-based medical consultations: MYCIN. American Elsevier, New York, 1976.
[7] Wilensky, R. Understanding Goal-Based Stories. PhD thesis, Yale University, 1978. Research Report #140.
[8] Wilensky, R. META-PLANNING. Technical Report M80/33, UCB College of Engineering, August 1980.
MODELING HUMAN KNOWLEDGE OF ROUTES: PARTIAL KNOWLEDGE AND INDIVIDUAL VARIATION

Benjamin Kuipers
Department of Mathematics
Tufts University
Medford, Massachusetts 02155

Abstract

Commonsense knowledge of large-scale space (the cognitive map) includes several different types of knowledge: of sensorimotor, topological, and metrical spatial relationships. Sensorimotor knowledge is defined as that knowledge which is necessary to reconstruct a route from memory after travel along that route in a large-scale environment. A representation for route knowledge is proposed with sufficiently robust performance properties to be useful as commonsense knowledge. Its states of partial knowledge are shown to correspond to those observed in humans. We also define and explore the space of all possible variants of this representation, to derive empirical predictions about the nature of individual variation.

1. Introduction

This paper presents part of a computational theory of spatial knowledge. We focus our attention on knowledge of large-scale space: space whose structure cannot be perceived from a single vantage point, and which is learned by integrating local observations gathered over time. There are three major categories of spatial knowledge:

Sensorimotor Procedures: Knowledge of a set of actions, and their sequence, required to travel from one place to another.

Topological Relations: Knowledge of non-metrical properties of the external environment, such as containment, connectivity, and order.

Metrical Relations: Knowledge of, and the ability to manipulate, magnitudes such as distance, direction, and relative position.

This paper concentrates on spatial knowledge of the first type. The other types of knowledge in the cognitive map also exhibit interesting behavior and structure, but they are discussed elsewhere [Kuipers 1978, 1982]. The structure of the representation is motivated by empirical observations of the states of partial knowledge exhibited by human spatial knowledge. Knowledge of large-scale space is an attractive, accessible domain for the study of knowledge representations because changes caused by acquiring and assimilating new observations must take place slowly, constrained by the speed of physical travel. The intermediate states of knowledge are particularly long-lived and visible, compared to more rapid cognitive processes such as vision and language understanding.

2. The Problem: Sensorimotor Knowledge of Space

The most fundamental information processing problem solved by the cognitive map is to store a description of a route travelled in the environment so that it can be reconstructed later. Even with this apparently simple kind of spatial knowledge, there are interesting states of partial knowledge that reveal the structure of the representation. One of the most interesting is indicated by the familiar response, "I could take you there, but I can't tell you how!" Physical presence in the environment makes an important difference to the way information can be retrieved. Lynch (1960) observed styles of navigation that depend on the environment to evoke the next action. Similarly, in a study of experienced taxi drivers, Chase (1982) found that routes selected while travelling in the environment were better than routes selected during a laboratory interview.

Another type of partial knowledge is the asymmetrical storage of apparently symmetrical spatial relationships.
Piaget, Inhelder, and Szeminska (1960) observed that young children are frequently able to follow a route correctly from beginning to end, but are unable to travel the same route in reverse, or to start it in the middle. Hazen, Lockman, and Pick (1978) studied this effect in detail, and found that their young subjects were able to travel a well-learned route in reverse, but while they could anticipate up-coming landmarks in the original direction, they could not do so in the reverse direction.

We can formalize the "I could take you there, but I can't tell you how!" effect by observing that the human knowledge representation is able to express a state of knowledge capable of solving Problem 1 but not Problem 2, for the same route.

Problem 1: Assimilate knowledge of a route by travel in the environment, then reconstruct the route from memory while traveling in the environment.

Problem 2: Assimilate knowledge of a route by travel in the environment, then reconstruct the route from memory in the absence of the environment.

In order to specify these problems precisely, we must define the inputs received during assimilation and the outputs provided during recall. We will describe the sensorimotor world of the traveller in terms of two types of objects, views and actions. A view is defined as the sensory image received by the observer at a particular point, and may include non-visual components. The internal structure of a view, while undoubtedly complex, is not considered at this level of detail. The only operation allowed on views is comparison for equality. An action is defined as a motor operation that changes the current view, presumably by changing the location or orientation of the traveller. For the present purposes, the only operation allowed is comparison for equality. Views and actions are egocentric descriptions of sensorimotor experience, rather than descriptions of fixed features of an external environment. The actual environment, and the sensory system for observing it, are assumed to be rich enough so that each different position-orientation pair corresponds to a distinguishable view. When this assumption is false, as for the blind traveller or a stranger lost in the desert, we can model the consequences by positing some frequency of false positive matches between views. The observations during travel, then, can be defined as a temporal sequence of sensorimotor experiences consisting of alternating views and actions:

    V0 A1 V1 ... An Vn

Reproduction of the sequence can be accomplished either by performing the actions to travel the correct route in the environment, or by recalling the views and actions from memory and expressing them verbally.

3. A Representation for Sensorimotor Routines

Knowledge of sensorimotor routines is represented in terms of two types of associative links [Kuipers 1979a, 1979b]. The link V->A has the meaning that when the current view is V, the current action should be A to follow the route. The link (V A)->V' has the meaning that if the action A is taken in the context of view V, the result will be view V'. A sequence of observations during travel corresponds to a set of associative links of the two different types, as shown in Figure 1. If the route description consists of a complete set of both types of links, then the route can be reproduced in the absence of the environment (Figure 1).
The states of partial knowledge of the representation consist of exactly the subsets of the complete set of links. The full description of a route with n actions consists of 2n links, so there are 2^(2n) possible partial descriptions. Of course, some are behaviorally more distinctive than others.

The sequence of observations:
    V0 A1 V1 ... Vn-1 An Vn
The set of associative links:
    V0 -> A1        (V0 A1) -> V1
    ...
    (Vn-1 An) -> Vn

Figure 1. The sensorimotor routine representation allows a sequence of views and actions to be reconstructed from links of type V->A and links of type (V A)->V'. The first V->A link allows the first action A1 to be retrieved, given the starting point V0. The (V0 A1)->V1 link allows the predicted result of that action to be retrieved from memory. Another link of the first type, V1->A2, can then be retrieved to specify the next action, and so on, to the end of the route.

If the route description consists entirely of links of the V->A type, the route can still be followed, but only while travelling physically in the environment. The environment itself contains information equivalent to the link (V A)->V', since it will always reveal the result of performing an action in a particular context. Thus, this representation for sensorimotor routines is capable of expressing a state of knowledge that solves Problem 1 but not Problem 2, as required. It also exhibits the directional asymmetry of route descriptions observed in young children.

There is a simple learning theory that explains why routes consisting of V->A links but not (V A)->V' links are likely to arise. Consider the two rules:

(R1) If working memory holds the current view V and the current action A, then store the link V->A in long-term memory.

(R2) If working memory holds a previous view V, the action A taken there, and the resulting view V', then store the link (V A)->V' in long-term memory.

The working memory load required for R2 is clearly greater than that for R1, and it is required during the time needed to carry out action A. In travel through a large-scale space, actions can take many seconds, greatly increasing the probability of an internal or external interruption that would destroy the contents of working memory, preventing rule R2 from succeeding. R1, of course, is much less vulnerable to interruptions and resource limitations.

Since the representation supports assimilation of individual links into a partial route description, leading incrementally to a complete description, we say it supports easy learning; since it supports successful travel even when some links are unavailable, we say it supports graceful degradation of performance under resource limitations. Both of these are aspects of the robust behavior we expect of commonsense knowledge.

The range of individual variation can be expressed naturally within this representation. We expect that individuals, with different collections of cognitive processes competing for the use of working memory, will vary considerably in the frequency with which rule R2 can run to completion. A second dimension of variation must be the individual choice of "imageable landmarks" (Lynch, 1960), which will affect the selection of views involved in links and their density along a particular route. A third dimension of variation is the selection of types of associative links used to represent the route. This dimension is considered in the next section.
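The retrieval cycle of Figure 1 is simple enough to sketch directly (a minimal rendering of my own; the link stores and view/action names are assumptions, not Kuipers' code): V->A links select the next action, and (V A)->V' links supply the predicted result when the environment is absent.

```python
# Route reconstruction from the two link types (Problem 2).
route_links = {          # V->A: given the current view, the action to take
    "V0": "A1", "V1": "A2",
}
result_links = {         # (V A)->V': predicted result of acting in a context
    ("V0", "A1"): "V1", ("V1", "A2"): "V2",
}

def reconstruct(start_view):
    """Replay the route purely from memory, in the absence of the world."""
    view, trace = start_view, [start_view]
    while view in route_links:
        action = route_links[view]              # retrieve V->A
        trace.append(action)
        if (view, action) not in result_links:  # missing (V A)->V' link:
            break                               # only physical travel works
        view = result_links[(view, action)]     # retrieve (V A)->V'
        trace.append(view)
    return trace

print(reconstruct("V0"))   # ['V0', 'A1', 'V1', 'A2', 'V2']
```

Deleting the result_links entries leaves a description that still supports travel in the environment (which reveals each action's result itself) but not recall from memory: exactly the "I could take you there, but I can't tell you how!" state.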
4. The Set of Possible Variants

The representation proposed above for knowledge of routines is expressed in terms of a particular pair of associative links among views and actions. It meets the performance criteria we defined, and exhibits states of partial knowledge corresponding to our observations of human behavior. Assimilation requires a relatively small "window" onto the sequence of observations during travel: at most two views, V and V', and the action A that leads from one to the other. The question remains: are there other solutions to the same problem that meet these constraints?

In order to answer this question, we must explore the space of all possible solutions to the route representation problem. We only consider combinations of associative links involving the three adjacent observations V, A, and V', since the assimilation of more complex links imposes prohibitive working memory loads. Each distinct type of link involving these three observations can be represented by a triple of three integers, indicating that the corresponding element in <V A V'> is:

0 = not involved in the link;
1 = retrieved by the link;
2 = acts as a retrieval key for the link.

Thus, for example, the link V->A is encoded as <2 1 0>, and (V A)->V' is encoded as <2 2 1>. There are twenty-seven possible cases. After removing useless or trivial ones such as <0 0 0> or <2 2 2>, nine potentially useful links remain. For each link, we want to know: (1) what combinations of link types can be used to reconstruct a route from memory; (2) whether assimilation of the link requires a working memory load to be preserved while the action A is being performed. Table 1 presents the possible links and their properties with respect to these questions.

     <V A V'>   Route Reconstruction     Working Memory Load During A
1.   <2 1 0>    1+2; 1+4;                no
2.   <2 0 1>    1+2; 2+6; 2+7; 2+8;      yes
3.   <2 1 1>    3;                       yes
4.   <2 2 1>    1+4;                     yes
5.   <1 0 2>    none                     yes
6.   <0 1 2>    2+6;                     yes
7.   <1 1 2>    2+7;                     yes
8.   <2 1 2>    2+8;                     yes
9.   <1 2 2>    none                     yes

Table 1. The possible links for encoding information about a route. There are six viable combinations of one or two link types that will support reconstruction of a route from memory. Link 1 is the only link type that can be stored without a long load on working memory. The route representation previously discussed is coded here as 1+4.

The combinations of link types listed are not the only combinations that might be found in a person capable of following routes from memory, but they are the minimal ones. For example, link type 3 can support route reconstruction by itself, but the combination 1+3 is more robust. Similarly, combination 1+4 has the robustness of the V->A link along with the fact that the (V A)->V' links can be interpreted as a context-independent assertion about the result of an action in the environment as well as an instruction in the current route procedure.

As we observed above, it is clear that some people occasionally exhibit a state of knowledge that corresponds to link type 1 (V->A), and to none of the other links. Since link 1 is the only link whose assimilation does not impose working memory overhead during travel, there are also reasons of computational robustness for using that type of link.

This tabulation of possible link types and their combinations allows us to make empirical predictions about how each apparently viable combination of links would manifest itself in behavior (Table 2).
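Before turning to those predictions, the enumeration behind Table 1 can be sketched in a few lines (my reconstruction; the filters expressing "useless or trivial" are my reading of the table, since the text does not state them explicitly):

```python
# A link type is a triple over <V A V'>: 0 = not involved,
# 1 = retrieved, 2 = acts as a retrieval key.
from itertools import product

triples = list(product((0, 1, 2), repeat=3))        # 27 possible cases
useful = [t for t in triples if 1 in t and 2 in t]  # must key on something
                                                    # and retrieve something
# The nine links of Table 1 are the useful ones not keyed on the action
# alone -- apparently the remaining "useless or trivial" cases dropped:
kept = [t for t in useful
        if not (t[1] == 2 and 2 not in (t[0], t[2]))]
print(len(triples), len(useful), len(kept))         # 27 12 9
```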
Links                      Predicted Behavior
1+2  V->A, V->V'           Occasional recall of A without V'.
                           Occasional recall of all Vs with no As.
1+3  V->A, V->(A V')       Occasional recall of A without V'.
                           No recall of V' without A.
1+4  V->A, (V A)->V'       Occasional recall of A without V'.
                           No recall of V' without A.
3    V->(A V')             No recall of A without V'.
                           Working memory vulnerable during A.
2+6  V->V', V'->A          Occasional recall of all Vs with no As.
                           "Backward" recall of A given V'.
                           Working memory vulnerable during A.
                           Reference to action A without context V.
2+7  V->V', V'->(V A)      Occasional recall of all Vs with no As.
                           Working memory vulnerable during A.
2+8  V->V', (V V')->A      Occasional recall of all Vs with no As.
                           Working memory vulnerable during A.

Table 2. The different theoretically viable combinations of link types that support route reconstruction, and some of their behavioral consequences. All of the minimal combinations are presented here, but others, such as the 1+3 shown here, can be produced by adding links to a minimal set.

Rather than providing us with a definitive answer as to why one combination of links is used and the others rejected, this tabulation defines the space of possible individual variants. Table 2 shows the predicted behavior corresponding to some of the combinations, including all of the minimal ones. For example, any of the combinations including link 1 will occasionally show the "I could take you there, but I can't tell you how!" phenomenon. Similarly, any combination including link 2 should occasionally produce the phenomenon of being able to enumerate the landmarks on a route, but not the actions needed to get from one to the next. This is unlikely to occur in an individual with combination 1+2, since link 2 is always more vulnerable to interruptions than link 1. However, if an individual variant exists with combination 2+8, link 8 is very vulnerable to interruptions, so we would expect an occasional route to be described purely in terms of V->V' links. Some of the combinations, such as 2+6 (i.e. V->V' and V'->A), have such peculiar behavioral consequences that we would not be surprised to find it missing, or at least very rare in the population. On the other hand, we would expect that all of the genuinely viable ways of representing knowledge of routes will exist in the population. A missing variant would suggest that we have overlooked a computational constraint. This approach to the "ecology" of individual variation bears considerable further study.

5. Conclusion

The representation for knowledge of routes consists of sequences of sensorimotor observations, expressed as egocentric experiences distributed over time. Much of human spatial knowledge, however, is concerned with fixed features of the environment such as places and paths distributed over space. This dichotomy is inescapable, because sensory input necessarily consists of egocentric observations, while a description of the environment in terms of external places and paths is much more effective for problem-solving and for communication via maps and verbal descriptions. This paper has discussed a representation for storing procedural descriptions of routes in the long-term memory of the cognitive map. The remainder of a theory of the cognitive map must show how the route descriptions we have defined can be transformed into descriptions of the environment in terms of places, paths, regions, and their topological and metrical relationships.

References

W. G. Chase. 1982. Spatial representations of taxi drivers. In D. R. Rogers and J. A. Sloboda (Eds.), Acquisition of Symbolic Skills. New York: Plenum Press.
Hazen, N. L., Lockman, J. J., & Pick, H. L., Jr. 1978. The development of children's representations of large-scale environments. Child Development, 49, 623-636.

Kuipers, B. J. 1978. Modeling spatial knowledge. Cognitive Science, 2, 129-153.

Kuipers, B. J. 1979a. On representing commonsense knowledge. In N. V. Findler (Ed.), Associative Networks: The Representation and Use of Knowledge by Computers. New York: Academic Press.

Kuipers, B. J. 1979b. Commonsense knowledge of space: Learning from experience. Proceedings of the Sixth International Joint Conference on Artificial Intelligence. Stanford, CA: Stanford Computer Science Department.

Kuipers, B. J. 1982. The 'Map in the Head' Metaphor. Environment and Behavior, 14, 202-220.

Lynch, K. 1960. The Image of the City. Cambridge, MA: MIT Press.

Piaget, J., and Inhelder, B. 1967. The Child's Conception of Space. New York: Norton. (First published in French, 1948.)

Piaget, J., Inhelder, B., & Szeminska, A. 1960. The Child's Conception of Geometry. New York: Basic Books.
Six Problems for Story Understanders

Peter Norvig
Division of Computer Science
Department of EECS
University of California, Berkeley
Berkeley, CA 94720

ABSTRACT

Story understanding programs have been classified as script-based processors, goal-based processors or multi-level processors. Each program introduces a new knowledge structure and invents a mechanism to make inferences and manage memory for that knowledge structure. This can lead to a proliferation of incomplete, incompatible processing mechanisms. The alternative presented here is to concentrate on the processing mechanism. It is suggested that a single inferencing scheme can deal with all knowledge structures in a uniform manner. Six basic problems that such a processor must address are presented and discussed.

Introduction

Much work in story understanding has been devoted to describing new knowledge structures such as procedures [Winograd72], scripts [Schank77], plans and goals [Wilensky78], affects [Dyer82], and plot units [Lehnert82]. One is tempted to say that the field of story understanding has "advanced" from script-based processing to goal-based processing to multi-level processing. Each new knowledge structure brings with it a new program embodying a new processing algorithm. Unfortunately, these new knowledge structures are often introduced before the problems of processing the old ones are worked out. This preoccupation with knowledge structures can sometimes lead to programs with impoverished, redundant, or inconsistent processing mechanisms. For example, Wilensky introduced the "explanation-driven understanding" algorithm for dealing with actions, plans, and goals. He also had a separate algorithm for detecting the "point" of a story. Similarly, Lehnert proposes a system with two processing mechanisms side by side: one for making low-level (sentence-based) inferences, and another for high-level (plot-based) inferences. In both cases, each of the two processing mechanisms makes many of the same types of memory fetches and inferences. It would seem more economical to have one mechanism serve both jobs. Besides being more economical, a unified processing mechanism would be less ad hoc, and would force the system builder to consider more carefully the difficult problems of a complete memory and inference processor.

This work supported by National Science Foundation grant IST-8007045.

A program following this unified processing approach has been written, and is undergoing further development. The program is called FAUSTUS (Frame Activated Unified Story Understanding System). As an example of its capabilities, FAUSTUS can take the input story:

Frank hated his job at the factory. He wanted a job where he wouldn't have to work so hard. He envied his friends who went to college, and didn't have to work. So Frank quit building cars and enrolled at the local University. However, as a student, he was soon working harder than he ever had in his life.

and produce the summary:

Frank enrolled in college. He thought being a student would be easy. Ironically, he ended up working harder than ever.

This text makes use of knowledge at various levels of complexity: objects, people, institutions, affects, actions, expectations, and so on. In FAUSTUS each of these knowledge structures is represented as a frame, and each is handled by the same basic set of frame manipulation processes. FAUSTUS itself does not deal with English text.
Instead, it calls on the PHRAN parser [Wilensky80] and the PHRED generator [Jacobs83] to translate between English and the internal frame representation. FAUSTUS is described in more detail in [Norvig83].

Six Problems

In this paper I will present six basic problems that must be addressed by any story understander. For each problem a few solutions are discussed, including the solution implemented in FAUSTUS. The emphasis is on solutions that have pervasive effects: that cut across several problems, and several levels of frame complexity.

1. Finding Candidate Frames

According to Rumelhart [Rumelhart75], "the process of understanding a passage consists in finding a schema which will account for it." While I believe there's more to understanding than that, the problem of frame-finding is an important one, and will be the first of six discussed here. Consider Charniak's [Charniak82] example:

As Jack walked down the aisle he picked up a can of tuna fish and put it in his basket.

The problem is how to find the supermarket-shopping frame, even though it was never explicitly mentioned. The solution implemented in FAUSTUS relies on spreading activation and hierarchical composition. The basic rule is that all parts of all frames can potentially be used to find a frame (that is, we are not restricted to a small set of "triggers" for each frame), but in practice the following system is used: (1) if the input matches one unique frame, then instantiate that frame. (2) If a few frames are matched, consider each one and try to make a choice among them. (3) If the input matches a large number of frames, spread "activation energy" to each frame, and check to see if the total energy exceeds a predefined threshold necessary for instantiation.

To put it another way, if the input is unambiguous, interpret it that way. If there are a small number of possible meanings, try to decide among them, and if there are a large number, don't even try to make a choice, unless one of the choices has already been strongly indicated. Currently, "a small number" is defined as four or less. This triage system is not restricted to the problem of finding candidate frames, but is also used in choosing the right frame, relating new input to existing frames, and in going from an abstract to a more concrete frame. It is a general answer to the problem of "choosing (or not choosing) from several possibilities." The concept of spreading activation is not used to actually determine what frames to use, or to make inferences, but only to suggest possible frames. Decisions are made by a more discrete process. As a final remark, we note that the hierarchical approach can find the store-shopping frame for the following text, while the spreading activation algorithm would not be able to:

As Jack walked down the aisle he picked an object off the shelf.
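The triage rule above can be summarized in a short sketch (my own rendering, not FAUSTUS code; only the "four or less" cutoff comes from the text, and the activation threshold is a made-up value):

```python
ACTIVATION_THRESHOLD = 10.0   # hypothetical instantiation threshold

def consider(matching_frames, activation):
    """Triage over the frames matched by one input."""
    if len(matching_frames) == 1:
        return ("instantiate", matching_frames)           # unambiguous input
    if len(matching_frames) <= 4:                         # "a small number"
        return ("choose-among", matching_frames)          # weigh candidates
    for frame in matching_frames:                         # many matches:
        activation[frame] = activation.get(frame, 0) + 1  # only spread energy
    hot = [f for f in matching_frames
           if activation[f] >= ACTIVATION_THRESHOLD]      # strongly indicated
    return ("instantiate", hot) if hot else ("no-choice", [])

energy = {}
print(consider(["supermarket-shopping"], energy))  # ('instantiate', [...])
```

Note that, as the text says, spreading activation here only suggests frames; the discrete decision ("choose-among") is a separate process.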
2. Choosing the Right Frame

It's easy to choose among candidate frames when there are contradictions to rule out all but one candidate. The hard part is weighing the merits of several frames, none of which are obviously incorrect. Consider these two sentences (originally presented by Paul Kay):

(a) The florist sold a pair of boots to the Ballerina.
(b) The cobbler sold a pair of boots to the Alpinist.

In (a) we want to choose a default prototypical selling frame. However, in (b) we would be remiss if we didn't choose the commercial-transaction frame; we should make the connection that the boots are from the cobbler's shop, that the selling is done there, and that the Alpinist will probably use them in his avocation. This interpretation is preferred because it is richer than the interpretation for (a); it ties together more information. FAUSTUS tries to find interpretations that account for all of the input, but it does not have any more sophisticated decision rules.

A delayed way to make a choice is through attrition, the process that drops frames out of active consideration over time. Older frames with lower initial activation are the first to go. Thus, if we are faced with a choice of five candidate frames, and are unable to choose between them, eventually the candidate frames will start dropping out. Finally we will be left with only one frame; the one with the highest initial activation. This is choice by default. There is some evidence that such choices are reinforced over time. Consider the text:

Doctor Smith entered the operating room. She was the best open-heart surgeon the Medical Center ever had...

When FAUSTUS reads the word "she," the representation for Doctor Smith is altered to record the fact that she is female. This overrides the default, but causes no problems. However, if the text were part of a novel, and twenty pages had elapsed between the time Doctor Smith was introduced and the time the doctor was referred to as "she," why might the reader then be surprised? I submit it is because the candidate frame female-doctor dropped out by attrition, and the male-doctor frame (with its higher a priori activation) was chosen by default. At this point it cannot be changed without noticing a contradiction. Paul Kay [Kay81] discusses a similar example.
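A sketch of attrition as described above (the decay rate and cutoff are my own assumptions; only the idea that lower-activation candidates drop out first, leaving a choice by default, is from the text):

```python
def attrition_step(candidates, decay=1.0, cutoff=0.0):
    """candidates: frame -> current activation; returns the survivors."""
    return {f: a - decay for f, a in candidates.items() if a - decay > cutoff}

candidates = {"male-doctor": 5.0, "female-doctor": 3.0}
while len(candidates) > 1:
    candidates = attrition_step(candidates)
print(candidates)   # {'male-doctor': 2.0}: choice by default
```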
3. Relating New Input to Existing Contexts

Many inputs should not instantiate new frames, but rather fit in as part of an existing instantiated frame. In [Wilensky78] it was proposed that the understander should first try to interpret the input as elaborating an existing frame, and if that fails to try to find a new frame. In [Wong81] the algorithm is to consider simultaneously any existing frames the input might match and a novel frame created on the spot. Orthogonal to those two algorithms is the choice of where to look first. One possibility is to look in all currently activated frames for unfilled slots that match the current input. This may be slow if there are many active frames, and if we have to go through them linearly. The alternative is to look first in the generic knowledge data base (as we did to find candidate frames in the first place) and then, if we do find a match, check to see if there are any frames of the proper type currently instantiated.

4. Recovering from an Incorrect Choice

One mark of a good story understander is the ability to back up, replacing an incorrect assumption with an alternative when the story merits it. Granger [Granger81] has worked on a story understander (and a question answerer) with this ability. The problem is really three-fold: recognizing that an error has occurred, locating the error, and correcting it. The hard part is locating the error. It would be easy if each frame had a list of concepts that are not part of the frame, but obviously we can not afford to store that kind of knowledge.

FAUSTUS makes the third task easier by keeping candidate frames around (in a separate data base) for a short time after they are discarded as inappropriate. This makes the problem of selecting a new frame identical to the problem of selecting an old one, provided the candidate frames have not yet been lost through attrition. Consider the following text:

As Jack walked down the aisle he picked up a can of tuna fish and put it in his basket. As a janitor at the stadium, he had to pick up a lot of trash after a big game like this one.

The reader decides on the supermarket-shopping frame after the first sentence. In the second sentence the location of stadium conflicts with the location supermarket. FAUSTUS then backs up, compares the supermarket-shopping scenario with the stadium-janitor scenario, and determines that only the latter scenario accounts for all the inputs. Notice that the ability to back up requires meta-knowledge of what knowledge was acquired through input, through inference, or through default. As discussed above in problem 2, the distinction between default and input probably becomes lost over time.
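The recovery mechanism can be sketched as follows (names and the retention span are my own assumptions; only the idea of parking discards in a separate data base, so that re-selection after a contradiction is ordinary candidate selection, comes from the text):

```python
RETENTION = 5                      # hypothetical: inputs a discard survives

active_frames = {"supermarket-shopping"}
recently_discarded = {}            # frame -> time since it was discarded

def discard(frame):
    active_frames.discard(frame)
    recently_discarded[frame] = 0

def tick():
    """Called once per input; old discards are lost through attrition."""
    for frame in list(recently_discarded):
        recently_discarded[frame] += 1
        if recently_discarded[frame] > RETENTION:
            del recently_discarded[frame]

def back_up(bad_frame, accounts_for_all_inputs):
    """On a contradiction, re-select among the recent discards."""
    discard(bad_frame)
    return [f for f in recently_discarded if accounts_for_all_inputs(f)]

discard("stadium-janitor")         # discarded earlier as inappropriate
print(back_up("supermarket-shopping",
              lambda f: f == "stadium-janitor"))   # ['stadium-janitor']
```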
Acknowledgements

Many of the ideas expressed here were developed jointly with Joe Faletti, Marc Luria, Robert Wilensky, and other members of the Berkeley AI Research group. Joe Faletti and I also worked together on a preliminary implementation of the frame manipulation package; this package was shared by FAUSTUS and PANDORA, Faletti's problem solving program.

References

Charniak82. Charniak, Eugene, "Context Recognition in Language Comprehension," pp. 435-454 in Strategies for Natural Language Processing, ed. Martin H. Ringle, Lawrence Erlbaum Associates, Hillsdale, NJ (1982).

Dyer82. Dyer, Michael G., "Affect Processing for Narratives," Proceedings of the National Conference on Artificial Intelligence, pp. 265-268 (1982).

Granger81. Granger, Richard H., "Directing and Re-Directing Inference Pursuit: Extra-Textual Influences on Text Interpretation," Proceedings of the Seventh IJCAI, pp. 354-361 (1981).

Granger82. Granger, Richard H. and Jennifer K. Holbrook, "Perseverers, Recencies and Deferrers: New experimental evidence for multiple inference strategies in understanding," ??, Information and Computer Science, U.C. Irvine, Irvine, CA (1982).

Jacobs83. Jacobs, Paul, "Generation in a Natural Language Interface," Proceedings of the 8th IJCAI (1983).

Kay81. Kay, Paul, Three Properties of the Ideal Reader, Berkeley Cognitive Science Program (1981).

Lehnert82. Lehnert, Wendy G., "Plot Units: A Narrative Summarization Strategy," pp. 375-414 in Strategies for Natural Language Processing, ed. Martin H. Ringle, Lawrence Erlbaum Associates, Hillsdale, NJ (1982).

Norvig83. Norvig, Peter, "Frame Activated Inferences in a Story Understanding Program," Proceedings of the 8th IJCAI (1983).

Rosch78. Rosch, E., "Principles of Categorization," in Cognition and Categorization, ed. B.B. Lloyd, Lawrence Erlbaum Associates (1978).

Rumelhart75. Rumelhart, David, "Notes on a schema for stories," pp. 211-236 in Representation and Understanding, ed. A. Collins, Academic Press, New York (1975).

Schank77. Schank, Roger C. and Robert P. Abelson, Scripts, Plans, Goals and Understanding, Erlbaum, Hillsdale, NJ (1977).

Wilensky80. Wilensky, Robert and Yigal Arens, "A Knowledge-based Approach to Natural Language Processing," Memorandum No. UCB/ERL/M80/34, UC Berkeley ERL, Berkeley (1980).

Wilensky78. Wilensky, Robert W., Understanding Goal-based Stories, Yale University Computer Science Research Report #140, New Haven, CT (1978).

Winograd72. Winograd, Terry, Understanding Natural Language, Academic Press, New York (1972).

Wong81. Wong, Douglas, "Language Comprehension in a Problem Solver," Proceedings of the Seventh IJCAI, pp. 7-12 (1981).
An Analysis of a Welfare Eligibility Determination Interview: A Planning Approach

Eswaran Subrahmanian
School of Urban and Public Affairs
Carnegie-Mellon University

Abstract

The purpose of this research is to identify strategies and plans used by welfare caseworkers in order to build an expert system to determine welfare eligibility. Here we use the framework proposed by Hobbs and Evans (1979) to analyze the conversational plans of a welfare caseworker while conducting an eligibility interview. We study the plans used by the caseworker, show the interaction between goals and themes, and study the influence of constraints imposed on the interview by the statutory Welfare Eligibility Rules. We identify some of the pre-structured plans in this constrained conversational domain in our effort to define the range of choices available in a welfare eligibility interview.

1. Introduction

In this research, we are examining the plans and strategies of welfare caseworkers in order to build an expert system to determine welfare eligibility. Here we analyze the protocols of one such caseworker in an interview with a simulated client, to study the structure of the plans as provided by the rules for determining eligibility as well as those arising from the social/world knowledge of the caseworker. Research in Artificial Intelligence has often focused on planning as a means of facilitating problem solving in a variety of domains (e.g., Newell & Simon (1963), Waldinger (1981), Cohen & Perrault (1979)). In this approach a problem solver is believed to generate a plan as a means of achieving goals implied by a task; planning can be viewed as a task of organizing actions to achieve a goal or a set of goals. Hobbs and Evans (1979) have proposed a scheme for studying the planning approach in conversational settings in terms of typical goals, conversational strategies, constraints on the choices of actions, and the operation of the planning mechanism itself. In our analysis of the protocols of a welfare eligibility interview, we utilize the framework proposed by Hobbs and Evans. This approach, with the theory of coherence presented by Agar and Hobbs (1982), provides us with a method to identify conversational goals and strategies and their interaction within a domain, through the emphasis on the maintenance of coherence at different levels of the conversational plan. In section 2, the terms used in the analysis are defined. Section 3 is devoted to explaining the task environment of the interview. In section 4, we present the analysis of the interview, and we provide excerpts from the protocols to highlight the analysis. The final section presents the conclusions and their implications for further research.

2. Planning Framework for Analysis of Dialogue

The planning mechanism as proposed by Hobbs and Evans (1979) -- from an Artificial Intelligence perspective -- consists of a) a representation language, b) goals, c) actions, d) causal axioms and, e) a planning process. In a domain such as the one addressed in this paper, conversation as a planned activity is somewhat constrained by virtue of the statutory framework of Welfare rules, the influence of which is recognizable in the analysis of the interview. In their model, they further suggest a structure for analysis of conversations using the planning mechanism. They identify areas of research within the unified framework of the Planning Approach. They are:

a) Identification of goals.
Following Halliday (1977) and Grosz (1979), the goals have been classified as: 1) domain goals - goals external to conversation; 2) textual or discourse goals - conversational goals, such as ease of the hearer's understanding, which include coherence goals; and 3) interpersonal or social goals - goals consistent with the role the speaker has chosen to play in the overall society as well as in individual social settings. In this paper only the interaction of coherence goals and domain goals is considered.

b) Description of conversational strategies: Even though conversational strategies may be unique to individuals, a rule-based domain imposes constraints on the set of conversational strategies, thereby making the identification of these strategies less complex than in a free-flowing dialog.

Before we proceed to define the goals in our domain, we shall introduce the types of coherence goals presented as a formal theory of coherence by Agar and Hobbs (1982). Coherence has been divided into three kinds: the relation of an utterance to the overall plan has been termed global coherence; coherence in terms of the relationship between successive utterances is termed local coherence; content-oriented coherence which occurs repeatedly throughout a conversation is termed themal coherence. In our analysis we identify domain-oriented themes that are defined by the organization of the rules and thus operate as constraints in the conversation, and we study the interaction of the themal, local and global coherence goals with domain goals.

3. Task Environment

The task domain of a caseworker in a welfare office is essentially characterized by the set of rules for various welfare programs. In determining eligibility, the caseworker brings to bear hundreds of rules concerning the rights and responsibilities of individuals, the welfare agency, and the specific eligibility criteria that define categorically needy citizens. The rules provide the caseworker with the flexibility of integrating his view of the welfare system, the people who apply for assistance, and the objectives of the agency as specified in the rules for eligibility. To take into account the possible world views of the caseworker and to identify the corresponding interpersonal or social goals in the analysis of the interview is beyond the scope of the paper. Hence we restrict ourselves to defining domain goals in our task environment.

3.1. Domain Goals and Themes

The domain goals in the overall plan for determining eligibility can be divided into five high-level goals: a) collection of data pertaining to eligibility; b) verification of the collected data; c) explanation of rights and responsibilities of the client; d) calculation of benefits; e) filling out the form. However, the goal "calculation of benefits" may be dropped in the case where the client becomes ineligible while satisfying the other goals. It is worthwhile to point out that the goals presented here pertain only to the initial application process that involves the application interview.

The dominant themes that can be identified from the organization of the rules are: a) Client Unit data: data on each legal member of the Client Unit, such as social security number, residency, names, birth dates, etc.; b) Verification: all data must be verified to the point of credibility, and the client shall be required to substantiate the information provided; c) Resources: all resources of the client, past, present and future, should be explored as needed; d) Employment: employment past, present and future, as well as whether the client is exempt from employment due to his condition; e) Rights and responsibilities of the client: concerns the legal rights of the client in terms of disclosure of information, rights to appeal, and responsibilities of the client to report changes in the data furnished to the agency.

In the next section our purpose is to see how each of these themes interacts with the domain goals and with the types of coherence goals.

4. Analysis of Data

4.1 Background Information

The protocols used in the analysis were collected from an interview conducted by the caseworker on a pseudo client. The characteristics of the simulated client household are as follows: a household of five with three children. The male parent was fired from his job as roofer/painter a month ago. The family owns its own home. The male parent has been doing odd jobs and earned $200 since he was fired. He was paid in cash for the odd jobs. He does not receive unemployment compensation or any employment-related compensation. Two of the children go to school and the mother cares for the youngest child. Previously, the husband used to work with a roofer in another town. They have $25 on hand, $200 in the savings account, and $125 in the checking account. There is no disability in the household.

We do not present the entire interview data in this paper due to space limitations. Rather, we present for analysis extracts of conversation that are characteristic of the interview. The segments chosen for detailed analysis represent two different sets of domain goals: in segment 1, the caseworker's goal is to collect data and to devise plans to verify the data; in segment 2, the caseworker's goal is to effect the client's understanding of the rules. They also represent two different conversational strategies: in segment 1, the caseworker questions the client following an overall prestructured plan based on the rules, but the local coherence of the questioning is dependent on the client's responses; in segment 2, the caseworker explains the rules using a prestructured plan and elaborates, at the level of local coherence, only those parts of the conversational plan that failed to effect the client's understanding.

4.2 Analysis: Global Coherence and Domain Goals

The position of different segments in the global interview plan (Fig. 1) illustrates the global coherence of the segments. In this interview the global interview plan is similar to the structure of the application form. However, there may of course be caseworkers who prefer not to use the form for structuring the interview. Given that the caseworker has chosen to use the form, she is not only achieving the goal of filling out the form but also providing herself with a prestructured plan for the domain goals of "collection of data" and "verification". In the case of the "verification" goal, the form only provides cues to what pieces of data have to be verified and to other information relating to verification, such as the date of verification.

[Figure 1. Global Interview Plan: eligibility determination, with data collection/history for the applying unit (Segment 1) and others, and Segment 2. *The query plans in the segments under these headings follow the form.]

4.3 Analysis: Themal Coherence

In a rule-based context such as welfare eligibility, we find that the themal organization of the rules and the form plays a very major role in the maintenance of themal coherence in the interview.
The themes "Personal data," "Employment," "Resources," and "Rights and Responsibilities" are dealt with in a compartmentalized manner. The deviations are due to overlapping of themes, as in the employment-related resources, where both the themes "Employment" and "Resource" occur together within the "Employment" theme. Verification is the only theme that occurs over and over again in different parts of the interview, interspersed with other themes. The theme of "verification" is an implicit theme, as the caseworker, in order to achieve the goal of verification, devises plans to acquire known forms of verification documents available for a given piece of data. In certain other situations the caseworker also devises multiple plans to achieve the verification goal. Here, the goal of verification of all data collected is emphasized to the client by the caseworker devising plans with the help of the client when possible. "Verification" is unique in this domain in that it appears both as a goal and as a theme. In the segments of conversation that are analyzed in the next subsection we will illustrate how verification operates as both a goal and a theme.

4.4 Analysis: Local Coherence in Interview Segments

4.4.1 Analysis: Segment 1

1.1  CW: There are a bunch of questions on employment.
1.2      Are you employed part-time, full-time, self-employed, or do you do
1.3      odd jobs?
1.4  CL: I have odd jobs.
1.5  CW: Odd jobs.
1.6      The reason I go deeply into specifying the type of
1.7      employment is I am concerned because I have had clients
1.8      who, on asking "are they employed?", will say "no".
1.11     I have gotten explicitly to specifying.
1.12     You said you are employed doing odd jobs. How often do you do odd jobs?
1.13 CL: Oh, maybe 10...15 hours a week.
1.14 CW: A week. Is it the same place you do these jobs?
1.15 CL: No, I fix cars and paint...
     ...
1.18 CW: Is it at your home you fix cars or do you work for a natural business?
1.19 CL: No, I fix in my house or I may fix at their house.
1.20 CW: How much work have you had this month?
1.21 CL: This month, about 15 hours.
1.22 CW: How much do you take per hour?
1.23 CL: I usually charge people by the job.
1.24 CW: By the job, OK. How many actual jobs have you done this month?
1.25 CL: Ahm, three or four.
1.26 CW: Whom did you do the jobs for?
1.27 CL: Christopher Jones.
1.28 CW: His address?
1.29 CL: Somewhere in Rosewood Blvd.
1.30 CW: Do you have a phone number for him?
1.31 CL: I guess. I don't have a phone number for him. He lives about 5420 Rosewood Blvd.
1.32 CW: Who else did you do jobs for?
     ...
1.47 CL: None else.
1.48 CW: That's it. What kind of work did you do for Mr. Jones?
1.49 CL: I fixed his car. He had a muffler taken off.
1.50 CW: How much did he pay you?
1.51 CL: About 40 dollars.
     ...
1.59 CL: 100 dollars.

(Note: CW = Caseworker, CL = Client)

Figure 2. Transcript of Protocol: Segment 1.

The segment of conversation (Fig. 2) is typical of the introduction of a new theme in the interview. First the theme is declared (1.1) and a query representing the general classification of the questions to follow is generated (1.2). The response (1.3) to the general query directs further conversation. In this case the caseworker in (1.4-1.11) declares that she is using a prestructured plan for her query. She uses the prestructured query because of inferences drawn from her previous experiences about the cognitive world of the client. In (1.12) she maintains local coherence by generating queries that are related to the response of the client in (1.3).
She explores the frequency and the place of work in (1.12-1.18). She then explores the amount of work (1.20), the wage rate for the work (1.22) and the number of jobs (1.24). She clearly is expanding on the topic of odd jobs by querying the different attributes that characterize an odd job. In the fragment (1.24), by querying an attribute of the job, she moves towards the plan for verifying the data by collecting information on the employers in each of the odd jobs (1.26-1.47). In order to collect the data on the amount and type of work done, she sets up a conversational structure (1.48-1.50) and repeats it for each of the employers. We represent the local coherence of questioning in Figure 3. This strategy is used in several segments of the interview. She seems to use a prestructured plan to initiate the query under a particular theme. The prestructured plan arises in some segments from the caseworker's background knowledge of the cognitive world of the client, as in (1.1-1.2), and in other segments plans are constructed to obtain requested information in such a way that the structure can repeatedly be used to exhaust the collection of similar data, as in (1.43-1.58).

[Figure 3. Structure of Segment 1. Query plan 1 and query plan 2 are plan structures constructed so that they can be used repeatedly to collect data regarding items under subtopic 3.]

4.4.2 Analysis: Segment 2

2.1-2.32 CW: Everybody who is an applicant who goes through this interview must have this form explained to them, and they must sign this form saying that they understand what their rights and responsibilities are. ... the right to appeal anything. ... It is your responsibility to provide us with correct information and report all changes within a week. It is extremely important. Any question at this point? I don't care if you worked half an hour, you have to declare, whether it is under the table, odd jobs... If any of these is not reported within a week it is going to result in an overpayment in your assistance grants, which means you are getting more money than you are eligible for. The government is going to want the money paid back and it will prosecute if there is an overpayment. Prosecution means fining and imprisonment. There is no excuse for not reporting that comes out.
2.33 CL: You mean any kind of money?
2.34 CW: Yes, as far as I am concerned there is an overpayment.
2.35 CW: I don't care whether you sold dope on the streets; that is one of the types of income you have.

Figure 4. Transcript of Protocol: Segment 2.
The client raises the issue of the means by which the money is acquired (2.33). The caseworker explains with an example (2.34 - 2.35). This segment is a prestructured segment of the interview whose place in the overall interview structure is determined by the nature of the theme. However, the actual structure of this segment may vary depending on the perception of the client by the caseworker. 1st topic 2nd topic Riquest then / for query consequence I then then I outcome explanation by example Fioure L Structure of seament-2, Conclusions The paper analyzes an interview conducted in a domain where the goals of the interview are clearly specified. The application form is prestructured to capture the essential data from the interview and also serves as a summary of the interview v@en completed. The reason for using the form or not using the form during the interview depends on the role the caseworker views himself/herself to be playing in social settings such as the welfare office and hence cannot be accounted for. In this paper, we have observed that, in order to maintain global coherence, the caseworker has adopted a global interview plan provided by the structure of the form. Of particular interest in the analysis is the richness in the planning structure of locally coherent queries in the interview. For example the caseworkers while maintaining local coherence constructs plans that can be used repetitively to collect data that are attributes of a particular object such as the client’s employer. We are currently analyzing protocols of another caseworker for the client case presented in this paper. We are evaluating the protocols for other client cases as well to identify plan structures across cases. However specific these observations may be to this domain, we believe that identification of planning structures at different levels of coherence could contribute towards identifying different prestructured conversational plans in similar rule based contexts. The identification of planning structures within such domains can be used in building expert systems based on the planning approach. 1. Agar, M. & Hobbs, J. ” Interpreting Discourse: Coherence and Analysis of Ethnographic Interviews,” Discourse Analvsis, 1982(l), 1-32. 2. Hobbs, J. & Evans, D. “Conversation as Planned Behavior,” Coqnitive Science, 1980 4 (3) 349-377. 3. Grosz, B. “Utterance and Objective: Issues in Natural Language Communication,” In Proc. JCAI-79. I Tokyo, Japan, August, 1979, pp. 1067-1076. 4. Halliday, M. “Structure and Function in Language.” In T. Givon(ed), Syntax and Semantics, Vol 12, Academic Press, New York,1977. [References incomplete due to lack of space] 401
A Model of Learning by Incremental Analogical Reasoning and Debugging

Mark H. Burstein
Department of Computer Science
Yale University
New Haven, Connecticut 06520

Abstract

This paper presents a model of analogical reasoning for learning. The model is based on two main ideas. First, that reasoning from an analogy presented by a teacher while explaining an unfamiliar concept is often determined by the causal abstractions known by the student to apply in the familiar domain referred to. Secondly, that such analogies, once introduced, are extended incrementally, in attempts to account for new situations by recalling additional situations from the same base domain. Protocols suggest that this latter process is quite useful but extremely error-prone. CARL, a computer program that learns semantic representations for assignment statements of the BASIC programming language, is described as an illustration of this kind of analogical reasoning. The model maps and debugs inferences drawn from several commonly used analogies to assignment, in response to presented examples.

The work reported here was supported in part by the Advanced Research Projects Agency of the Department of Defense, and monitored by the Office of Naval Research under contract No. N00014-75-C-111.

It has often been said among AI researchers that learning one new concept requires knowing an enormous amount beforehand. In learning by analogy this is particularly true. This paper outlines an approach to learning by analogy, showing one way that the organization of that knowledge can also be important. In developing the model presented here, I concentrated particularly on how analogies presented by a teacher or text might be used by a student in forming an initial model of events in an unfamiliar domain. The model was motivated in part by observations derived from recorded protocols of the behavior of several students I tutored in introductory computer programming in the BASIC language.

A number of models of analogical reasoning in AI have been developed around forms of partial pattern matching, under which objects in one domain are first associated with those in another, and ordered relations or frames with case slots of some kind are then placed in correspondence [Evans 68, Brown 77, Winston 80, Winston 82]. These algorithms were based on the assumption that a best partial match could be found by first accumulating evidence for all possible object-to-object mappings between two situations and then choosing the one that placed the largest number of attributes and relations in correspondence.

This approach has several major drawbacks in a model of analogical learning. First, it presupposes that well-defined, bounded representations exist for the situations in both the base or familiar domain and in the target domain. The partial matching model thus requires some prior representation of the objects and relations in the target domain when, in fact, there may not be any such representation when the analogy is first presented. Equivalently, the student's knowledge of the domain may be wrong or inconsistent with the analogy, making matching difficult or useless. The point of presenting an analogy to a student is to aid him in constructing a representation of a target situation, or to correct problems in a prior representation. Since a poor match does not indicate how such a representation might be constructed, it cannot form the basis of a general theory of how students learn by analogy.

A related problem with theories of analogical reasoning based principally on description matching is that rich situation descriptions often contain many objects and relations not taking part in analogies to those situations. As the amount of detail in the representation of a situation increases, the combinatorial complexity of a bottom-up analogical matching process also dramatically increases. Larger and larger numbers of extraneous object-object mappings must be tried and discarded. Winston [Winston 80, Winston 82] suggested that attention to important relations, such as those connected by causal links, can reduce the computational complexity of the matching process to some degree. Yet, even in strictly causal models, sub-systems can quite often be expanded to greater levels of detail [deKleer and Brown 81, Collins and Gentner 82], thereby introducing objects not playing direct roles in analogies based on those systems.

What is required to address these objections is an approach based on analogical mapping that includes a set of strong heuristics for delimiting what is to be "imported" from the base to the target domain. One important way the analogical mapping process can be usefully constrained is by focusing attention on the abstractions embodying the inferences made when first interpreting a presented base domain situation. The hierarchical structure of these representations can help in determining a useful mapping without prior specification of all of the objects potentially involved in complete descriptions of two related situations. It has been argued [Schank and Abelson 77, Schank 82] that abstract causal, plan and goal structures play an important role in the interpretation of natural language texts. If this is the case, then they must also be available when interpreting texts containing analogies. Winston's system used causal links to guide its matching process, but this generally occurred only if those links were present in the input representations. At no time did the system look at the rules supporting represented causal links to find features or relations that might have useful correspondences. I suggest that the causal abstractions underlying represented base domain situations provide important constraints on the analogical mapping process, particularly when little or nothing is known of the target domain.
A related problem with theories of analogical reasoning based principly on description matching is that rich situation descriptions often contain many object,s and re!at.ions not taking part in analogies to those situations. As the amount of detail in the representation of a situation increases, the combinatorial complexity of a bottom-up analogical matching process also dramatically increases. Larger and larger numbers of extraneous object-object mappings must be tried and discarded. Winston [Wmston 80, Winston 821 suggested that att,ention to important relations, such as those connected by causal links, can reduce the computational complexity of the matching process to some degree. Yet, even in strictly causal models, sub-systems can quite often be expanded to greater !evc!s of detail [deK!eer and Brown 81, Collins and Gentner 821, thereby introducing objects which not playing direct roles in analogies based on those systems. What. is required to address these objections is an approach based on analogical mappkng that includes a set of strong heuristics for delimiting what is to be “imported” from the base to the target domain. One important way the analogical mapping process can be usefully constrained is by focusing attention on the abstractions embodying the inferences made when first interpreting a presented base domain situation. The hierarchical structure of these representations can help in determining a useful mapping without prior specification of all of the objects potentially involved in complete descriptions of two related situations. It has been argued [Schank and Abelson 77, Schank 821 that abstract causal, plan and goal structures play an important role in the interpretation of natural language texts. If this is the case, then they must also be available when int,erprcting texts containing analogies. Winston’s system used causal links to guide its matching process, but this generally occurred only if those links were were present in the input representations. At no time did the system look at the rules supporting represented causal links to find features or relations that. might, have useful correspondences. I suggest that the causal abstractions underlying represented base domain situations provide important constraints on the analogical mapping process, particularly when little or nothing is known of the tarcot, domain. OG 45 From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. Many aspects of this approach to analogical learning are illustrated the behavior of a computer program, CARL, that attempts to learn about various kinds of assignment statements by building conceptual models drawn from several different analogies. CARL makes a number of the errors I found my students made as they tried to use the same analogies [I3urstein 811. The following is the English equivalent of one portion of a dialogue that CARL engages in (actual inputs are in Conceptual Dependency [Schank and Abelson 771) : :> The computer remembers a number by putting it in a variable. >> A variable is like a box. >> To put the number 5 in the variable named x type 'X=5'. >> liow do you get the computer to remember the number 9? CARL: Type X = 9 >> Suppose I then type 'X=6'. >> P:ow what's in the variable X? CARL: 6 is in X and 9 is in X >> No, the 9 replaces the 6 in X. CARL: OK >> Suppose I type 'Y=5' and then 'X=Y'. >> Now what's in the variable X? CARL: Y is in X >> No, it means put the number in Y into X. CARL: OK >> Now what's in the variable Y? CARL: Nothing. 
>> Guess again.
CARL: 5 is in Y (by analogy to algebraic equality)
>> Correct.
CARL: OK

In this example, CARL is first introduced to assignment primarily in terms of an often used analogy between variables and boxes. The initial statement of the analogy includes both a statement of the principal object class correspondence (i.e. box::variable), and an example of an assignment, described in terms of a parallel "box domain" situation. These statements taken together provide CARL with enough information to form an initial description of the relation between variables and numbers formed by assignment, principally that this new relation is "like" physical containment in some specific ways. CARL concludes that the relation between the variable X and the number 5 results from the assignment action requested by typing "X=5", in the way "the object is in the box" results from the action "put an object in a box". Most attributes of boxes, physical objects, and many possible relations between the two are completely ignored in this process. Other consequences of this analogy, including many erroneous ones, are discovered during further exercises in the new domain. As new examples are presented, other, related situations are retrieved from CARL's memory of the "box" domain, and the mapping process is repeated, using the object and predicate correspondences formed initially. Errors are discovered and corrected either by explicit statements of the tutor, or, when possible, by the use of internally generated alternative hypotheses. CARL uses several other analogies in this process, including the similarity between assignment and equality, and the common belief that computer actions mimic human actions. As in the above example, secondary analogies are used to discover alternate hypotheses about the effects of statements when errors are detected. When a class of assignment statements is successfully modeled by a structure formed in the mapping process, parsing and generation rules are also associated with the new structure. In this way, CARL learns to manipulate most common forms of assignment statements.

2 An Initial Structure Mapping Theory

The mapping algorithm developed for CARL was influenced by a model of analogical reasoning called structure mapping, developed for a series of psychological studies by Gentner [Gentner 82, Gentner and Gentner 82]. This model was used to describe the effects of human reasoning when given scientific or "explanatory" analogies, such as those below [Gentner 82]:

The hydrogen atom is like the solar system.
Electricity flows through a wire like water through a pipe.

The mapping algorithm Gentner proposed for reasoning from analogies of this type circumvented some of the problems with match-driven models in that it did not require a prior representation of the target domain. However, it was underspecified as a cognitive process model. By her model, first-order relations, or predicates involving several objects or concepts, are mapped identically from one domain to the other under a prespecified object-object correspondence. After identical first-order relations have been used to relate corresponding objects in a target situation, second-order predicates, such as causal links between first-order relations, were also mapped. While this does suggest a way to map new structures into an unfamiliar domain, it does not give a good account of how corresponding objects are first identified, nor does it constrain which relations are mapped.
It also does not allow for mappings between non-identical relations, which I will argue is often necessary.

The need to constrain the set of relations mapped can be seen from Gentner's representation of the solar system model, and the mapping that her system suggests to a model for the atom. In that representation, the sun is related to a planet by the predicate HOTTER-THAN, as well as the predicates ATTRACTS, REVOLVES-AROUND and MORE-MASSIVE-THAN, three relations which are causally linked in the description of the orbiting system. Gentner claimed that the HOTTER-THAN relation was not mapped to the atomic model, in accord with most people's intuitions. However, her formal mapping algorithm could not predict this. Many other attribute comparisons, such as BRIGHTER-THAN, could also have been present in a description of the solar system. Presumably, these relations would not be mapped either. The explanation given for this phenomenon was in terms of a general condition on the mapping process she termed systematicity. This condition is essentially that "predicates are more likely to be imported into the target if they belong to a system of coherent, mutually constraining relationships, the others of which are mapped." [Gentner and Gentner 82]

In CARL, this condition appears as the top-down heuristic delimiting the relations considered for mapping. When a causally connected structure is in memory to describe a base domain situation, only relations taking part in that structure are considered for mapping. Within that set of relations, simple attribute comparisons like HOTTER-THAN and LARGER-THAN are not mapped if no corresponding attributes can be found in the target domain. The result of mapping a causally-connected structure under these conditions is a new, parallel causal structure characterizing the target example. Objects in the target example are made to fill roles in that structure using stated object correspondences between the domains when available, and otherwise by their appearance in roles of actions and other relations with objects that do have known correspondences. Thus, in the analogy between variables and boxes, an indirect correspondence is formed between the situational role of a physical object that is INSIDE a box, and a number assigned to a variable. This causally directed mapping process thus forms new structures in domains where none existed before, while allowing relations to be mapped with some consideration of what is known of the objects and relations in the target domain. The process is also top-down in that classes of objects are only placed in correspondence across domains explicitly by statement of the teacher, or, indirectly, during mapping by the appearance of an object in a role of the target structure where a different object was described in the same role in the base domain structure.

3 Mappings between non-identical relations

Another problem with both Winston's and Gentner's models of analogical reasoning can be found in the claim that all relations are mapped "identically" from one situation description to another. Winston's matcher embodied this claim since it only found correspondences between identical predicates in two representations. This condition may apply in analogies where the domains overlap or are closely related, as in the standard geometric analogies dealt with by Evans, and many scientific analogies. However, it is much too strong a claim in general. When analogies are formed between physically realizable situations and purely abstract ones, like those of mathematics or computer programming, it is impossible to maintain the "identical predicate" mapping position.

Probably the most important thing implied by the analogy between boxes and variables is that variables can "contain" things. That is, the relationship that exists between a box and an object inside the box is, in some ways, similar to the relationship between a variable and the number associated with that variable. The important inference that must be preserved in the mapping is that since one can put things in boxes, then there must be a way to "put" numbers "in" variables as well.

The problem from the standpoint of Gentner's model is that the relationship which gets mapped from the "box world" to the "computer world" is precisely that of physical containment. That is, the interpretation that results from copying this relation into the programming domain is that a number is physically INSIDE a variable. Unfortunately, not all of the inferences which the relation INSIDE takes part in apply to numbers "in" variables. For example, there is no primitive action in BASIC that corresponds precisely to the action "take out of". CARL determines that the target relation may have some differences from the physical relation INSIDE by noting that numbers violate a typical constraint on the object slot of the INSIDE relation. Since a number is not a physical object, it is not in the class of objects normally "contained" in other objects. Thus, the relation suggested between variables and numbers cannot be the standard notion of containment, but instead must be some (as yet undetermined) analogical extension of that relationship.

When an attempt to map a relation directly results in such a constraint violation, a virtual relation is formed in the target domain that is a "sibling" of the corresponding base domain relation, or an ancestor at some higher level in the is-a hierarchy of known relational predicates. The constraints placed on the roles in new virtual relations are determined primarily from the classes of the objects related in the target domain example. Thus, from the analogy to boxes, a new predicate INSIDE-VARIABLE is formed to relate variables and their "contents", initially constrained to be numbers. Inferences are associated with this new relation as they are successfully mapped from the base domain, learned independently in the new domain, or inherited from other analogies.
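The virtual-relation mechanism just described can be sketched as follows. The class hierarchy, the slot constraints, and the helper names are hypothetical stand-ins for CARL's internal structures, shown only to make the constraint-violation test concrete; the "-VARIABLE" suffix is hard-coded for this one example.

```python
# A sketch of forming a "virtual" sibling relation when a directly mapped
# relation violates its slot constraints: the paper's INSIDE vs.
# INSIDE-VARIABLE case, under an assumed is-a hierarchy.

ISA = {"number": "abstract-object", "block": "physical-object"}

RELATION_CONSTRAINTS = {
    "INSIDE": {"object": "physical-object", "container": "physical-object"},
}

def classes_of(x):
    """Walk up the is-a hierarchy from a class name."""
    while x is not None:
        yield x
        x = ISA.get(x)

def map_relation(pred, fillers, virtual_relations):
    """Reuse the base-domain predicate if every filler satisfies its slot
    constraint; otherwise coin a sibling relation constrained instead by
    the classes of the target-domain fillers."""
    constraints = RELATION_CONSTRAINTS[pred]
    if all(constraints[slot] in classes_of(c) for slot, c in fillers.items()):
        return pred
    new_pred = pred + "-VARIABLE"                 # suffix fixed for this example
    virtual_relations[new_pred] = dict(fillers)   # roles constrained to these classes
    return new_pred

virtual = {}
print(map_relation("INSIDE", {"object": "number", "container": "variable"}, virtual))
print(virtual)  # {'INSIDE-VARIABLE': {'object': 'number', 'container': 'variable'}}
```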
4 Overview of CARL's analogical reasoning process

CARL develops simple causal or inferential structures in the target domain by retrieving structures in memory for the familiar domain, and adapting them to the new domain, using a top-down mapping process that preserves the causal/temporal links explicitly specified in those structures. In the retrieval process, base domain objects are substituted for target domain objects when those correspondences can be determined from the presented description of the analogy. The mapped predicates are subject to transformation within an abstraction hierarchy, as described above. Subsequent use of the same analogy may either be for the purpose of mapping a related structure - a related type of action situation, or a more context-specific version of the originally mapped structure - using the mapping developed initially, or an extension of that mapping to include new predicates.

The structure first mapped when CARL is given the box analogy is a simple conceptual description of the action of putting an unspecified object in a box, together with the standard result of that action, that the object is then INSIDE that box. The result relation INSIDE is transformed in the mapping process to a new relation which I will refer to here as INSIDE-VARIABLE. The action PTRANS, representing a change of physical location, is replaced with the more general predicate TRANS, indicating any state change, and causally connected to the result, that the OBJECT of the TRANS is INSIDE-VARIABLE after the action is completed. The standard precondition on putting an object in a box, that the object fit in the box, is ignored since neither variables nor numbers are known to have a physical SIZE.

5 Incremental analogical reasoning for learning

Even when analogies are based on simple actions, the specific inferences to be made may vary considerably depending on the context in which the action occurs. For example, throwing a rock at a brick wall and throwing one at a glass window are known to have very different consequences. Though an analogy to a thrown rock might imply indirectly that the specific inferences valid in such alternate contexts will apply in some target domain situation, in practice each such situation must be explored to determine the true extent to which the analogy is valid. Extending analogies in this fashion is an error-prone process as often as it is useful. CARL attempts to extend an analogy to such alternate-context inferences only as they are required to interpret new situations in the target domain, making a number of errors in the process.

In the protocols I examined, these errors appeared only when the context in the target domain made them potentially useful inferences. For example, when a statement like "X=Y" was first introduced, it was necessary to explain that this meant X was given the value that Y had previously. One student then inferred that Y must contain "nothing", since that was what happened with boxes. Finding and correcting these special-case errors in the inference rules mapped is treated here as an incremental process that requires observation and consideration of a number of examples in the new domain.

This behavior is modeled in CARL by retrieval and mapping of related causal structures from a base domain (the box domain in this case). One interpretation found for "X=Y" in terms of the box analogy is that it is like moving an object from one box to another. Information saved about the mapping of the prototype "put an object in a box" is used both in finding the structure representing this more specific action and subsequently to map it back to the programming domain.

CARL's memory organization for knowledge of simple action-based domains involving familiar objects is in part an extension of an object-based indexing system developed by Lehnert [Lehnert 78, Lehnert and Burstein 79] for natural language processing tasks. So that CARL can retrieve a variety of special case situations, the retrieval process was augmented using a discrimination system as in the specification hierarchy model of episodic memory used by Lebowitz and Kolodner [Lebowitz 80, Kolodner 80]. Precondition/result-based indexing was also added so that actions and simple plans could be retrieved in response to requests to achieve specific goals. Any of these forms of indexing may be used in finding a suitable structure to map. For familiar domains, the system assumes a large set of fairly specific generalized situations exists, detailing the causal inferences expected in each case. As an example, CARL contains the following structures describing the effects of some simple actions involving containers.

[Figure: part of the specialization network for things "INSIDE". Situations using BOX as a CONTAINER: PUT-IN-BOX, OBJ-IN-BOX, TAKE-FROM-BOX, with specializations TRANSFER-OBJ-BETWEEN-BOXES, PUT-BOX-IN-BOX, PUT-MORE-IN-BOX, SWAP-OBJ-IN-BOXES.]

Once an initial mapping is formed from the causal structure PUT-IN-BOX, specializations of that structure are available when new examples are presented to CARL. Also, once the containment relation is formed for variables, expectations are established for the other "primitive" situations involving containers. Thus, from the fact that variables can "contain" numbers, it is expected that they can be "put in" and "removed". The result of mapping these additional structures is the formation of a corresponding set of structures for assignment statements. Many of these new structures contain erroneous inferences, which are "debugged" locally, by observation of the actual results entailed, or simply thrown out. Information for recognizing each structure, in this case parsing and generation information for program statements, is attached to each new structure during the analysis of the examples that caused them to be mapped. Corrections propagate downward from the initial prototype formed, to the variants mapped subsequently. Thus, for example, the fact that 'X=5' removes old values of X also automatically applies to 'X=Y'.

In developing CARL, I have been concerned with a number of related issues in the learning of basic concepts in a new domain by a combination of incremental analogical reasoning from multiple models. I have tried here to motivate the need for top-down use of known abstractions in this process. This was found to be necessary both to limit the analogical reasoning required to form some initial concepts in the new domain, and to allow for incremental debugging of the many errors that can result from the use of analogies. The process described here is heavily teacher-directed, but allows for fairly rapid development of a working understanding of basic concepts in a new domain.

Acknowledgements: I would like to thank Dr. Chris Riesbeck and Larry Birnbaum for many helpful comments on drafts of this paper.

References

1. Brown, Richard. Use of Analogy to Achieve New Expertise. Tech. Rept. 403, M.I.T. A.I. Memo, May, 1977.
2. Burstein, Mark H. Concept Formation through the Interaction of Multiple Models. Proceedings of the Third Annual Conference of the Cognitive Science Society, Cognitive Science Society, August, 1981, pp. 271-274.
3. Collins, Allan and Gentner, Dedre. Constructing Runnable Mental Models. Proceedings of the Fourth Annual Conference of the Cognitive Science Society, Cognitive Science Society, August, 1982, pp. 86-89.
4. de Kleer, J. and Brown, J. S. Mental Models of Physical Mechanisms and their Acquisition. In Cognitive Skills and Their Acquisition, Anderson, John R., Ed., Lawrence Erlbaum and Assoc., Hillsdale, NJ, 1981, pp. 285-309.
5. Evans, Thomas G. A Program for the Solution of Geometric Analogy Intelligence Test Questions. In Semantic Information Processing, Marvin L. Minsky, Ed., M.I.T. Press, Cambridge, Massachusetts, 1968.
6. Gentner, Dedre.
Structure Mapping: A Theoretical Framework for Analogy and Similarity. Proceedings of the Fourth Annual Conference of the Cognitive Science Society, Cognitive Science Society, August, 1982, pp. 13-15.
7. Gentner, D. and Gentner, D. R. Flowing waters or teeming crowds: Mental models of electricity. In Mental Models, Gentner, D. and Stevens, A. L., Eds., Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1982.
8. Kolodner, Janet L. Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model. Tech. Rept. 187, Yale University, Department of Computer Science, 1980. Ph.D. Dissertation.
9. Lebowitz, M. Generalization and Memory in an Integrated Understanding System. Ph.D. Th., Yale University, October 1980.
10. Lehnert, W. G. Representing Physical Objects in Memory. Tech. Rept. 131, Yale University, Department of Computer Science, 1978.
11. Lehnert, W. G. and Burstein, M. H. The Role of Object Primitives in Natural Language Processing. Proceedings of the Sixth International Joint Conference on Artificial Intelligence, IJCAI, August, 1979, pp. 522-524.
12. Schank, R. C. Dynamic Memory: A Theory of Learning in Computers and People. Cambridge University Press, 1982.
13. Schank, R. C. and Abelson, R. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1977.
14. Winston, P. Learning and Reasoning by Analogy: The Details. Tech. Rept. 520, M.I.T. A.I. Memo, May, 1980.
15. Winston, P. "Learning new principles from precedents and exercises." Artificial Intelligence 19 (1982), 321-350.
THE OPTIMALITY OF A* REVISITED

Rina Dechter and Judea Pearl
Cognitive Systems Laboratory, Computer Science Department, University of California, Los Angeles, CA 90024

* Supported in part by NSF Grant No. MCS 81-142209.

ABSTRACT

This paper examines the optimality of A*, in the sense of expanding the least number of distinct nodes, over three classes of algorithms which return solutions of comparable costs to that found by A*. We first show that A* is optimal over those algorithms guaranteed to find a solution at least as good as A*'s for every heuristic assignment h. Second, we consider a wider class of algorithms which, like A*, are guaranteed to find an optimal solution (i.e., admissible) if all cost estimates are optimistic (i.e., h ≤ h*). On this class we show that A* is not optimal and that no optimal algorithm exists unless h is also consistent, in which case A* is optimal. Finally, we show that A* is optimal over the subclass of best-first algorithms which are admissible whenever h ≤ h*.

1. INTRODUCTION AND PRELIMINARIES

1.1 A* and Informed Best-First Strategies

Of all search strategies used in problem solving, one of the most popular methods of exploiting heuristic information to cut down search time is the informed best-first strategy. The general philosophy of this strategy is to use the heuristic information to assess the "merit" latent in every candidate search avenue, then continue the exploration along the direction of highest merit. Formal descriptions of this strategy are usually given in the context of path searching problems, a formulation which represents many combinatorial problems such as routing, scheduling, speech recognition, scene analysis, and others. Given a weighted directional graph G with a distinguished start node s and a set of goal nodes Γ, the optimal path problem is to find a lowest cost path from s to Γ, where the cost of the path may, in general, be an arbitrary function of the weights assigned to the nodes and branches along that path.

By far the most studied version of informed best-first strategies is the algorithm A* (Hart, Nilsson and Raphael, 1968), which was developed for additive cost measures, i.e., where the cost of a path is defined as the sum of the costs of its arcs. To match this cost measure, A* employs a special additive form of the evaluation function f, made up from the sum f(n) = g(n) + h(n), where g(n) is the cost of the currently evaluated path from s to n and h is a heuristic estimate of the cost of the path remaining between n and some goal node. A* constructs a tree T of selected paths of G using the elementary operation of node expansion, i.e., generating all successors of a given node. Starting with s, A* selects for expansion that leaf node of T which has the lowest f value, and only maintains the lowest-g path to any given node. The search halts as soon as a node selected for expansion is found to satisfy the goal conditions. It is known that if h(n) is a lower bound to the cost of any continuation path from n to Γ, then A* is admissible, that is, it is guaranteed to find the optimal path.

The optimality of A*, in the sense of expanding the least number of distinct nodes, has been a subject of some confusion. The well-known property of A* which predicts that decreasing the errors h* − h can only improve its performance (Nilsson, 1980, result 6) has often been interpreted to reflect some supremacy of A* over other search algorithms of equal information.
Consequently, several authors have assumed that A*'s optimality is an established fact (e.g., Nilsson, 1971; Martelli, 1977; Mérő, 1981; Barr and Feigenbaum, 1982). In fact, all this property says is that some A* algorithms are better than other A* algorithms depending on the heuristics which guide them. It does not indicate whether the additive rule f = g + h is better than other ways of combining g and h (e.g., f = g + h²/(g+h)); neither does it assure us that expansion policies based only on g and h can do as well as more sophisticated best-first policies using the entire information gathered along each path (e.g., f(n) = max{f(n') | n' is on the path to n}). These two conjectures will be examined in this paper, and will be given a qualified confirmation.

Gelperin (1977) has correctly pointed out that in any discussion of the optimality of A* one should compare it to a wider class of equally informed algorithms, not merely those guided by f = g + h, and that the comparison class should include, for example, algorithms which adjust their h in accordance with the information gathered during the search. His analysis, unfortunately, falls short of considering the entirety of this extended class, having to follow an over-restrictive definition of equally informed. Gelperin's interpretation of the statement "an algorithm B is never more informed than A" not only restricts B from using information unavailable to A, but also forbids B from processing common information in a better way than A does. For example, if B is a best-first algorithm guided by an evaluation function f_B(·), then to qualify for Gelperin's definition of being "never more informed than A," B is restricted from ever assigning to a node n an f_B(n) value higher than A would, even if the information gathered along the path to n justifies such an assignment. We now remove such restrictions.

1.3 Summary of Results

In our analysis we use the natural definition of "equally informed," allowing the algorithms compared to have access to the same heuristic information while placing no restriction on the way they use it. We will consider a class of heuristic algorithms, searching for a lowest (additive) cost path in a graph G, in which an arbitrary heuristic function h(n) is assigned to the nodes of G and is made available to each algorithm in the class upon generating node n. From this general class we will discuss three special subclasses. We first compare the complexity of A* with those algorithms which are at least as good as A*, in the sense that they return solutions at least as cheap as A*'s in every problem instance; we denote that class of algorithms by A_g. We then restrict the domain of instances to that on which A* is admissible, that is, h ≤ h*, and consider the wider class of algorithms which are as good as A* only on this restricted domain. Here we shall show that A* is not optimal and that no optimal algorithm exists unless h is also restricted to be consistent. Finally, we will consider the subclass of best-first algorithms that are admissible when h ≤ h*, and show that A* is optimal over that class.
1.4 Notation and Definitions

G - a directed locally finite graph, G = (V,E).
Gₑ - the subgraph of G exposed during the search.
s - the start node.
Γ - a set of goal nodes, Γ ⊆ V.
P - a solution path, i.e., a path in G from s to some goal node γ ∈ Γ.
C(P) - the cost of path P.
P_{n1-n2} - a path in G between nodes n1 and n2.
g*(n) - the cost of the cheapest path going from s to n.
g(n) - the cost of the cheapest path found so far from s to n.
h*(n) - the cost of the cheapest path going from n to Γ.
h(n) - an estimate of h*(n), assigned to each node in G.
C* - the cost of the cheapest path from s to Γ.
c(n,n') - the cost of the arc between n and n', c(n,n') ≥ δ > 0.
k(n,n') - the cost of the cheapest path between n and n'.
(G,s,Γ,h) - a quadruple defining a problem instance.

Let the domain of instances on which A* is admissible be denoted by I_AD, i.e.:

    I_AD = {(G,s,Γ,h) | h ≤ h* on G}.

Obviously, any algorithm in A_g is also admissible over I_AD.

A path in G is said to be strictly d-bounded relative to f if every node n' along that path satisfies f(n') < d. It is known that if h ≤ h*, then A* expands every node reachable by a strictly C*-bounded path, regardless of the tie-breaking rule used. The set of nodes with this property will be referred to as surely expanded by A*. (Nodes outside this set may or may not be expanded, depending on the tie-breaking rule used.) In general, for an arbitrary constant d and an arbitrary evaluation function f over (G,s,Γ,h), we denote by N_f^d the set of all nodes reachable by some strictly d-bounded path in G. For example, N_{g+h}^{C*} is the set of nodes surely expanded by A* over I_AD.

The notion of optimality that we examine in this paper is robust to the choice of tie-breaking rules and is given by the following definition:

Definition: An algorithm A is said to be optimal over a class 𝒜 of algorithms relative to a set I of problem instances if, in each instance of I, every algorithm in 𝒜 expands all the nodes surely expanded by A in that problem instance.

2. RESULTS

2.1 Optimality Over Algorithms As Good As A*

Theorem 1: Any algorithm which is at least as good as A* will expand, if provided the heuristic information h ≤ h*, all nodes that are surely expanded by A*; i.e., A* is optimal over A_g relative to I_AD.

Proof: Let I = (G,s,Γ,h) be some problem instance in I_AD and assume that n is surely expanded by A*, i.e., n ∈ N_{g+h}^{C*}. Therefore, there exists a path P_{s-n} such that f(n') = g(n') + h(n') < C* for every n' ∈ P_{s-n}. Let D = max{f(n') | n' ∈ P_{s-n}} and let B be an algorithm in A_g. Obviously both A* and B will halt with cost C*, while D < C*. Assume that B does not expand n. We now create a new graph G' (see Figure 1) by adding to G a goal node t' with h(t') = 0 and an edge from n to t' with non-negative cost D − C(P_{s-n}). Denote the extended path P_{s-n-t'} by P*, and let I' = (G', s, Γ ∪ {t'}, h) be a new instance in the algorithms' domain. Although h may no longer be admissible on I', the construction of I' guarantees that f(n') ≤ D if n' ∈ P*, and thus algorithm A* searching G' will find a solution path of cost at most D (Dechter & Pearl, 1983). Algorithm B, however, will search I' in exactly the same way it searched I; the only way B can reveal any difference between I and I' is by expanding n. Since it did not, it will not find the solution path P* but will halt with cost C* > D, the same cost it found for I. This contradicts its property of being as good as A*. ∎
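For reference, here is a compact rendering of the A* algorithm in the form Section 1.1 describes: expand the open node of least f = g + h, retain only the cheapest-g path to each node, and halt when a goal is selected for expansion. The toy graph and heuristic are my own example, not from the paper.

```python
import heapq

def a_star(successors, h, start, goals):
    """successors(n) -> iterable of (n', arc_cost); h(n) -> heuristic estimate."""
    open_heap = [(h(start), start)]          # heap entries are (g + h, node)
    g = {start: 0.0}
    closed = set()
    while open_heap:
        f, n = heapq.heappop(open_heap)
        if n in closed or f > g[n] + h(n):   # stale entry for an improved node
            continue
        if n in goals:
            return g[n]                      # the cost C* of an optimal path
        closed.add(n)
        for m, c in successors(n):
            if m not in g or g[n] + c < g[m]:
                g[m] = g[n] + c              # keep only the cheapest-g path to m
                heapq.heappush(open_heap, (g[m] + h(m), m))
    return None

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("t", 5)], "b": [("t", 1)], "t": []}
h = {"s": 3, "a": 3, "b": 1, "t": 0}.__getitem__   # admissible: h <= h* everywhere
print(a_star(lambda n: graph[n], h, "s", {"t"}))    # -> 4.0, via s-a-b-t
```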
2.2 Nonoptimality Over Algorithms Compatible with A*

[Figure 1: the graph G extended with a new goal node t' attached to n, as used in the proof of Theorem 1.]

Theorem 1 asserts the optimality of A* over a somewhat restricted class of algorithms, those which never return a solution more expensive than A*'s, even in instances where non-admissible h are provided. If our problem space includes only admissible cases, we should really be concerned with a wider class A_c of competitors to A*, those which only return as good a solution as A* in instances of I_AD, regardless of how badly they may perform hypothetically under non-admissible h. We shall call algorithms in this class compatible with A*.

Disappointingly, A* cannot be proven to be optimal over the entire class of algorithms compatible with it, and, in fact, some such algorithms may grossly outperform A* in specific problem instances. For example, consider an algorithm B guided by the following search policy: Conduct an exhaustive right-to-left depth-first search but refrain from expanding one distinguished node n, e.g., the leftmost son of s. By the time this search is completed, examine n to see if it has the potential of sprouting a solution path cheaper than all those discovered so far. If it has, expand it and continue the search exhaustively. Otherwise, return the cheapest solution at hand. B is clearly compatible with A*; it cannot miss an optimal path because it would only avoid expanding n when it has sufficient information to justify this action, but otherwise will leave no stone unturned. Yet, in the graph of Figure 2a, B will avoid expanding many nodes which are surely expanded by A*. A* will expand node J1 immediately after s (f(J1) = 4) and subsequently will also expand many nodes in the subtree rooted at J1. B, on the other hand, will expand J3, then select for expansion the goal node γ, continue to expand J2, and at this point will halt without expanding node J1. Relying on the admissibility of h, B can infer that the estimate h(J1) = 0 is overly optimistic and should be at least equal to h(J2) − 1 = 19, thus precluding J1 from lying on a path cheaper than (s, J3, γ).

[Figure 2: the example graphs 2a and 2b.]

Granted that A* is not optimal over its compatible class A_c, the question arises whether an optimal algorithm exists altogether. Clearly, if A_c possesses an optimal algorithm, that algorithm must be better than A* in the sense of expanding, in some problem instances, fewer nodes than A* while never expanding a node which is surely skipped by A*. Note that algorithm B above could not be such an optimal algorithm, because in return for skipping node J1 in Figure 2a it had to pay the price of expanding J2, yet J2 will not be expanded by A* regardless of the tie-breaking rule invoked. If we could show that this "node tradeoff" pattern must hold for every algorithm compatible with A*, and on every instance of I_AD, then we would have to conclude that no optimal algorithm exists. Figure 2b, however, represents an exception to the node-tradeoff rule; algorithm B does not expand a node (J1) which must be expanded by A*, and yet it never expands a node which A* may skip. We now show that cases such as that of Figure 2b may occur only in rare instances.

Theorem 2: If an algorithm B, compatible with A*, does not expand a node which is surely expanded by A*, and if the graph in that problem instance contains at least one optimal solution path along which h is not fully informed (h < h*), then in that very problem instance B must expand a node which may be avoided by A*.
Proof: Assume the contrary, i.e., there is an instance I = (G,s,Γ,h) ∈ I_AD such that a node n which is surely expanded by A* is avoided by B and, at the same time, B expands no node which is avoided by A*. We shall show that this assumption implies the existence of another instance I' ∈ I_AD where B will not find an optimal solution. I' is constructed by taking the graph Gₑ exposed by a specific run of A* (including nodes in OPEN) and appending to it another edge (n,t') to a new goal node t', with cost

    c(n,t') = D' − kₑ(s,n),    where D' = max{f(n') | n' ∈ N_{g+h}^{C*}}

and kₑ(n1,n2) is the cost of the cheapest path from n1 to n2 in Gₑ. Since G contains an optimal path P* along which h(n') < h*(n') (with the exception of γ and possibly s), we know that there is a tie-breaking rule that will guide A* to find P* and halt without ever expanding another node having f(n) = C*. Using this run of A* to define Gₑ, we see that every nonterminal node in Gₑ must satisfy the strict inequality g(n) + h(n) < C*. We shall first prove that I' is in I_AD, i.e., that h(n') ≤ h*_{I'}(n') for every node n' in Gₑ. This inequality certainly holds for n' such that g(n') + h(n') ≥ C*, because all such nodes were left unexpanded by A* and hence appear as terminal nodes in Gₑ, for which h*_{I'}(n') = ∞ (with the exception of γ, for which h(γ) = h*_{I'}(γ) = 0). It remains, therefore, to verify the inequality for nodes n' in N_{g+h}^{C*}, for which we have g(n') + h(n') ≤ D'. Assume the contrary, that for some such n' ∈ N_{g+h}^{C*} we have h(n') > h*_{I'}(n'). This implies
IReorem 3: Any algorithm which is admissible over 1m (i.e., compatible with A*) will expand, if provided a consistent heuristic h, all nodes that are surely expanded by A*. Au,of: We again construct a new graph G’, as in Figure 1, but now we assign to the edge (n,t’) a cost c =h(n)+b, where 6 = j$[ C*-D’] > 0 This finding, by the way, is not normally acknowledged in the literature. M&o (1981), for example, assumes that A+ is optimal in this sense, i.e., that no admissible algorithm equally informed to A * can ever avoid a node expanded by A *. Similar interpretations are suggested by the theorems of Gelperin (1977). This construction creates a new solution path P* with cost at most C*-6 and, simultaneously, (due to h’s consistency) retains the admissibility of h on the new instance I’. For, if at some node n’ we have h(n’) > h*p(n’) = mink(n’,n) + c 1 ; h?(n)]. then we should also have (given h (n’)Sh5(n)): h (n’) > k (n’,n) + c = k(n’,n) + h(n)+6 in violation of h’s consistency. In searching G’, algorithm AC will find the extended path P* costing C*-6, because: f (0 = 8 (n)+c = f (n)+6 I; D’+6 = C*-6 < C* and so, t’ is reachable from s by a path strictly bounded by C* which ensures its selection. Algo- rithm B, on the other hand, if it avoids expanding n, must behave the same as in problem instance I, halting with cost C* which is higher than that found by A*. This contradicts the supposition that B is both admissible and avoids the expansion of node n. . 2.4 Optimality Over Generalized Best-First. A@- IWlJllS The next result establishes A*‘s optimality over the set of generalized 6esSfirst (GBF) algo- rithms which are admissible if provided with hSh*. Algorithms in this set operate identically to A*; the lowest f path is selected for expansion, and the search halts as soon as the Arst goal node is selected for expansion. However, unlike A+, these algorithms will be permitted to employ any evalua- tion function f(P) where f(P) is a function of the nodes, the edge-costs, and the heuristic function h evaluated on the nodes of P, i.e. f P>kf (sJbn2 . . . ..n)=f~~~.[c(ni.ni+l)l.[h(~)~l~~P). Due to the path-dependent nature of f , a GBF algo- rithm would, in general, need to unfold the search graph, i.e., to maintain multiple paths to identical nodes. Under certain conditions, however, the algo- rithm can discard, as in A*, all but the lowest f path to a given node, without compromising the quality of the final solution. This condition applies when f is or&r preservin.g, i.e., if path* PI is judged to be more meritorious than P2, both going from s to n, then no cornmon extension (Ps) of PI and P2 may later reverse this judgement. Formally: f(P,kf (P2) - f PPskf (P2Ps) Clearly, both f =g+h and f(P)= max/g (n’)+h(n’)Jn’EP} are order preserving, and so is evkry combinatioi f =F(g ,h) which is monotonic in both arguments. The following results are stated without proofs (for a detailed discussion of best-first al o- rithms see Dechter 8c Pearl (1983) or Pearl (1983) f . Theorem 4: Let B be a best-first algorithm using an evaluation function f B (G,s ,l?,h) E IAD, fB satisfies: such that for every f (Pt)=f (s, nlrn2, . . . ,7)=C(Q) Vy E l?. If B is admissible for I,, then N$ s N$!i, and B expands every node in Nczh. Moreover, if fB is also of the form j’=F(g,hJ then F must satisfy F(z ,y)= +y . q An interesting implication of Theorem 4 asserts that any admissible combination of g and h, hsh *, will expand every node surely expanded by A*. 
In other words, the additive combination g + h is, in this sense, the optimal way of aggregating g and h for additive cost measures. Theorem 4 also implies that g(n) constitutes a sufficient summary of the information gathered along the path from s to n. Any additional information regarding the heuristics assigned to the ancestors of n, or the costs of the individual arcs along the path, is superfluous, and cannot yield a further reduction in the number of nodes expanded with admissible heuristics. Such information, however, may help reduce the number of node evaluations performed by A* (Martelli, 1977; Mérő, 1981).

REFERENCES

Barr, A. and Feigenbaum, E.A. 1981. Handbook of Artificial Intelligence. Los Altos, Calif.: William Kaufman, Inc.

Dechter, R. and Pearl, J. 1983. "Generalized best-first strategies and the optimality of A*." UCLA-ENG-8219, University of California, Los Angeles.

Gelperin, D. 1977. "On the optimality of A*." Artificial Intelligence, vol. 8, no. 1, 69-76.

Hart, P.E., Nilsson, N.J. and Raphael, B. 1968. "A formal basis for the heuristic determination of minimum cost paths." IEEE Trans. Systems Science and Cybernetics, SSC-4, no. 2, 100-107.

Martelli, A. 1977. "On the complexity of admissible search algorithms." Artificial Intelligence, vol. 8, no. 1, 1-13.

Mérő, L. 1981. "Some remarks on heuristic search algorithms." Proc. of Int. Joint Conf. on AI, Vancouver, B.C., Canada, August 24-28, 1981.

Nilsson, N.J. 1971. Problem-Solving Methods in Artificial Intelligence. New York: McGraw-Hill.

Nilsson, N.J. 1980. Principles of Artificial Intelligence. Palo Alto, Calif.: Tioga Publishing Co.

Pearl, J. 1983. Heuristics: Intelligent Search Strategies. Reading, Mass.: Addison-Wesley, in press.
INTELLIGENT CONTROL USING INTEGRITY CONSTRAINTS

Madhur Kohli and Jack Minker
Department of Computer Science, University of Maryland, College Park, MD 20742

* This work was supported in part by AFOSR grant 82-0303 and NSF grant MCS-79-19418.

ABSTRACT

This paper describes how integrity constraints, whether user supplied or automatically generated during the search, and analysis of failures can be used to improve the execution of function free logic programs. Integrity constraints are used to guide both the forward and backward execution of the programs. This work applies to arbitrary node and literal selection functions and is thus transparent to whether the logic program is executed sequentially or in parallel.

1. Introduction

1.1. The Problem

Interpreters for logic programs have employed, in the main, a simple search strategy for the execution of logic programs. PROLOG (Roussel [1975], Warren [1979], Roberts [1977]), the best known and most widely used interpreter for logic programs, employs a straightforward depth first search strategy augmented by chronological backtracking to execute logic programs. This control strategy is 'blind' in the sense that when a failure occurs, no analysis is done to determine the cause of the failure and to determine the alternatives which may avoid the same cause of failure. Instead, the most recent node where an alternative exists is selected. This strategy has the advantage of being efficient in that no decisions need to be made as to what to select next and where to backtrack to. However, the strategy is extremely inefficient when it backtracks blindly and thus repeats failures without analyzing their causes.

Pereira [1982], Bruynooghe [1978] and others have attempted to improve this situation by incorporating the idea of intelligent backtracking within the framework of the PROLOG search strategy. In their work the forward execution component remains unchanged; however, upon failure their systems analyze the failure and determine the most recent node which generated a binding which caused the failure. This then becomes the backtrack node. This is an improvement over the PROLOG strategy but still suffers from several drawbacks. Their scheme works only for a depth first search strategy and always backtracks to the most recent failure-causing node. Also, once the backtrack node has been selected, all information about the cause of the failure is discarded. This can lead to the same failure in another branch of the search tree.

A node in the search space is said to be closed when it has provided all the results possible from it. In most PROLOG based systems a node cannot be closed until every alternative for that node is considered. However, by using integrity constraints, as will be shown later, a node can be closed once it is determined that exploring further alternatives for that node will not provide any more results.

When executing a logic program it is often desirable to permit an arbitrary selection function and to have several active nodes at any given time. It is also useful to be able to remember the causes of failures and use this information to guide the search process.

1.2. Function Free Logic Programs and Integrity Constraints

The theory we treat is that of function-free Horn clauses as described in Kohli and Minker [1983]. It is assumed that the reader is familiar with Horn clause logic programs as described in Kowalski [1979]. We assume throughout the paper that the inability to prove an atom implies its negation (Clark [1978] and Reiter [1978]).

An integrity constraint is an invariant that must be satisfied by the clauses in the knowledge base. That is, if T represents a theory of function free logic programs and IC represents a set of integrity constraints applicable to T, then T ∪ IC must be consistent.

Integrity constraints are closed function free Horn formulae of the form:

(a) <- P1,...,Pm, or
(b) Q <- P1,...,Pm, or
(c) E1,E2,...,En <- P1,...,Pm

where the Ei, i=1,...,n, are equality predicates, i.e. each Ei is of the form xi = yi, where at least one of xi, yi is a variable.

Thus, an integrity constraint of the form (a) represents negated data, in the sense that
We assume throughout the paper that the inability to prove an atom implies its negation (Clark[1978] and ReiterC19781). An integrity constraint is an invariant that must be satisfied by the clauses in the knowledge base. That is, if T represents a theory of func- tion free logic programs and IC represents a set of integrity constraints applicable to T, then T U IC must be consistent. 'e closed function free Horn Integrity constraints ar formulae of the form: (a) <- P,,...,P,, or (b) Q <- P,,...,P,, or (c) E,,E2,...,En <- P,, where the E., i=l,. 1 cates i.e. each where at least one ables. . . . . 'rn ..,n, are equality predi- Ei is of the form xi = y. 1 of the x., 1 y. 1 are vari- Thus, an integrity constraint of the form represents negated data, in the sense (a>, that 202 From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. PI /\ P2 /\ . . . /\ Pm can never hold if T U IC is consistent. An integrity constraint of the form (b), states that if PI /\ P2 /\ . . . /\ Pm holds then Q must be provable from the knowledge base. Integrity constraints of the form (c), represent dependencies between the arguments of P&9 Pm' . . . . 2. Goals and Integrity Constraints - -- 2.1. Integrity Constraints to Limit Forward Execu- ---- tiOn Though integrity constraints are not necessary for finding the solution of a given set of goals with respect to a given logic program (Reiter C197811, they can greatly enhance the efficiency of the search process and thus improve the performance of the problem solver (McSkimin and Minker [19771, King CI9811). Integrity constraints enable the semantics of the given domain to direct the search strategy by enabling the problem solver to prune those alterna- tives which violate integrity constraints and thus focus the search. Thus integrity constraints influence the forward execution of the problem solver by enabling it to detect which sets of goals are unsolvable because they violate integrity con- straints. This avoids exploring alternatives which must fail after a, possibly lengthy, full search. Thus whenever a new set of subgoals is gen- erated, this set can be tested to determine if it violates any integrity constraints. If so, the node in question can be discarded and another path considered. 2.2. Implementation and Search Strategy -- -- There are several forms of integrity con- straints as described in Section 1.2. Whenever a new set of goals is generated it must be tested to determine if it violates an integrity constraint. Though each of the forms (a), (b), and (c) above require slightly different treatments to determine if they are violated, the underlying mechanism for each is the same. Form (c) constraints can be transformed into form (a) by moving the disjunction of equalities on the left into a conjunction of inequalities on the right, i.e., EI,E2,-.,En <- PI, . . . . Pm is equivalent to -- <- PI,...,Pm,EI,E2,...,~. ing Form (b) can be interpreted to mean that solv- Q is equivalent to solving pI ,...,P, and thus p1 ,o*e,Pm can be replaced by Q in the set of goals. Since it is only necessary to determine if the right hand side of some integrity constraint can subsume the goal clause, an extremely straightfor- ward algorithm can be used. A clause C subsumes a clause D iff there exists a substitution c such that CoE D. The subsumption algorithm executes in linear time and does not increase the complexity of the search (Chang and Lee [1973]). 
Consider now how the various forms of integrity constraints can be used to limit the forward execution.

If a form (a) constraint subsumes a newly generated goal clause, the goal violates the constraint and can be deleted from the search space.

Whenever a literal is solved, it must be determined whether it unifies with a literal in the right hand side of a form (c) constraint. If so, the resulting substitution is applied to the constraint, the solved literal is deleted, and the clause is added to the set of integrity constraints. For example, if

    x1 = x2 <- P(x,x1), P(x,x2)

is a constraint and P(a,b) is solved, then P(a,b) unifies with a literal in the above constraint with the substitution set {a/x, b/x1}. The revised constraint

    x2 = b <- P(a,x2)

is then obtained and added to the set of integrity constraints. Now any node containing P(a,x) can be considered a deterministic node, since only one possible solution for P(a,x) exists.

Finally, if the right hand side of a form (b) constraint subsumes the goal, then the resulting substitution is applied to the left hand side of the constraint, and a new alternative goal is generated with the left hand side substituted for the right hand side of the constraint. For example, if

    Q(x,z) <- P1(x,y), P2(y,z)

is a constraint, and the goal clause under consideration is

    <- R(a,u), P1(a,y), P2(y,z), S(b,z)

then <- P1(x,y), P2(y,z) subsumes the goal with substitution {a/x}. Applying this substitution to <- Q(x,z) results in <- Q(a,z). Generating an alternative node with Q replacing P1, P2 then results in the above goal node being replaced by an OR-node whose alternatives are the original goal and the goal with Q substituted. [Figure: the parent node with these two OR-alternatives.]

Whenever a violation of an integrity constraint occurs, it is treated as a failure. This results in failure analysis and backtracking, which are detailed in the next section.

3. Local and Global Conditions

Global conditions are integrity constraints which are applicable to every possible node in the search space. Local conditions are integrity constraints which are generated during the proof process and which are applicable only to the descendant nodes of some given node in the search space.

3.1. Failure

The failure of a literal can provide information for directing the search. A literal 'fails' when it cannot be unified with the head of any clause in the knowledge base. Since this failure means that the literal cannot be proven in the current knowledge base, because of the assumption of failure by negation, the literal's negation can be assumed to hold. Thus, this literal can be viewed as an implicit integrity constraint, and the failure can be viewed as a violation of the integrity constraint. Thus, every failure can be viewed as a violation of some integrity constraint, implicit or explicit. This allows us to extract useful information from every failure, and to use this information in directing the search.

The possible causes of unification conflicts are:

(a) The literal is a pure literal. That is, there is no clause in the knowledge base which has as its head the same predicate letter as the literal selected. This implies that any literal having the same predicate letter as the selected literal will fail anywhere in the search space. This information can be useful in terminating other branches of the search tree in which a literal containing this predicate letter occurs.
Thus if P(a,x) is a pure literal, then all of its argument positions can be replaced by distinct variables and the resulting literal can be added to the set of integrity constraints as a form (a) constraint, i.e., <- P(x1,x2) is added to the set of constraints.

(b) There are clauses in the knowledge base which could unify with the selected literal, but which do not unify because of a mismatch between at least two constant names. In this case the selected literal can never succeed with that particular set of arguments. This information can be used as an integrity constraint. For example, if the selected literal is P(x,a,x) and the only P clauses in the knowledge base are

    P(nil,nil,nil) <-
    P(z,z,b) <- P1(z,b), P2(z)

then the unification fails and <- P(x,a,x) can be added to the set of integrity constraints.

3.2. Explicit and Implicit Integrity Constraints

Integrity constraints may be either explicit or implicit. Explicit constraints are those provided initially in the domain specification. These constraints affect the forward execution of the problem solver as detailed in Section 2 and can be used in the derivation of implicit constraints. Implicit constraints are generated during the proof process, i.e., during the solution of a specific set of goals. These constraints arise out of the information gleaned from failure, as shown in Section 3.1, and from successes in certain contexts, as will be shown in later sections. These constraints may be considered to be implicit in the sense that they are not explicitly supplied but are derived during the proof process.

3.3. Applicability of Integrity Constraints

An integrity constraint may be globally or locally applicable. It is globally applicable if it can be applied to any node in the search space. Explicit constraints are always globally applicable since they are defined for the domain and are independent of any particular proof tree. Implicit constraints may be either locally or globally applicable.

A locally applicable constraint is one which must be satisfied by a given node and all its children. Any node which is not part of the subtree rooted at the node to which the constraint is locally applicable need not satisfy the constraint. Locally applicable constraints are derived from the failure of some path in the search space. The analysis of the cause of the failure results in the generation of a locally applicable constraint which is transmitted to the parent node of the failure node. This local constraint must then be satisfied by any alternative expansions of the node to which it applies. This effectively prunes those alternatives which cannot satisfy the constraint. The following example illustrates these techniques.

Logic Program:
    P(a,b) <-
    Q(y,z) <- Q1(z,x), Q2(z,y)
    Q(y,z) <- Q3(z,y)
    Q1(b,d) <-
    Q2(b,b) <-
    Q2(c,c) <-
    Q2(c,b) <-
    Q2(x,y) <- Q4(c,x)
    Q3(c,b) <-
    Q4(x,x) <-

Query: <- P(x,y), Q(y,z)

[Search tree figure: node (1) is <- P(x,y),Q(y,z); the remainder of the tree is not recoverable from the text.]

From the above search tree, node 4, <- Q1(c,x), can be propagated as a global implicit constraint, since <- Q1(c,x) can never be solved. Also, z = c can be propagated as a local implicit constraint to node 3 and thus later prevent the generation of node 7. This constraint is local to node 3 and its children since that is the node that bound z to c. As can be seen from the example, an alternative expansion of node 2, giving node 8, succeeds with z bound to c.
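A rough sketch of the failure analysis behind this example follows. The data structures and helper names are my own assumptions (capitalized strings again standing for variables); the point is only to show how a constant-mismatch failure yields both a global constraint and local binding constraints for the nodes that produced the offending bindings.

```python
def constant_mismatch(lit, head):
    """True if two corresponding constant arguments differ
    (lower-case strings are constants in this sketch)."""
    return any(a != b and a[0].islower() and b[0].islower()
               for a, b in zip(lit[1:], head[1:]))

def analyze_failure(literal, clause_heads, binding_sites):
    """literal: the failed goal, e.g. ("Q1", "c", "X");
    clause_heads: the heads it was tried against;
    binding_sites: {argument_position: node_that_bound_that_argument}."""
    global_constraints, local_constraints = [], []
    same_pred = [h for h in clause_heads if h[0] == literal[0]]
    if not same_pred or all(constant_mismatch(literal, h) for h in same_pred):
        # Pure literal or constant clash: it can never succeed as bound.
        global_constraints.append(("<-", literal))
    for pos, node in binding_sites.items():
        # Tell the binding node not to repeat this binding.
        local_constraints.append((node, ("<-", ("=", "arg%d" % pos, literal[pos]))))
    return global_constraints, local_constraints

# Node 4's failure in the example: Q1(c,x) against the only head Q1(b,d).
print(analyze_failure(("Q1", "c", "X"), [("Q1", "b", "d")], {1: "node 3"}))
```

Run on the example, this produces the global constraint <- Q1(c,x) and the local constraint forbidding z = c at node 3, matching the discussion above.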
3.4. Generation and Propagation of Conditions

Implicit constraints are generated at leaf nodes of the search space and are propagated either globally or as locally applicable constraints to some parent of the leaf node. Rules for generating and propagating implicit constraints are detailed below.

When a goal fails along all paths, then that goal, along with its current bindings, is propagated as a global integrity constraint. Thus, if P(d1,d2,...,dn), where the di, i = 1,...,n, are constants, fails for every expansion, then <- P(d1,d2,...,dn) is a global constraint because it can never succeed in the current knowledge base.

Since that goal can never succeed with its current bindings, alternatives which give rise to different bindings for its arguments must be tried. Thus those nodes which created these bindings receive as local constraints the information that these bindings must not be repeated along alternative expansions of those nodes. That is, if P(d1,...,dn) fails and there is some ancestor P' of P such that some di of P is bound to some xi of P', and xi is contained in some literal (other than P') in the clause containing P', then <- xi = di is a local integrity constraint for the clause containing P'. If there are several di which have been bound in different ancestor clauses of P, the conjunction of these bindings must be propagated to the binding clauses.

Local constraints which are propagated to a node by a descendant of the node must then be propagated to all other descendants of that node. This is because, as was noted above, the binding to xi in the node containing P' was due to the selection of some literal other than P' in that node. Thus, P' will be present in every expansion of that node, and the binding of xi will cause P' to eventually fail.

Consider a node P which has several children P1, P2, ..., Pn. Associated with each Pi is a set of local integrity constraints generated by its descendant nodes. Then if there is some local integrity constraint associated with every Pi, that integrity constraint is propagated to P.

4. Summary

A control strategy has been developed for function-free logic programs to permit intelligent search based both on domain specific information in the form of integrity constraints and on an analysis of failures. Integrity constraints limit search in the forward direction, while failure analysis results in the creation of integrity constraints. Failure analysis is also used to determine backtrack points more likely to succeed. The concepts of local and global constraints are used to inhibit exploring fruitless alternatives. Subsumption is employed to take advantage of the constraints. In Kohli and Minker [1983], a logic program is specified for an interpreter which will perform the above. We intend to incorporate these concepts into PRISM, a parallel logic programming system [Kasif, Kohli and Minker 1983], under development at the University of Maryland.

REFERENCES

[1] Bruynooghe, M., Intelligent Backtracking for an Interpreter of Horn Clause Logic Programs, Report CW 16, Applied Math and Programming Division, Katholieke Universiteit, Leuven, Belgium, 1978.

[2] Chang, C.L., and Lee, R.C.T., Symbolic Logic and Mechanical Theorem Proving, Academic Press, New York, 1973.

[3] Clark, K.L., "Negation as Failure," in Logic and Databases, H. Gallaire and J. Minker, Eds., Plenum Press, New York, 1978, pp 293-322.
[4] Kasif, S., Kohli, M., and Minker, J., PRISM: A Parallel Inference System for Problem Solving, Technical Report TR-1243, Dept. of Computer Science, University of Maryland, College Park, 1983.

[5] King, J.J., Query Optimization by Semantic Reasoning, Ph.D. Thesis, Dept. of Computer Science, Stanford University, May 1981.

[6] Kohli, M., and Minker, J., Control in Logic Programs using Integrity Constraints, Technical Report, Department of Computer Science, University of Maryland, College Park, 1983.

[7] Kowalski, R.A., Logic for Problem Solving, North-Holland, New York, 1979.

[8] McSkimin, J.R., and Minker, J., The Use of a Semantic Network in a Deductive Question Answering System, Proceedings IJCAI-77, Cambridge, MA, 1977, pp 50-58.

[9] Pereira, L.M., and Porto, A., Selective Backtracking, in Logic Programming, K.L. Clark and S.-A. Tarnlund, Eds., Academic Press, New York, 1982, pp 107-114.

[10] Reiter, R., On Closed World Data Bases, in Logic and Databases, H. Gallaire and J. Minker, Eds., Plenum Press, New York, 1978, pp 55-76.

[11] Roberts, G.M., An Implementation of PROLOG, M.S. Thesis, University of Waterloo, 1977.

[12] Roussel, P., PROLOG: Manuel de Reference et d'Utilisation, Groupe d'Intelligence Artificielle, Universite d'Aix-Marseille, Luminy, 1975.

[13] Warren, D.H.D., Implementing PROLOG: Compiling Predicate Logic Programs, Department of Artificial Intelligence, University of Edinburgh, Research Reports 39 and 40, 1979.
THE COMPOSITE DECISION PROCESS: A UNIFYING FORMULATION FOR HEURISTIC SEARCH, DYNAMIC PROGRAMMING AND BRANCH & BOUND PROCEDURES†

Vipin Kumar* and Laveen Kanal**
*Department of Computer Science, University of Texas at Austin, Austin, TX
**Department of Computer Science, University of Maryland, College Park, MD

† Paper presented at the 1983 AAAI meeting; Proc. American Assoc. for A.I., Wash., DC, Aug. 83. Research partially supported by NSF grants ECS-78-22159 and MCS-81-17391.

ABSTRACT

In this short paper we present a brief exposition of a composite decision process - our unifying formulation of search procedures - which provides new insights concerning the relationships among heuristic search, dynamic programming and branch and bound procedures.

1. Introduction

Various heuristic procedures for searching And/Or graphs, game trees, and state space representations have appeared in the A.I. literature over the last few decades, and at least some of them have been thought to be related to dynamic programming (DP) and branch and bound (B&B) procedures of operations research (O.R.). But the relationships between these classes of procedures have been rather controversial.

For example, Pohl argues in [22] that heuristic search procedures are very different from B&B procedures, whereas Hall [5] and Ibaraki [8,10] claim that many heuristic procedures for searching state space representations are essentially B&B procedures. Knuth does not consider the alpha-beta game tree search algorithm to be a B&B procedure; he considers its less efficient version (called F1 in his classical treatment of alpha-beta [14]) to be branch and bound. But Reingold et al. [23] consider alpha-beta to be a type of B&B. While describing the algorithm HS (which is the same as the AO* And/Or graph search algorithm [21]) in [18], Martelli and Montanari state that their algorithm is different from B&B because the "(B&B) technique does not recognize the existence of identical subproblems." But Ibaraki's B&B procedure [10] does recognize the existence of identical subproblems. While describing the heuristic search procedure A* for finding a shortest path in a state space, Nilsson [20] considers dynamic programming to be essentially a breadth-first search method. However, Dreyfus & Law [4] show that Dijkstra's algorithm for the shortest path [2], an algorithm very similar to A*, can be viewed as a dynamic programming algorithm. Morin & Marsten [19] permit DP computations to be augmented with bounds, which means that they do not consider it necessary that DP computations be breadth-first.

The relationship between B&B and dynamic programming techniques has also been rather controversial. Kohler [15] and Ibaraki [9] discuss how a number of dynamic programming procedures can be stated in the framework of B&B. Morin & Marsten [19] consider some classes of B&B procedures as dynamic programming procedures augmented with some bounding operations. Ibaraki's work [7,10] seems to imply that dynamic programming is a more general problem solving scheme than B&B for solving discrete optimization problems. Smith [24] presents a k-adic problem reduction system model as a model for optimization problems and considers, in that context, dynamic programming to be a bottom-up approach and B&B to be a top-down procedure.

We have developed a methodology whereby most of these procedures can be viewed in a unified manner [17]. The scheme reveals the true nature of these procedures and clarifies their relationships to each other. The methodology also aids in synthesizing (by way of analogy, suggestions, induction, etc.) new variations as well as generalizations of the existing search procedures. In the rest of this short paper, we present a brief exposition of our unified approach to search procedures and discuss how it unveils the true nature and interrelationships of these procedures.

2. A Unified Approach

A large number of problems solved by dynamic programming, heuristic search, and B&B can be considered as discrete optimization problems, i.e., the problems can be stated as: find a least cost (or largest merit) element of a given set X. In most of the problems of interest, X is too large to make exhaustive enumeration for finding an optimal element practical. But the set X is usually not unstructured. Often it is possible to view X as a set of policies in a multistage decision process, or as a set of paths between two states in a state space, or as a set of solution trees of an And/Or graph. These and other ways of representing X immediately suggest various tricks, heuristics, and short cuts to finding an optimal element of X. These short cuts and tricks were developed by researchers in different areas, with different perspectives and for different problems; hence, it is
The methodology also aids in synthesizing (by way of analogy, suggestions, induction, etc.) new variations as well as generalizations of the existing search procedures. In the rest of this short paper, we present a brief exposition of our unified approach to search procedures and discuss how it unveils the true nature and interrelationships of these procedures.

2. A Unified Approach

A large number of problems solved by dynamic programming, heuristic search, and B&B can be considered as discrete optimization problems, i.e., the problems can be stated as: find a least cost (or largest merit) element of a given set X. In most of the problems of interest, X is too large to make exhaustive enumeration for finding an optimal element practical. But the set X is usually not unstructured. Often it is possible to view X as a set of policies in a multistage decision process, or as a set of paths between two states in a state space, or as a set of solution trees of an And/Or graph. These and other ways of representing X immediately suggest various tricks, heuristics, and short cuts to finding an optimal element of X. These short cuts and tricks were developed by researchers in different areas, with different perspectives and for different problems; hence, it is not surprising that the formal techniques developed look very different from each other, even though they are being used for similar purposes.

We have developed the concept of a composite decision process (defined below) as a general model for formulating discrete optimization problems. This model provides a good framework for representing problem specific knowledge such that it can be usefully exploited for finding an optimum element of X. We have also developed systematic procedures for exploiting such knowledge for the efficient discovery of an optimum element of X, and shown that many of the existing search procedures are special cases of our procedures.

2.1 Composite Decision Processes

A composite decision process (CDP) C = (G,t,c) is a 3-tuple where G = (V,N,S,P) is a context-free grammar (V, N, P, and S are respectively the sets of terminal symbols, nonterminal symbols, productions and the start symbol), t denotes the set of cost attributes associated with productions of G (a real valued k-ary cost attribute $t_p(\cdot,\ldots,\cdot)$ is associated with each production $p$: "$w \rightarrow w_1 \ldots w_k$" of G), and c is a real valued cost function defined over the set of terminal symbols of G.

The set of parse trees rooted at the start symbol S of G represents the discrete set X of the optimization problem formulated by C. A cost f(T) is assigned to each parse tree T of G in terms of the set of cost attributes t and the function c. We first define a real valued function $c_T$ over the nodes n of a parse tree T of G as follows:

(i) If n is a terminal symbol then

(1a)  $c_T(n) = c(n)$.

(ii) If n is a nonterminal symbol and $n_1,\ldots,n_k$ are descendents (from left to right) of n in T (implying that G has a production $p$: "$n \rightarrow n_1 \ldots n_k$") then

(1b)  $c_T(n) = t_p(c_T(n_1),\ldots,c_T(n_k))$.

If n is the root symbol of a parse tree T then we define

(2)  $f(T) = c_T(n)$.

For a node n of a parse tree T, $c_T(n)$ denotes the cost of the subtree of T rooted at n. For a production $p: n \rightarrow n_1 \ldots n_k$, $t_p(x_1,\ldots,x_k)$ denotes the cost of a derivation tree T rooted at n if the costs of the subtrees of T rooted at $n_1,\ldots,n_k$ are $x_1,\ldots,x_k$.
Thus the cost of a parse tree is recursively defined in terms of the costs of its subtrees. See Fig. 1 for an illustration.

The minimization problem for the composite decision process C can be stated as follows: find a parse tree T* rooted at the start symbol S such that $f(T^*) = \min\{f(T) \mid T \text{ is a parse tree rooted at } S\}$.

We next introduce a type of composite decision process for which the minimization problem can be reduced to the problem of solving a system of recurrence equations.

Monotone Composite Decision Processes

A CDP C = (G(V,N,S,P),t,c) is called a monotone composite decision process (MCDP) if all of the k-ary cost attributes $t_p$ associated with the productions $p: n \rightarrow n_1 \ldots n_k$ are monotonically nondecreasing in each variable, i.e., $x_i \le y_i$ for $1 \le i \le k$ implies $t_p(x_1,\ldots,x_k) \le t_p(y_1,\ldots,y_k)$.

For a symbol n of the grammar G, let c*(n) denote the minimum of the costs of the parse trees rooted at n. The following theorem, proved in [17], establishes that for a monotone CDP, c*(n) can be defined recursively in terms of $\{c^*(m) \mid m \text{ is a part of a string directly derivable from } n\}$.

Theorem 1: For a monotone composite decision process C = (G,t,c), the following recursive equations hold.

(3a) If n is a nonterminal symbol then
$c^*(n) = \min\{t_p(c^*(n_1),\ldots,c^*(n_k)) \mid p: n \rightarrow n_1 \ldots n_k \text{ is a production in } G\}$.

(3b) If n is a terminal symbol then $c^*(n) = c(n)$.

For many interesting special cases there exist efficient algorithms which solve these equations to compute c*(S), the smallest of the costs of the parse trees rooted at S [17]. These algorithms can often be easily modified to build a least-cost parse tree rooted at S. In such a case, the minimization problem of a monotone CDP becomes equivalent to solving a system of recursive equations.

Relationships with Dynamic Programming

Note that solving an optimization problem by Bellman's dynamic programming technique also involves converting the optimization problem into a problem of solving a set of recursive equations. Since most of the discrete optimization problems solvable by the conventional dynamic programming technique (and many more) can be stated in the monotone CDP format, we can consider the monotone composite decision process as a generalized dynamic programming formulation, and the recursive equations of Theorem 1 as the generalized recursive equations of dynamic programming.

It is also possible to state a principle similar to Bellman's principle of optimality (all subpolicies of an optimum policy are also optimal). First, let us define the optimality criterion for a parse tree (the counterpart of Bellman's "policy" in our formulation). A parse tree rooted at symbol n of G is called an optimum parse tree if its cost (f-value) is the smallest of all the parse trees rooted at symbol n of G.

Lemma 1: For a monotone composite decision process C = (G,t,c), for every symbol n of G, there exists an optimal parse tree rooted at n, all of whose subtrees (rooted at the immediate successors of n) are optimal. Proof: See [17].

This statement is somewhat different (in fact "weaker") than Bellman's principle of optimality. Nevertheless, Lemma 1 guarantees that an optimal parse tree can always be built by optimally choosing from the alternate compositions of only the optimal subtrees. This technique of first finding the optimal solutions to small problems and then constructing optimal solutions to successively bigger problems is at the heart of all dynamic programming algorithms.
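To make the cost recursion (1a)-(1b) and the bottom-up use of the equations of Theorem 1 concrete, the following sketch (ours, in Python, not from the paper) evaluates c* on the grammar of Fig. 1 by fixpoint iteration; the encoding of productions as (right-hand-side, cost-attribute) pairs and the iteration-to-convergence strategy are assumptions of the sketch, appropriate only for CDPs where the iteration terminates (e.g., positive monotone CDPs).

    # A minimal sketch: solving the recursive equations of Theorem 1
    # by fixpoint iteration, on the grammar of Figure 1.
    INF = float('inf')

    terminal_cost = {'a': 5, 'b': 10, 'd': 15}            # the function c
    productions = {                                       # rhs symbols, cost attribute t_p
        'S': [(('a', 'A'), lambda r1, r2: min(r1, r2)),   # P1: S -> aA
              (('a', 'S'), lambda r1, r2: r1 + r2)],      # P4: S -> aS
        'A': [(('a', 'b', 'd'), lambda r1, r2, r3: r1 + r2 + r3),  # P2: A -> abd
              (('a', 'd'), lambda r1, r2: r1 + r2)],      # P3: A -> ad
    }

    def solve(productions, terminal_cost):
        """Iterate (3a)-(3b) until no c*(n) improves (monotone CDP assumed)."""
        cstar = dict(terminal_cost)                       # (3b): c*(n) = c(n) for terminals
        cstar.update({n: INF for n in productions})
        changed = True
        while changed:
            changed = False
            for n, alternatives in productions.items():
                # (3a): c*(n) = min over p: n -> n1...nk of t_p(c*(n1),...,c*(nk))
                best = min(t(*(cstar[m] for m in rhs)) for rhs, t in alternatives)
                if best < cstar[n]:
                    cstar[n], changed = best, True
        return cstar

    print(solve(productions, terminal_cost)['S'])         # prints 5, matching Fig. 1

The example also illustrates why the equations alone do not dictate a control strategy: the cyclic production S -> aS is handled here by iteration, whereas the procedures discussed below impose either a bottom-up or a top-down discipline.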
A stronger statement, much closer to Bellman's principle of optimality, can be made for a subclass of monotone CDPs. A CDP C = (G,t,c) is called a strictly monotone CDP (SMCDP) if all the k-ary functions $t_p$ associated with productions $p: n \rightarrow n_1 \ldots n_k$ are strictly increasing in each variable, i.e., $x_i < y_i$ and $x_j \le y_j$ for $j \ne i$, $1 \le j \le k$, imply $t_p(x_1,\ldots,x_k) < t_p(y_1,\ldots,y_k)$.

Lemma 2: For a strictly monotone CDP C = (G,t,c), all the subtrees of an optimal parse tree rooted at a symbol n of G are also optimal. Proof: See [17].

Relationships with Sequential Decision Processes

If the context-free grammar G of a CDP C = (G,t,c) is further restricted to be regular, then we can use the direct correspondence between regular grammars and finite state automata to show that C is essentially a sequential decision process (SDP). The concept of the SDP was introduced by Karp [13] as a model for the problems solved by dynamic programming. The concept of a sequential decision process has been extensively studied in various areas in different guises. State space descriptions used in Artificial Intelligence to solve various problems are essentially sequential decision processes. The minimization problem of an SDP is essentially a generalized version of the well known shortest path problem studied extensively in the Operations Research literature [3]. Various branch and bound, dynamic programming and heuristic search procedures have been developed for problems which can be modeled by SDPs (e.g., [7], [10], [4], [20]). Generalized versions of many of these procedures are also applicable to problems modeled by CDPs (see [17]). In fact we came upon the concept of a CDP as a generalization of the concept of an SDP.

The Scope of the CDP Formulation

The concept of a composite decision process is very important. In addition to the problems modeled by SDPs, it models a large number of problems in A.I. and other areas of computer science which can not be naturally formulated in terms of SDPs. The wide applicability of CDPs becomes obvious when we notice that there is a direct natural correspondence between context-free grammars and the And/Or graphs used in A.I. applications to model problem reduction schemata [6]. Due to this correspondence, the specification of a problem by an And/Or graph can often be viewed as its specification in terms of a CDP, and the problem of finding a least cost solution tree of an And/Or graph becomes equivalent to the minimization problem of its corresponding CDP. Due to the correspondence between And/Or trees and game trees, the problem of finding the minimax value of a game tree can also be represented in the CDP formulation. Furthermore, many other important optimization problems such as constructing an optimal decision tree, constructing an optimal binary search tree, and finding an optimal sequence for matrix multiplication can be naturally formulated in terms of CDPs (see [17]).

2.2 Solving the Minimization Problem

We have shown in [17] that, in its full generality, the minimization problem of a monotone CDP (hence of a CDP) is unsolvable. However, we identified three interesting special cases (acyclic monotone CDP, positive monotone CDP, strictly monotone CDP) of monotone CDPs for which the minimization problem is solvable. In all three cases a least cost parse tree of G rooted at S can be provably identified by generating and evaluating only a finite number of parse trees of G, even though G may generate an infinite number of parse trees.
This is a sufficient proof of the solvability of their minimization problems. But even these finitely many parse trees can be too many. In the following we briefly discuss two general techniques for solving the minimization problems of these CDPs in a manner which can be much more efficient than simple enumeration.

The Generalized Dynamic Programming Technique

One way of solving the minimization problem of a given monotone CDP C = (G(V,N,S,P),t,c) is to successively find (or revise the estimates of) c*(n) for the symbols n of the grammar G until c*(S) is found. Viewed in terms of And/Or graphs, bigger problems are successively solved starting from the smaller problems. The term "bottom up" is quite suggestive of this theme and is often used for many search procedures which are special cases of the technique discussed above. Historically, many of these procedures have also been called dynamic programming computations. Furthermore, the basic ideas of dynamic programming procedures - Bellman's principle of optimality and the recursive equations - are associated with monotone composite decision processes in their generalized forms. Hence we have named these bottom up procedures for minimization of CDPs dynamic programming. In [17] we have presented dynamic programming procedures to solve the minimization problem of the three classes of CDPs. Interestingly, Ibaraki's procedures for solving the minimization problems of SDPs, Dijkstra's algorithm for the shortest path, Knuth's generalization of Dijkstra's algorithm, Martelli and Montanari's bottom up search algorithm for constructing an optimal solution tree of an acyclic And/Or graph, and many other optimization algorithms (usually termed dynamic programming algorithms) can be considered special cases of these procedures.

The Generalized Branch-and-Bound Technique

In this second technique, we start with some representation of the total (possibly infinite) set of parse trees out of which a least cost parse tree needs to be found. We repeatedly partition this set (a partitioning scheme is usually suggested by the problem domain). Each time the set is partitioned, we delete all members of the partition for which it can be shown that, even after eliminating the set, there is a least cost parse tree in one of the remaining sets. This cycle of partitioning and pruning can be continued until only one (i.e., a least cost) parse tree is left.

This "top down" process of partitioning and pruning for determining an optimal element of a set has been used for solving many optimization problems in Operations Research, where it is known as branch and bound (B&B). It is easy to see that the central idea of B&B - the technique of branching and pruning to discover an optimum element of a set - is at the heart of many heuristic procedures for searching state spaces, And/Or graphs, and game trees. But none of the B&B formulations presented in the O.R. literature adequately model And/Or graph and game tree search procedures such as alpha-beta, SSS* [27], AO* and B* [1]. This has caused some of the confusion regarding the relationship between heuristic search procedures and B&B. To remedy this situation, we have developed a formulation of B&B which is more general and also much simpler than existing formulations.
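The partition-and-prune cycle just described is easy to state generically. The sketch below (ours, not the paper's formulation) shows the skeleton for minimization over subsets of candidates with lower bounds; the least-bound-first queue discipline and the particular interface (lower_bound, branch, is_single, cost) are assumptions of the sketch.

    import heapq

    def branch_and_bound(root, lower_bound, branch, is_single, cost):
        """Generic partition-and-prune minimization.
        root: a representation of the whole candidate set X.
        lower_bound(s): a bound <= cost of every candidate in subset s.
        branch(s): partitions subset s into smaller subsets.
        is_single(s): true when s stands for exactly one candidate.
        cost(s): exact cost of that single candidate."""
        best, incumbent = float('inf'), None
        queue, counter = [(lower_bound(root), 0, root)], 1
        while queue:
            bound, _, s = heapq.heappop(queue)
            if bound >= best:                      # prune: cannot beat incumbent
                continue
            if is_single(s):
                if cost(s) < best:
                    best, incumbent = cost(s), s
            else:
                for sub in branch(s):              # partition step
                    b = lower_bound(sub)
                    if b < best:                   # keep only promising subsets
                        heapq.heappush(queue, (b, counter, sub))
                        counter += 1
        return incumbent, best

    # Toy use: cheapest item of a list, splitting half-open index ranges.
    # (The bound here is exact, which trivializes the search; it is only
    # meant to exercise the mechanics of branching and pruning.)
    costs = [7, 3, 9, 5]
    print(branch_and_bound(
        (0, len(costs)),
        lower_bound=lambda s: min(costs[s[0]:s[1]]),
        branch=lambda s: [(s[0], (s[0] + s[1]) // 2), ((s[0] + s[1]) // 2, s[1])],
        is_single=lambda s: s[1] - s[0] == 1,
        cost=lambda s: costs[s[0]],
    ))                                             # ((1, 2), 3)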
We have further developed a B&B procedure to search for an optimum solution tree of an acyclic And/Or graph (i.e., to solve the minimization problem of an acyclic monotone CDP) which generalizes and unifies a number of And/Or graph and game tree search procedures such as AO*, SSS*, alpha-beta, and B* [17].

3. Concluding Remarks

The generalized versions of DP and B&B (for solving the minimization problems of CDPs) provide a unifying framework for a number of heuristic search procedures. In particular, the B&B formulation for searching acyclic And/Or graphs has helped unveil the close relationship of alpha-beta with SSS*, showing that if a minor modification is made in the B&B formulation of SSS*, the resulting procedure is equivalent to alpha-beta (see [17,16]). This is most interesting, for alpha-beta as conventionally presented [14] appears very different from SSS* as described by Stockman [27]. Considering that alpha-beta has been known for over twenty years, it is noteworthy that SSS* was discovered only recently in the context not of game playing, but of a waveform parsing system [26]. Perhaps if an adequate B&B formulation for alpha-beta had been available earlier, SSS* would have been developed as a natural variation of alpha-beta. The B&B formulation also makes it easy to visualize many variations and parallel implementations of SSS*, presented in [17,11,12]. In [17] we also proved the correctness of a general procedure for searching acyclic And/Or graphs. This greatly simplifies the correctness proofs of algorithms such as AO*, SSS*, and B*, since these procedures are special cases of the general procedure.

We have considered B&B and DP as two distinct ways (a top down search procedure and a bottom up search procedure) of solving the minimization problem of a CDP. However, it turns out that for an important subset of the problems formulated by the CDP, the class of DP algorithms becomes indistinguishable from the class of B&B algorithms (see [17]). This explains why DP and B&B algorithms for several optimization problems were thought to be related, and why the B&B procedures of Ibaraki [10] for solving the minimization problem of an SDP could also be viewed as DP computations.

The general search procedures discussed in this paper make use of two types of information to efficiently find an element of a discrete set X - "syntactic" and "semantic". The syntactic information is present in the representation of X (e.g., by a context-free grammar, regular grammar, etc.), and is used in DP in the form of the principle of optimality, and in B&B in the form of a dominance relation. But the only semantic information used in these procedures is in the form of heuristics or bounds associated with subproblems or subsets of X. It should be interesting to investigate what other types of problem specific knowledge can be integrated into these search procedures or their variations. We conjecture that such investigation will also be of help in improving problem solving search procedures which are not necessarily used for optimization problems.

Figure 1 (reconstructed from damaged text). A composite decision process C = (G,t,c), where G = ({a,b,d},{S,A},S,P), with productions and cost attributes:
  P1: S -> aA,   t_{P1}(r1,r2) = min(r1,r2)
  P2: A -> abd,  t_{P2}(r1,r2,r3) = r1+r2+r3
  P3: A -> ad,   t_{P3}(r1,r2) = r1+r2
  P4: S -> aS,   t_{P4}(r1,r2) = r1+r2
and terminal costs c(a) = 5, c(b) = 10, c(d) = 15. The derivation tree T1 depicting the derivation of aabd from S has c_{T1}(a) = 5, c_{T1}(b) = 10, c_{T1}(d) = 15, c_{T1}(A) = 5+10+15 = 30, and f(T1) = c_{T1}(S) = min(5,30) = 5.
REFERENCES

[1] Berliner, H., The B* Tree Search Algorithm: A Best-First Proof Procedure, Artificial Intelligence 12, pp. 23-40, 1979.
[2] Dijkstra, E.W., A Note on Two Problems in Connection with Graphs, Numer. Math. 1, pp. 269-271, 1959.
[3] Dreyfus, S.E., An Appraisal of Some Shortest Path Algorithms, Operations Research 17, pp. 395-412, 1969.
[4] Dreyfus, S.E. and Law, A.M., The Art and Theory of Dynamic Programming, Academic Press, New York, 1977.
[5] Hall, P.A.V., Branch-and-Bound and Beyond, Proc. Second Int'l. Joint Conf. on Artificial Intelligence, pp. 641-658, 1971.
[6] Hall, P.A.V., Equivalence Between AND/OR Graphs and Context-Free Grammars, Comm. ACM 16, pp. 444-445, 1973.
[7] Ibaraki, T., Solvable Classes of Discrete Dynamic Programming, J. Math. Analysis and Applications 43, pp. 642-693, 1973.
[8] Ibaraki, T., Theoretical Comparison of Search Strategies in Branch and Bound, Int'l. Journal of Computer and Information Science 5, pp. 315-344, 1976.
[9] Ibaraki, T., The Power of Dominance Relations in Branch and Bound Algorithms, J. ACM 24, pp. 264-279, 1977.
[10] Ibaraki, T., Branch-and-Bound Procedure and State-Space Representation of Combinatorial Optimization Problems, Inform. and Control 36, pp. 1-27, 1978.
[11] Kanal, L. and Kumar, V., Parallel Implementations of a Structural Analysis Algorithm, Proc. IEEE Conf. Pattern Recognition and Image Processing, pp. 452-458, Dallas, August 1981.
[12] Kanal, L.N. and Kumar, V., A Branch and Bound Formulation for Sequential and Parallel Game Tree Search, Proc. 7th Int'l. Joint Conf. on A.I., pp. 569-571, Vancouver, August 1981.
[13] Karp, R.M. and Held, M., Finite-State Processes and Dynamic Programming, SIAM J. Appl. Math. 15, pp. 693-718, 1967.
[14] Knuth, D.E. and Moore, R.W., An Analysis of Alpha-Beta Pruning, Artificial Intelligence 6, pp. 293-326, 1975.
[15] Kohler, W.H. and Steiglitz, K., Characterization and Theoretical Comparison of Branch and Bound Algorithms for Permutation Problems, J. ACM 21, pp. 140-156, 1974.
[16] Kumar, V. and Kanal, L., A General Branch and Bound Formulation for Understanding and Synthesizing And/Or Tree Search Procedures, Artificial Intelligence 21, pp. 179-197, 1983.
[17] Kumar, V., A Unified Approach to Problem Solving Search Procedures, Ph.D. Dissertation, Univ. of Maryland, College Park, 1982.
[18] Martelli, A. and Montanari, U., Optimizing Decision Trees Through Heuristically Guided Search, Comm. ACM 21, pp. 1025-1039, 1978.
[19] Morin, T.L. and Marsten, R.E., Branch and Bound Strategies for Dynamic Programming, Operations Research 24, pp. 611-627, 1976.
[20] Nilsson, N., Problem-Solving Methods in Artificial Intelligence, McGraw-Hill, New York, 1971.
[21] Nilsson, N., Principles of Artificial Intelligence, Tioga Publ. Co., Palo Alto, CA, 1980.
[22] Pohl, I., Is Heuristic Search Really Branch and Bound?, Proc. 6th Annual Princeton Conf. Infor. Sci. and Systems, pp. 370-373, 1972.
[23] Reingold, E., Nievergelt, J. and Deo, N., Combinatorial Optimization, Prentice-Hall, 1977.
[24] Smith, D.R., Representation of Discrete Optimization Problems by Dynamic Programs, Tech. Rep. NPS 52-80-004, Naval Postgraduate School, 1980.
[25] Smith, D.R., Problem Reduction Systems, unpublished report, 1981.
[26] Stockman, G.C. and Kanal, L., Problem-Reduction Representation for the Linguistic Analysis of Waveforms, IEEE Trans. PAMI 5, 3, May 1983.
[27] Stockman, G.C., A Minimax Algorithm Better Than Alpha-Beta?, Artificial Intelligence 12, pp. 179-196, 1979.
SOLVING THE GENERAL CONSISTENT LABELING (OR CONSTRAINT SATISFACTION) PROBLEM: TWO ALGORITHMS AND THEIR EXPECTED COMPLEXITIES

Bernard Nudel
Dept. Computer Science, Rutgers University, New Brunswick, N.J. 08903

ABSTRACT

The Consistent Labeling Problem is of considerable importance in Artificial Intelligence, Operations Research and Symbolic Logic. It has received much attention, but most work has addressed the specialized binary form of the problem. Furthermore, none of the relatively few papers that treat the general problem have dealt analytically with the issue of complexity. In this paper we present two algorithms for solving the general Consistent Labeling Problem, and for each of these the expected complexity is given under a simple statistical model for the distribution of problems. This model is sufficient to expose certain interesting aspects of complexity for the two algorithms. Work currently in progress will address more subtle aspects by extension to more refined statistical models.

I INTRODUCTION

The problem we consider here has received much attention under various names such as the Constraint Satisfaction Problem [1], the Consistent Labeling Problem [2], the Satisficing Assignment Problem [3], the Discrete Relaxation Problem [4] and the Relation Synthesis Problem [5]. We will use the term Consistent Labeling Problem or CLP.* Relatively little appears treating the general form of the Problem - some exceptions are [5, 7, 2, 8]. The specialized binary form is treated in [3, 9, 10, 11, 12] and notably in [13, 14], where analytic expected complexities are derived. A summary of [14] appears in [15].

*"CLP" denotes a family of problem instances [6]. A specific instance of CLP will be denoted by the lower-case "clp". Analogously, we write "Problem" (with an upper-case P) for a set of instances and "problem" for a single instance.

The general Consistent Labeling Problem CLP is characterized by a finite list Z of n variables. Each variable $z_i$ has an associated finite domain $D_{z_i}$, from which it can take any of $M_{z_i}$ values or labels. Constraints exist on which values are mutually compatible for various subsets of the n variables. A solution-tuple or consistent labeling assigns to each of the n variables a value in its corresponding domain such that all problem constraints are simultaneously satisfied. The goal is to find one or more solution-tuples. We analyze algorithms that find all solution-tuples for a problem.

A constraint involving exactly r variables is said to be of arity r, or to be r-ary. The sub-Problem of CLP containing all, and only, problem instances with at least one r-ary constraint and no constraints of arity greater than r we call the r-ary Consistent Labeling Problem rCLP. Note that an r-ary problem may or may not have constraints of arity less than r. For ease of presentation, the results here are for the pure r-ary Problem $\pi$rCLP on n variables, each of equal domain size M. Instances of this Problem have exactly one r-ary constraint for each of the $\binom{n}{r}$ possible r-subsets of the n problem variables, and have no constraints of arity less than r. In [16] we generalize to instances whose variables may have different size domains and whose constraints may have different arities, with possibly zero, one or even several constraints, independently for any subset of variables. The theory there also distinguishes between otherwise identical problems whose constraints differ in their degree of constraint or compatibility of their argument variables. This makes possible relatively problem-specific prediction of complexity for any conceivable clp. It also allows capturing of the important search-order effects on complexity that one finds for clps.

II STATISTICAL MODELS FOR PROBLEMS

In [13] Haralick carries out an expected complexity analysis for two pure binary CLP algorithms (BT and FC below) under a simple statistical model for problem generation. We call this model 0. This model simply considers that for any pair of values assigned to any pair of variables, the probability is p that they are compatible with respect to the corresponding binary constraint. In [14] we extend Haralick's work by carrying out expected complexity analyses under more complex models 1 and 2, which have the important advantage of capturing the effect on complexity of changes in search orders used by an algorithm. Such analyses were thus useable to give theoretical insight into how to intelligently order the search for solutions - often at significant savings in search effort. All analyses to date however treat only the pure binary Problem $\pi$2CLP.

In this first analytic work beyond the binary CLP case we will stay with the analogue of the binary statistical model 0. Again this will capture no order dependence effects - but we nevertheless will obtain useful insight into the main features of algorithm complexity for solving the more general clps. For the general CLP the analogue of the above statistical model 0 is simply this: for each possible tuple of values assigned to the argument variables of a constraint, the probability is p that the value-tuple belongs to (i.e. is compatible with respect to) the constraint. We use this model for the pure r-ary CLP here. We expect
This makes possible relatively problem-specific prediction of complexity for any conceivable clp. It also allows capturing of the important search-order effects on complexity that one finds for clps. II STATISTICAL MODELS FOR PROBLEMS In [ 131 Haralick carries out an expected complexity analysis for two pure binary CLP algorithms (BT and FC below) under a simple statistical model for problem generation. We call this model 0. This model simply considers that for any pair of values asslgned to any pair of variables, the probability is p that they are compatibile with respect to the corresponding binary constraint. In [ 141 we extend Haralick’s work by carrying out expected complexity analyses under more complex models 1 and 2, which have the important advantage of capturing the effect on complexity of changes in search orders used by an algorithm Such analyses were thus useable to give theoretical insight into how to intelligently order the seach for solutions - often at significant savings in search effort All analyses to date however treat only the pure binary Problem n2CLP. In this first analytic work beyond the binary CLP case we will stay with the analogue of the binary statlstical model 0. Again this will capture no order dependence effects - but we nevertheless will obtain useful insight into the main features of algorithm complexity for solving the more general clps For the general CLP the analogue of the above statistical model 0 is simply this: for each possible tuple of values assigned to the argument variables of a constraint, the probability is p that the value-tuple belongs to (i.e. is compatibile with respect to) the constraint. We use this model for the pure r-ary CLP here. We expect 292 From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. shortly to complete the analysis for the fully general CLP, under this and the more refined statistical models 1 and 2 Results under the model 2 will provide theoretical insight into order effects in the general case analogous to those made available in [ 141 for the pure binary case, as well as providing the analogous precision in predictrng a problem’s complexity. III ALGORITHMS In [ 131 Haralick empirically compares seven different algorithms for solving pure binary clps. For the experiments he conducted the ranking of algorithms obtained was essentially: best = wFC > FC > BM > PL > FL > BC > BT = worst where the abbreviations denoting algorithms are as used in c141. In particular, BT denotes the Backtracking algorithm [ 17, 181, FC denotes the Forward Checking algorithm and wFC is a variant of FC we call word-wise Forward Checking that exploits the bit-parallelism available in present machines. FC and wFC seem to have been independently discovered by Haralick [ 131 and McGregor [ 121, but in fact FC also appears in the earlier paper of Golomb and Baumert [ 171 where it is referred to as Backtracking with preclusion. In figures Ill- 1 and Ill-2 respectively we present recursive versions of our general Backtracking (gBT) and general Forward Checking (gFC.1 algorithms for solving arbitrary clps. These are quite natural generalizations of the corresponding pure binary algorithms BT and FC, which we now rename to be n2BT and n2FC. Common to both algorithms is the notion of an instantiation order** X = I: x1 x2 xn ] being some permutation of the conventionally ordered problem variables z = [ Zl 22 z, 1. 
By definition, a k-th level node of the gBT or gFC search tree is formed when variable x_k is instantiated to (assigned) some value x̄_k from its corresponding domain D_{x_k} - this happens at lines 2 of gBT and gFC. At all k-th level nodes in either algorithm, instantiations or value assignments have been made for the same ordered sequence A_k = [x1 x2 ... xk] of the first k variables of the (globally available) list X. Nodes at level k are distinct only in that they correspond to different instantiations made to the variables of A_k. An instantiation sequence for the variables of A_k is a list Ā_k = [x̄1 x̄2 ... x̄k] of values x̄_i ∈ D_{x_i} for the variables of A_k. It is built up in these algorithms using the list concatenation operator ||. Such an instantiation sequence Ā_k corresponds to a path through the search tree. Note that the node for a given instantiation sequence Ā_{k+1} may not actually be generated, due to the discovery of a violation of problem constraints at an earlier node Ā_k ⊂ Ā_{k+1} on that path. This is in fact where the usefulness of such algorithms arises, since attempts to generate any candidate solution-tuples containing Ā_k are then avoided.

Initially, for both algorithms, k = 1 and Ā_0 = [ ]. In gBT all domains D_z remain unchanged and are global. In gFC the domains of some variables are in general updated at each node and are passed as an argument in the vector of domains D (cf. line 11 of gFilter).

  1  gBT( k Ā_{k-1} )
  2    Do for all x̄_k ∈ D_{x_k}
  3      Ā_k ← Ā_{k-1} || [x̄_k]
  4      If gCheck( k Ā_k ) ≠ "x̄_k wipe-out"
  5        then if k < n then gBT( k+1 Ā_k )
  6             else print Ā_k
  7    end
  8  end gBT

  1  gCheck( k Ā_k )
  2    Do for i = 1 to m_k
  3      If v̄_{ki}( Ā_k ) ∉ T_{ki}
  4        then return "x̄_k wipe-out"
  5    end
  6    Return "No x̄_k wipe-out"
  7  end gCheck

Figure III-1: gBT and its subroutine gCheck

  1   gFC( k Ā_{k-1} D )
  2     Do for all x̄_k ∈ D_{x_k}
  3       Ā_k ← Ā_{k-1} || [x̄_k]
  4       If k < n
  5         then Do
  6           D' ← gFilter( k D Ā_k )
  7           If D' ≠ "D_f wipe-out"
  8             then gFC( k+1 Ā_k D' )
  9         end
  10        else print Ā_k
  11    end
  12  end gFC

  1   gFilter( k D Ā_k )
  2     Do for all f ∈ F_k
  3       Do for i = 1 to m_kf
  4         Do for all f̄ ∈ D_f
  5           If v̄_{kfi}( Ā_k f̄ ) ∉ T_{kfi}
  6             then D_f ← D_f - [f̄]
  7         end
  8         If D_f = ∅ then return "D_f wipe-out"
  9       end
  10    end
  11    D' ← [ D_{x_{k+1}} D_{x_{k+2}} ... D_{x_n} ]
  12    Return D'
  13  end gFilter

Figure III-2: gFC and its subroutines gFilter

Testing tuples of instantiations for compatibility with respect to constraints is denoted as a test of membership of the tuple in the set of compatible tuples for the corresponding constraint (line 3 of gCheck and line 5 of gFilter). In practice these tests will usually be carried out on a constraint defined intensively via a procedure, rather than defined extensively as a set. The algorithms and our analysis below are compatible with both representations.

Algorithm gBT: At a level k node the current instantiation x̄_k is tested for compatibility with respect to all m_k constraints that involve x_k and some of the k-1 earlier variables of A_k. For a pure r-ary clp, $m_k = \binom{k-1}{r-1}$, since this is the number of possible r-ary constraints over x_k and r-1 other variables chosen from the k-1 variables of A_k besides x_k. Note that $\sum_{1 \le k \le n} \binom{k-1}{r-1} = \binom{n}{r}$ as required. The i-th constraint tested (if no wipe-out prevents it) at each level k node we denote T_{ki}. Its list of argument variables we denote by v_{ki}. The corresponding list of values for these variables, given the assignments of Ā_k, is denoted v̄_{ki}(Ā_k). This is simply the projection of Ā_k onto v_{ki}.
For example, if A_5 = [z3 z6 z2 z9 z4], Ā_5 = [e a e g b] and the argument-list for constraint T_{5i} is v_{5i} = [z2 z4 z6], then v̄_{5i}(Ā_5) = [e b a]. Given the instantiations of Ā_{k-1}, the current instantiation x̄_k violates constraint T_{ki} if v̄_{ki}(Ā_k) ∉ T_{ki}, and this is tested at line 3 of gCheck.

Algorithm gFC: In addition to A_k, gFC also uses F_k = [x_{k+1} x_{k+2} ... x_n], the list of future variables, or not yet instantiated variables, at level k. At a level k node, each future variable f ∈ F_k has each of its potential future instantiations f̄ ∈ D_f (as contained in D, the list of updated domains for future variables) tested for compatibility with respect to all m_kf constraints that involve x_k, f and some of the k-1 earlier variables of A_k. For pure r-ary clps, $m_{kf} = \binom{k-1}{r-2}$, since this is the number of possible r-ary constraints over variables x_k, f and r-2 other variables chosen from the k-1 variables of A_k besides x_k. Note that since there are n-k future variables f at level k, there are for the pure r-ary CLP $(n-k)\binom{k-1}{r-2}$ new constraints to check at level k, and $\sum_{1 \le k \le n} (n-k)\binom{k-1}{r-2} = \binom{n}{r}$ as required. The i-th such constraint involving f, tested at each level k node (if no wipe-out prevents it), we denote T_{kfi}. Its list of argument variables is v_{kfi}, and the list of values assigned respectively to these variables, given the instantiations of Ā_k and the value f̄ being tested for f, we denote v̄_{kfi}(Ā_k f̄). Given Ā_k, value f̄ violates constraint T_{kfi} if v̄_{kfi}(Ā_k f̄) ∉ T_{kfi}, and this is tested at line 5 of gFilter. Lack of compatibility leads to f̄ being removed or filtered from its domain D_f. The sample trace for $\pi$2FC appearing in fig. 4 of [14] may be helpful.

Note that the selection order for f ∈ F_k at line 2 of gFilter may be a function of level, or even a function of the node. In fact one could generalize the algorithm by merging the loops at lines 2 and 3 of gFilter to give

  Do for (f i) ∈ F_k × [1 ... m_kf] in some order

returning "D_f wipe-out" as soon as one occurs. However for the pure binary clp, $m_{kf} = \binom{k-1}{0} = 1$ and only the order of selection of f from F_k is an issue. This is the case studied in [14], where we use the theory to suggest global and local Consistency-check Ordering (CO) heuristics for ranking the f of F_k. However when $m_{kf} \ne 1$ the question becomes how to rank the pairs (f i). If the statistical model were refined enough one might even study the advantages of merging the loop at line 4 as well with those at lines 2 and 3. However under the present simple statistical model 0, the nesting of loops shown is optimal, and any residual indeterminism is irrelevant (with respect to average complexity) in either algorithm. In particular, the instantiation ordering X is irrelevant on average under model 0. For individual problems though, a good ordering can lead to significant savings, and our more refined model 1 and 2 analyses of [14] capture this effect for the pure binary case. These theories are then used there to suggest theory-based global and local Instantiation Order (IO) heuristics that are found to be quite effective in reducing the complexity of problem solving.
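To make the control structure of Figure III-1 concrete, here is a small runnable rendering of gBT (our sketch in Python, not code from the paper); the representation of constraints as a mapping from variable subsets to sets of allowed value-tuples is an assumption of the sketch.

    from itertools import combinations

    def gBT(k, A_bar, X, domains, constraints, n, solutions):
        """gBT of Figure III-1: A_bar holds values for X[0..k-2]; level k instantiates X[k-1]."""
        x_k = X[k - 1]
        for v in domains[x_k]:
            A_new = A_bar + [v]
            if gCheck(k, A_new, X, constraints):       # no wipe-out at this node
                if k < n:
                    gBT(k + 1, A_new, X, domains, constraints, n, solutions)
                else:
                    solutions.append(list(A_new))

    def gCheck(k, A_bar, X, constraints):
        """Test A_bar against every constraint over x_k and earlier variables of A_k."""
        assignment = dict(zip(X[:k], A_bar))
        for scope, allowed in constraints.items():     # scope: a tuple of variables
            if X[k - 1] in scope and all(var in assignment for var in scope):
                if tuple(assignment[var] for var in scope) not in allowed:
                    return False                       # "x_k wipe-out"
        return True

    # Toy pure 2-ary clp: 3 variables, M = 2, one constraint per pair
    # (values of each pair must differ, i.e. 2-coloring a 3-clique).
    X = ['z1', 'z2', 'z3']
    domains = {z: [0, 1] for z in X}
    constraints = {pair: {(a, b) for a in (0, 1) for b in (0, 1) if a != b}
                   for pair in combinations(X, 2)}
    solutions = []
    gBT(1, [], X, domains, constraints, 3, solutions)
    print(solutions)                                   # [] : no solution-tuples exist

Each constraint is checked exactly once per search path, at the level of its last instantiated variable, which is the bookkeeping that the counts m_k above formalize.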
IV ANALYTIC RESULTS

Under statistical model 0 it is easy to determine the probability P(clp) of a given problem clp - analogously to the model 1 result for the r = 2 case given in [14]. In terms of P(clp) we can define the expected total number of nodes in a search tree (for a given algorithm) as $\bar{N} = \sum_{clp} N(clp)\,P(clp)$, where N(clp) is the actual total number of nodes generated for problem clp. Similarly, the expected total number of consistency-checks performed in the search tree (for a given algorithm) is by definition $\bar{C} = \sum_{clp} C(clp)\,P(clp)$, where C(clp) is the actual total number of checks for problem clp.

These expressions are not useful as they stand, since N(clp) and C(clp) are not known analytically. However they can be transformed into useable expressions. We can use $N(clp) = \sum_{1 \le k \le n} N(k\ clp)$ and $C(clp) = \sum_{1 \le k \le n} C(k\ clp)$, where N(k clp) and C(k clp) are respectively the actual number of nodes generated and checks performed at the k-th level in problem clp. In terms of these we can define the expected number of nodes and checks at the k-th level of the search tree, given by $\bar{N}(k) = \sum_{clp} N(k\ clp)\,P(clp)$ and $\bar{C}(k) = \sum_{clp} C(k\ clp)\,P(clp)$. The expected totals are then expressible as the expectations at a level summed over all levels: $\bar{N} = \sum_{1 \le k \le n} \bar{N}(k)$ and $\bar{C} = \sum_{1 \le k \le n} \bar{C}(k)$.

By successive transformations of this kind, the following expected-value expressions are obtained in [16]. As mentioned, for notational simplicity we present results for the pure r-ary CLP on n variables, each variable having equal domain size M. We expect to present in [16] the fully general result under more refined statistical models 1 and 2.

Algorithm gBT:

(1)  $\bar{N}(k) = M^k\, p^{\binom{k-1}{r}}$

(2)  $\bar{C}(k) = \bar{N}(k)\, \bar{c}(k)$

(3)  $\bar{c}(k) = [\,1 - p^{\binom{k-1}{r-1}}\,] / (1-p)$

Algorithm gFC:

(4)  $\bar{N}(k) = M^k\, p^{\binom{k}{r}} [\,1 - (1 - p^{\binom{k}{r-1}})^M\,]^{n-k}$

(5)  $\bar{C}(k) = \bar{N}(k)\, \bar{c}(k)$

For either algorithm, $\bar{c}(k)$ is the expected number of checks performed at a node generated at level k. An expression for $\bar{c}(k)$ of gFC still remains to be determined.
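A quick numerical reading of (1) and (4) is possible by direct evaluation; the short sketch below (ours, not from the paper) sums the expected node counts over levels for both algorithms and prints one point of the ratio plotted in Figure IV-1. The particular parameter values are illustrative only.

    from math import comb

    def N_gBT(k, n, M, r, p):
        # Equation (1): expected nodes at level k for gBT
        return M**k * p**comb(k - 1, r)

    def N_gFC(k, n, M, r, p):
        # Equation (4): expected nodes at level k for gFC
        return M**k * p**comb(k, r) * (1 - (1 - p**comb(k, r - 1))**M)**(n - k)

    def ratio(n, M, r, p):
        total_bt = sum(N_gBT(k, n, M, r, p) for k in range(1, n + 1))
        total_fc = sum(N_gFC(k, n, M, r, p) for k in range(1, n + 1))
        return total_fc / total_bt

    # e.g., one point of the n = 3r family of Figure IV-1, at p = 0.5, M = n:
    print(ratio(n=12, M=12, r=4, p=0.5))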
This ef feet however requires p very close to 1. Even at p = 0.99 we have found that the advantage of gFC over gBT is often still signif icant. It was however pointed out in [ 131 that c, not N, is the appropriate measure of complexity for these algorithms. For the pure binary case we have in [ 141 used c under model 0 to analytically compare algorithm a2FC against x2BT and n2FC against n2wFC. We expect that the corresponding comparison for the pure r-ary case will soon be possible, once we obtain an expression for the still outstanding c(k) of gFC. ACKNOWLEDGEMENTS Many thanks to my Ph.D. committee - Saul Amarei, Martin Dowd, Marvin Pauli and especially my supervisor William Steiger - who strongly suggested I generalize to arbitrary arity constraints. Without their “encouragement” this paper may never have been written. References IllI I21 c31 141 c51 Fikes, R. E. problems “REF-ARF: A system for solving stated procedures.” Intelligence. 1 ( 1970;527- 120. Artificial Haraiick, R. M. and Shapiro, L. G. “The consistent labeling problem: Part I.” IEEE Trans. Pattern Analysis and Machine (19791 173-184. Intelligence. PAMI- 112 Gaschnig, J., Performance measurement analysis of certain search alfforithms. and PhD dissertation, Dept Computer Sciekze, Carnegie- Mellon U., 1979. Rosenfeld, A., Hummei, R. and Zucker, S. “Scene labeling by relaxation operations.” / EEE Trans. Systems, 420-433. Man and Cybernetics. SMC-6 (1976) Freuder, E. C. “Synthesizing constraint Comm. ACM. 2 1 (1978) 958-966. expressions.” II61 c71 L-81 cg1 Cl01 Cl II Cl21 Cl31 L-141 Cl51 Cl61 r171 Cl81 Garey, M. R. and Johnson, D. S. Computers and Intractability. Freeman, San Francisco, 1979. Haralick, R. M., Davis, L. s. and Rosenfeid, A. “Reduction operations for constraint satrsfaction.” Information Sciences. 14 (I 978) 199-2 19. Haralick, R. M. and Shapiro, L. G. “The consistent labeling problem, Part II.” IEEE Trans. Pattern Analysis and Machine I ntel I igence. PAMI-2:3 ( 1980) 193-203. Gaschnig, J. “Experimental case studies of backtrack vs. Waltz-type vs. assignment problems.” new algorithms for satisficlng In Proc. 2-nd National Conf. Canadian Sot. for Computational Studies of Intelligence. Toronto, Ontario, 1978, Montanari, U. “Networks of constraints fundamental properties and applications to picture processing.” f nformation Sciences. 7 ( 1974) 95- 132. Mackworth, A. K. Relations.” “Consistency in Networks of Artificial Intelligence. 8 (1977) 99-l 18. McGregor, J. J. “Relational consistency algorithms and their application in findlng subgraph and graph isomorphisms.” Information Sciences. 19 ( 1979) 229-250. Haraiick, R. M. and Elliot, G. L search efficiency for “Increasing tree constraint satisfaction problems.” Artificial Intelligence. 14 (I 980) 263-313. Nudel, B. A. algorithms: “Consistent-labeling problems and their expected-complexities and theory- based heuristics.” Artificial lnteliigence. 2 1: 1 and 2 March (1983) Special issue on Search and Heuristics, in memory of John Gaschnig; This issue is also published as a seperate book. Search and Heuristics, North-Holland, Amsterdam 1983. Nudei, B. A. “Consistent-labeling problems and their algorithms.” In Proc. National Conf. Artificial intelligence. Pittsburg, 1982, 128- 132. Nudei, B. A., title to be decided, PhD dissertation, Dept. Computer Science, Rutgers U., 1983, To appear Goiomb, S. W. and Baumert, L. D. “Backtrack programming.” J. Assoc. Computing Machinery. 12 (1965) 516-524. Bitner, J. R. and Reingold, M. “Backtrack pr501gr;rn;ing techniques. n Comm. 
ACM 18 (1975) 651-656.
AI/MM: AN INTEGRATED REASONING SYSTEM FOR PHYSIOLOGICAL MODELING

John C. Kunz*
Heuristic Programming Project, Stanford University, Stanford CA 94305

ABSTRACT

The objective of this research is to demonstrate a methodology for the design and use of a physiological model in a computer program that suggests medical decisions. The physiological model is based on first principles and facts of physiology and anatomy, and it includes inference rules for analysis of causal relations between physiological events. The model is used to analyze physiological behavior, identify the effects of abnormalities, suggest appropriate therapies, and predict the results of therapy. This methodology integrates heuristic knowledge traditionally used in artificial intelligence programs with mathematical knowledge traditionally used in mathematical modeling programs. In recognition of its origins in artificial intelligence and mathematical modeling, the system is named AI/MM. This paper briefly introduces the knowledge representation and examples of the system's analysis of behavior in the domain of renal physiology.

I OVERVIEW

In AI/MM, analysis and explanation of physiological function rest ultimately on analysis of facts of anatomy and facts and principles of physiology. Physiological principles are expressed either as heuristic rules or as mathematical relationships describing physical laws. Causal relations are based on physiological principles, and they describe possible changes in the state of the modeled system. AI/MM distinguishes between two kinds of causal relations. "Type-1" causal relations are empirical and are based on definitions or on repeated observation. "Type-2" causal relations have a basis in physical law, stated mathematically. Inference rules are proposed for identifying the paths through which physiological processes can function.

In its current domain, renal physiology, AI/MM analyzes physiological behavior and explains the rationale for its analyses. The program fits data to the model, decides whether the data are abnormal, and identifies the possible effects of any abnormalities. The physiological model is based on knowledge about anatomy, the behavior of the physiological system, and the mechanisms of action of physiological processes.

Many expert systems have been built using large collections of rules that describe empirical associations. Some recent AI work is based on explicit representation of the structure and function of a system. Davis (Davis, 1982), Genesereth (Genesereth, 1982), and Patil (Patil, 1981) describe computer programs that are based on structure and function. AI/MM differs from these programs in its explicit representation of principles of physiology, its strategy for propagation of causal relations, and its explicit representation of the basis for causal relations.

II KNOWLEDGE REPRESENTATION

AI/MM represents anatomical objects, physiological processes, physiological substances, parameters, and mechanisms of action of processes. Figure 1 illustrates the way in which these generic kinds of concepts can refer to themselves and to related concepts. Knowledge of physical laws is represented as mechanisms of action, and the form of a law is represented as a mathematical formula. The system interprets mathematical relations both symbolically, to identify constraint relations in inferring causality, and quantitatively, to calculate the value of parameters.
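That dual use of a single relation can be illustrated in a few lines of Python (our sketch, not AI/MM code, whose knowledge base is written in MRS; the function names and the sign vocabulary are our own assumptions). The relation Flow = Pressure / Resistance of Section II serves both to compute a value and to propagate a qualitative causal influence:

    # A minimal sketch, assuming the relation Flow = Pressure / Resistance
    # (Poiseuille's law). Not AI/MM code.

    def flow_value(pressure, resistance):
        """Quantitative use: calculate the value of the Flow parameter."""
        return pressure / resistance

    def flow_direction_of_change(d_pressure, d_resistance):
        """Symbolic use: infer the sign of the change in Flow ('+', '-',
        '0', or '?') from the signs of changes in Pressure and Resistance."""
        up = d_pressure == '+' or d_resistance == '-'    # influences pushing Flow up
        down = d_pressure == '-' or d_resistance == '+'  # influences pushing Flow down
        if up and down:
            return '?'                                   # competing influences: ambiguous
        return '+' if up else '-' if down else '0'

    print(flow_value(100.0, 4.0))                # 25.0
    print(flow_direction_of_change('0', '+'))    # '-' : increased resistance, reduced flow

As the last line shows, the symbolic reading reproduces the Type-2 causal relation quoted below ("increasing resistance causes reduced blood flow"), while the quantitative reading supplies actual parameter values; ambiguous sign combinations are exactly the cases where quantitative analysis is needed.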
Figure 1 (diagram not reproduced): The AI/MM vocabulary represents five main kinds of physiological concepts - anatomical objects, physiological processes, physiological substances, parameters, and mechanisms. Concepts can refer to other concepts as shown in the diagram.

The AI/MM knowledge base currently includes definitions of about 125 concepts. Concepts currently have between 5 and 65 individual features; 8 features per concept is typical. Concepts represent two kinds of physical objects: anatomical objects, such as the heart, and physiological substances, such as blood. Objects can have characteristic parameters: fluid volume, concentration, pressure, flow rate, etc. Additional concepts specify the features of each defined parameter. In addition, concepts define "physiological processes", or the rules by which parameters can change values. Finally, concepts describe "mechanisms", or the physical laws and causal relations that are the bases for explaining the operation of processes. The PCONCEPT function can be used to print the definition of any concept in the knowledge base, as shown in Figure 2.

  *PCONCEPT(Body)              ; Typed by user.
                               ; An excerpt of the AI/MM
                               ; response is shown below.
  Concept name: Body
  Concept type: AnatomicalObject
  Fluid Volume: BodyWaterVolume
  Major Fluid: Water
  Subparts: (Arteries BodyWaterSpace Capillaries GI Heart
             Kidney Pituitary RestofBody Veins)
  Fluid flows into and out of the body:
    intake WaterIntake
    loss   GIFluidLoss
    loss   BloodLoss
    loss   InsensibleLoss
    loss   UrineOutput
  Parameters:
    BodyWaterVolume: total body water volume
  Anatomical connections:            Physiological process:
    output world    > intake GI        Water intake
    output Veins    > intake world     bleeding
    output Arteries > intake Kidney    Renal perfusion
    ...

Figure 2: Excerpt of the definition of the concept "Body" printed by AI/MM in response to a user query.

The knowledge base includes inference rules that are based on a definition of the causal relation between states. Using the causal relation and the knowledge of anatomy and physiology, the program makes inferences about normal physiological behavior and the causes and effects of abnormal physiological behavior. Parameters may be related qualitatively or quantitatively. Causal relations are used to infer physiological behavior. For example, these relations are used to infer where fluids can flow within an anatomical network and to infer the effect of changes in pressures and concentrations on fluid flows and volumes. An example of a causal relation is shown below in Figure 3. AI/MM uses the MRS knowledge representation system (Genesereth, 1980); the example rule is translated here from its MRS syntax. The causal rule makes the causal relation between states explicit, such as the relation between a flow and a fluid capacity in this example. In addition, it explicitly describes the basis for the causal relation. I define the basis of a causal relation as its underlying principle. The basis of a causal relation is used in the explanation of its use. In AI/MM, the bases of causal relations include widely accepted empirical observations and laws of physics, described respectively as Type-1 and Type-2 causal relations.

"Infectious disease causes fever, according to widely accepted clinical observation" is an example of a causal relation with a Type-1 basis in empirical observation of physiological behavior: the association between infectious disease and fever is widely known, yet the physiological mechanism for this association is not now widely understood. "Increasing resistance to flow in an artery causes reduced blood flow through that artery, according to Poiseuille's law" is an example of a causal relation with a Type-2 basis in physical law.

  IF   $AOBJECT is an anatomical object which has a
         fluid intake named $INTAKE, and
       $CAPACITY is the fluid volume of $AOBJECT, and
       $INTAKE is abnormally increased, and
       Change in fluid capacity of $AOBJECT equals the
         difference between its fluid intakes and its
         fluid losses
  THEN the increased fluid intake of $INTAKE may cause
         increased fluid capacity of $AOBJECT, according
         to the law of conservation of mass

Figure 3: An example of a causal rule. Variables prefixed by "$" are uninstantiated.

The distinction between Type-1 and Type-2 bases has several important uses. First, AI/MM explains physiological behavior by describing both the existence and the inferred basis of causal relations. The basis helps to elucidate the nature of the inferred relations. Second, Type-2 bases of causal relations are related to physiological principles, and thus they may have broad applicability in different domains and in different contexts within a domain. In contrast, knowledge bases built solely with Type-1 causal relations will apply only in limited specific domains and contexts. AI/MM achieves some generality by applying laws of physics and fundamental principles of physiology in a useful and uniform way. Thus, the same principles may be useful for analyzing other domains to which these principles apply. Finally, the Type-1 / Type-2 distinction clearly separates knowledge based on well-understood scientific principle from heuristic knowledge. This distinction helps to identify promising areas for further scientific research.

Type-1 bases for causal relations can have either qualitative or quantitative forms, while Type-2 bases for causal relations always have quantitative forms. Both the infectious disease and the resistance examples quoted above are expressed in qualitative form. The basis for the former relation is expressed qualitatively. The basis for the latter is inferred from a mathematical representation of a physical law: Flow = Pressure / Resistance (Poiseuille's law, which is a version of Ohm's law for fluids).

A single causal relation may be used to describe a particular behavior. Much more interesting than a single causal relation, however, is the inference of a description of a complex physiological behavior based on a sequence of causal relations. Causal relations can be propagated through an anatomical network, subject to the constraints imposed by the physiological function of that network. A propagated sequence of causal relations determines a "mechanism of action", which is a concept widely used in physiology and is often represented as a diagram with arrows connecting related physiological states. I define a mechanism of action in AI/MM as the sequence of causal relations by which an initial physiological state (the cause) propagates through an anatomical network to cause a resultant physiological state (the effect).

The mechanism of action provides a strong focus-of-attention heuristic for analyzing a physiological model to describe and predict behavior. Problems are analyzed in terms of the relatively small number of Type-1 and Type-2 relations that can affect physiological behavior at any point in an anatomical network. In addition, attention to mechanisms of action helps to focus the process of acquiring new knowledge about a problem. It is useful to include knowledge of a process in the knowledge base if the behavior of the process can be used for inferring the behavior of the modeled system. In addition, if a process is important, its parameters and its causal mechanisms are also important for inclusion in the knowledge base.

III EXAMPLE: ANALYSIS OF STATIC FACTS

In addition to its explicitly represented facts, the AI/MM knowledge base has rules for inferring facts of physiology and anatomy. Thus, the system can infer values that are not explicitly represented in its knowledge base. Within the tradition of applied AI systems, AI/MM is a rule-based system. For example, AI/MM uses rules that allow the system to infer the identity of all the nested subparts of an anatomical object.

Both the user and the system can retrieve the quantitative value of a parameter. Depending on the situation, values can be looked up in the database, calculated from expected default values, and inferred. If the patient weight, for example, has been asserted in the database and the user types (VALUE 'WEIGHT), the system looks up that value and prints it for the user. If the user types (VALUE '(DEFAULTVALUE UrineOutput)), the system attempts to infer a default value for the current qualitative state of the parameter. Quantitative parameter values can also often be calculated from quantitative relations. The AI/MM knowledge base currently represents physiological principles including the principle that the whole equals the sum of the parts, Ohm's law expressed as Poiseuille's law and the Starling hypothesis, the law of conservation of mass expressed simply as the Fick principle, and the principles of dilution volume and clearance.

Figure 4 shows a simple example in which one of these methods is inferred by the system to be potentially applicable. This figure shows three ways that AI/MM has to calculate the value of a particular parameter. The first method is to make a specialized physiological measurement, and the second is to use a rule-of-thumb estimate based on the patient's sex and weight. These two methods are explicitly included in the knowledge base. The third method is inferred by the system from the anatomical facts of the situation and the general principle that the size of the whole equals the sum of the sizes of the parts. When the user simply asks for the value of a parameter, as in the example of Figure 5, the system first identifies alternative calculation methods. Then it uses a precedence heuristic to choose the method in the current context that is likely to produce the most accurate value of the specified parameter, that is, depends least on default values. Finally, having chosen a method, the system calculates, reports and asserts the parameter value.
  *(MEASURE ? (VOLUME BodyWaterSpace))   ; Typed by the user.
                                         ; The response typed by the
                                         ; system is shown below.
  To measure (VOLUME BodyWaterSpace) = BodyWaterVolume,
  measure one of:
    VDISTRIBUTION D2O
    Fraction of total body weight assumed to be water
      TIMES patient weight
    Sum of (volume of intra-cellular fluid space,
      volume of the extra-cellular fluid space)

Figure 4: Description of the methods inferred by the system for determining the value of a particular parameter.

  *(VALUE '(VOLUME BodyWaterSpace))      ; Typed by the user.
                                         ; The response typed by the
                                         ; system is shown below.
  Value of (VOLUME BodyWaterSpace) = BodyWaterVolume
    = SEXF TIMES WEIGHT
    = Fraction of total body weight assumed to be water
      TIMES patient weight
    = .6 TIMES 70.0
      [according to a Type-1 relation represented as a
       regression equation]
    = 42.00 [Liters]

Figure 5: Computation of a quantitative value of a parameter. The system infers alternative ways to calculate a value, chooses a method, and returns the result with a summary of the equation used in the computation.

IV EXAMPLE: CAUSAL ANALYSIS OF EFFECT OF CHANGE

Consider the case of a patient who is observed to drink 10 liters of fluid in a day ("normal" is 1.5 - 2.5 liters/day). The user reports this value to the program and asks the program to interpret the significance of the observation. Figure 6 below shows the input by the user and the top-level summary provided by AI/MM of the effects of the specified observation. Subsequent subsections in turn show the detailed analysis by the program of the expected physiological behavior of the modeled system. Figure 7 shows the intermediate-level summary; the heuristic used in preparing the top-level summary is to summarize effects at the highest appropriate level of anatomical detail. More detailed analysis of cause and effects is presented at a finer level of anatomical detail.

Validation of hypothesized relations is an important issue in causal analysis. This issue is complicated in biological systems when each case is different and the analysis of a model can have only limited accuracy in predicting the behavior of the modeled system. The AI/MM approach to the issue is to be as careful as possible in hypothesizing causal relations and then to help the user to validate hypothesized relations by identifying effects of the given cause that can be tested in the modeled system. In the second example [Figure 6], the system hypothesizes increased urine output as one result of the initial change to the system. An elevated urine output would support the hypothesis that the cause and effect are related in the specified way in an individual case.

The reasoning shown in this example is typical of AI/MM. The system reasons forward from observed cause to hypothesized effect. In the presence of evidence for a hypothesized effect, the system assumes that the effect is present in the patient. The system then searches for further effects of the newly hypothesized cause. Propagation of effects continues until no further effects are found or until a negative feedback loop is recognized. The validity of the inferred sequences of events can be tested in the modeled system, namely, the patient. AI/MM can propose tests to confirm the presence of hypothesized states; for example, it could suggest that urine output should be measured. AI/MM also reports normal therapy goals for each abnormal state in a cascade of effects; in the example, none was found.
* Typed by the user.
*(INTERPRETVALUE 'WaterIntake 10)

Water intake of 10 Liters/day is in the range 5 Liters/day < water intake <= 20 Liters/day, which is moderately elevated.
Normally, increased water intake causes increased body water volume. Increased body water volume causes decreased water intake. In addition, increased body water volume causes increased urine output. Increased urine output causes reduced body water volume. Increased urine output normally will continue until body water volume returns to normal.
Moderately increased water intake does not normally require therapy.

Figure 6: Initial top-level summary by AI/MM of the expected effects of a given measurement.

Normally, increased water intake causes increased body water volume, according to the effect of the law of conservation of mass.
Increased body water volume causes reduced renal tubule water permeability, according to an empirical Type-1 causal relation.
Decreased renal tubule water permeability causes reduced renal tubule reabsorption, according to the effect of Poiseuille's Law.
Decreased renal tubule reabsorption causes increased urine output, according to the effect of the law of conservation of mass.
Increased urine output causes reduced body water volume, according to the effect of the law of conservation of mass.
Increased urine output will continue until body water volume returns to normal.

Figure 7: Intermediate-level summary of the sequence of effects of high water intake. This summary was prepared by AI/MM. Each of these conclusions is described at a level of anatomical detail that is intermediate between the top-level summary and the detailed cascade of effects.

A. Sequence of Causal Events

Figure 7 summarizes the cascade of causal inferences made by AI/MM as it infers that increased water intake can cause increased urine output (named UrineOutput in the knowledge base). This summary was prepared by the system by describing each element in the inferred cascade of causal relations. Each causal relation in this cascade is inferred because there is a lawful basis for the relation and because there is an anatomical network through which physiological functions can propagate and allow the cause to produce the effect. Thus, the individual relations in this cascade are intermediate in detail level between the top level, shown in Figure 6, and the most detailed analysis performed by the system. The first relation in the cascade shown in Figure 7, for example, describes the fluid flow from the outside to the gastrointestinal tract (GI), through the GI, from the GI to the veins, and so on. The second relation in the cascade describes the effect of antidiuretic hormone.

The AI/MM causal rules will infer that a physiological event, such as an increase in water intake, is a possible cause of a second event, such as increased urine output, if a hypothesized causal relation has a legitimate basis and is plausible. The basis of a causal relation is either a Type-1 basis in empirical observation or a Type-2 basis in physical law.
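Combining this basis test with the plausibility test elaborated just below, the acceptance of a hypothesized causal relation might be sketched as follows; the predicates are hypothetical stand-ins for AI/MM's knowledge-base queries.

# Minimal sketch of the two-part test for accepting a hypothesized causal
# relation (hypothetical predicates, not AI/MM's actual queries).

def causal_relation_ok(cause, effect, kb):
    has_basis = (kb.type1_empirical_relation(cause, effect)   # Type-1: empirical
                 or kb.type2_physical_law(cause, effect))     # Type-2: physical law
    plausible = (not kb.known_impossible(cause, effect)
                 and kb.anatomical_link(cause, effect))       # path for propagation
    return has_basis and plausible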
A causal relation is plausible if it is not known to be impossible in a context and if there is an anatomical link by which normal physiological function can cause change in the first parameter to propagate and cause change in the second parameter. A nonphysiological example of an anatomical link that supports a function is the overflow drain in a bathroom sink. When the regular drain is closed, the overflow drain will keep the sink from overflowing if there is a structural connection between the overflow-drain entry and the regular drain pipe and if the function of that overflow pipe is normal -- for example, free of potatoes.

V CONCLUSION

AI/MM includes modest but nontrivial structural complexity in its representation of the domain anatomy. It represents behavior that is partially understood in terms of laws of physics and basic definitions. Use of AI/MM shows that it is possible to analyze the behavior of a physiological system based on knowledge of anatomy, physiological processes, and first principles of physiology. In this application, integrated use of symbolic and quantitative analysis is more powerful than either one alone. Symbolic knowledge is used to infer both the qualitative relations and the mathematical constraints that relate the parameters of the modeled system. Thus, symbolic analysis is useful for structuring problems to be solved in particular cases. Quantitative analysis is needed to actually analyze the quantitative behavior of a modeled system. Quantitative analysis can resolve qualitative ambiguities, and it can provide quantitative estimates of values for parameters that are not or cannot be measured.

The model contains knowledge of anatomy, function, and mechanism. It may be possible to use such models to analyze the behavior of a broad class of systems similar to that of this project. The AI techniques allow an integrated representation of all the knowledge included in the model, including its definitions, anatomy, behavior, and mechanisms. Knowledge may be represented and used in a qualitative or quantitative form, as appropriate. The same inference procedure is used to infer both normal and abnormal behavior. Because the basic inference method is to propagate effects through an anatomical network, the inference procedure will exploit the available information that is relevant, and it ignores irrelevant information.
1983
9
288
Jasmina Pavlin
Computer and Information Science Department
University of Massachusetts
Amherst, Massachusetts, 01003

A model of a distributed knowledge-based system is presented. The model captures the features specific to those systems, such as alternative paths to the solution, utilization of inexact and/or incomplete knowledge and data, dynamic task creation, complex subproblem dependencies and the focusing aspect of problem solving. The model is applied to the analysis of communication policies in a distributed interpretation system. The result of the analysis is the best policy for the given environment and system conditions. Another use of the model as a real-time simulation tool is suggested.

The development and performance-tuning of a knowledge-based system like HEARSAY-II [2] is still mostly an art. Ideally, one would like to have a set of equations which relate the system's input, internal structure and output, as in classical control theory. This would allow complete analysis of the system and prediction of its behavior for any input. Unfortunately, in such complex systems the interaction of many parameters precludes such characterization. We feel that even with a complex system a limited analysis should be attempted, and that this is possible with an appropriate modeling procedure.

One of the first attempts at modeling a knowledge-based system was done by Fox [1], but his model is too abstract to deal with the phenomenon of subproblem interaction, an important factor in system performance. His model is also limited in applicability because it is a static approximation, namely, time relationships among processing elements are not considered. In our earlier work on system measures [3], we addressed one aspect of the performance-tuning problem: what would be the change in system performance if a component with different characteristics was introduced? Due to the nature of the question, that work concentrated on the model of a component (knowledge source, scheduler), and it relied on system mechanisms for component interaction.

This research was sponsored, in part, by the National Science Foundation under Grant MCS-m27 and by the Defense Advanced Research Projects Agency (DOD), monitored by the Office of Naval Research under Contract NR049-041.

This work centers around a different question: what would be the change in performance if a different relationship among the components was introduced? We develop a model of the complete system and its environment, in which a processing component is relatively simple, and the focus is on the interaction among the components. Since our intention is to model a distributed system, both the interaction among knowledge sources and the interaction among the nodes are considered. In the following sections, we describe the model of a distributed knowledge-based system (DKBS), show an example of its application as an analysis tool, and suggest its application as a real-time simulation tool for these systems.

There are a number of features that need to be incorporated in any realistic performance model of a DKBS:
1. The system usually works on a single problem, which is divided into many subproblems with complex dependencies, so that allocation of one subproblem can not be considered independent of the others.
2. The solution is derived by employing a limited search, in which there is no full enumeration, but only promising alternatives are explored. Thus, many tasks are created during processing; they are not known before the processing starts.
3. Processing in DKBSs is often characterized by uncertainty, since input data may be inaccurate or missing. Also, the problem on which the system is working is often so complex that the solution methods have been only partially identified.
4. In order to reduce the uncertainty in the solution, there is often redundancy in the process. It originates either in alternative views of the environment, or in different types of knowledge applied to the same data. Alternative solution paths are formed, and there are many possible tasks involved in the solution of the problem. The focusing problem becomes a crucial aspect of successful processing, and the main issue centers not around the question of who will do the work, but whether anybody needs to do it at all.

We build a model starting with a Petri-net representation [5]. The Petri-net formalism has the basic concepts necessary for modeling a distributed system with asynchronous processing: the notion of events which occur under certain conditions and the ability to represent the asynchronous nature of events. The basic concepts in the model are activities, domains and data units. The correspondence between a Petri-net and a DKBS is shown in Table 1. An activity represents a knowledge application process, while a domain is a part of the environment or the processing space from which the activity takes its input. Data contained in a domain we call a data unit. In a HEARSAY-like system, for example, an activity would be a group of knowledge source invocations, a domain would be a part of the area of interest, and a data unit would be a group of hypotheses. Domain boundaries and the scope of an activity are both application dependent. Their choice for a particular application will be shown in the example section.

The system configuration is defined as a four-tuple:

    N = (D, A, I, O)
    D - the set of domains
    A - the set of activities
    I: A to D* - input function
    O: A to D* - output function

Input and output functions specify input and output domains for an activity. A state of the network is defined by the placement of data units:

    P = (n1, ..., ns)    s - number of domains in D

where ni is the number of data units at domain i in D. The system executes by changing its state. The state is changed by performing an activity, which causes data units to be created at its output domains. An activity can be performed only if it is enabled, that is, if it has a data unit in each input domain. The execution ends when a data unit is created in one of the system output domains (those which are not input to any activity).

    Petri-net    | DKBS interpretation
    -------------+--------------------
    place        | domain
    transition   | activity
    token        | data unit

Table 1: The correspondence between a Petri-net and a DKBS system.

Interpreted in this way, Petri-nets become a convenient formalism for depicting static relationships among the components in a DKBS, but the formalism lacks a dynamic system characterization. The system is more successful if its solution is more accurate, or achieved in a shorter time; a succession of activities which leads to such a solution represents a good allocation strategy. Accuracy and time are then essential characteristics of both activities and data. We depart from basic Petri-nets by augmenting data units with attributes and activities with transition functions. Also, in order to capture the focusing aspect of problem solving, we define execution rules, specifying which among the possible tasks will be performed.
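As a concrete illustration, the configuration and execution cycle defined above might be sketched as follows; the network here is a toy example with standard Petri-net token consumption, not the testbed configuration analyzed later.

# Minimal sketch of the configuration N = (D, A, I, O) and its execution
# cycle (toy network, not the testbed configuration analyzed later).

D = ["d0", "d1", "d2"]                      # domains
A = {"a0": {"in": ["d0"], "out": ["d1"]},   # I and O for each activity
     "a1": {"in": ["d1"], "out": ["d2"]}}
OUTPUT = ["d2"]                             # domains that feed no activity

state = {"d0": 1, "d1": 0, "d2": 0}         # P = (n1, ..., ns): units per domain

def enabled(act):
    return all(state[d] > 0 for d in A[act]["in"])

while not any(state[d] > 0 for d in OUTPUT):
    ready = [a for a in A if enabled(a)]
    act = ready[0]              # execution rule: here, first enabled activity
    for d in A[act]["in"]:
        state[d] -= 1           # consume input data units (Petri-net firing)
    for d in A[act]["out"]:
        state[d] += 1           # create output data units
    print("performed", act, "->", state)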
The time required to perform an activity is a function of the amount of work that needs to be done. We have chosen the concept of volume as the simplest estimate of that amount of work. Thus, we define a data unit as a triple:

    d = (v, a, t)
    v - the volume of data
    a - the accuracy of data
    t - the time of arrival of data.

The value of each of these attributes in the model is an estimate that needs to be obtained from the real system by some sampling process. In a HEARSAY-like system, for example, the volume is an estimate of the number of hypotheses, the accuracy is their belief, and an estimate of the arrival of data is the time attribute.

Let us denote by ds the data unit which represents the solution of the system: ds = (vs, as, ts). In general, a higher accuracy will be achieved by combining more independent views on the problem, at the expense of a longer solution time. Consequently, the objective of the system represents a trade-off between these two opposing requirements, and we define the performance evaluation function to be the ratio of the accuracy and the elapsed time:

    J = as / ts.

An activity is seen as performing three functions, fv, fa and ft, on the attributes of input data. Let the input data units be denoted di = (vi, ai, ti), i = 1, ..., p, and the output data unit d = (v, a, t). Then the functions of an activity can be represented as follows:

    v = fv(v1, ..., vp)
    a = fa(a1, ..., ap)
    t = ft(v1, ..., vp, t1, ..., tp).

The fv, or volume transition function, determines the volume of the output based on the volumes of inputs. The fa, or accuracy transition function, determines the accuracy of the output based on the accuracies of inputs. The ft, or time transition function, determines the time of creation of output data. The output time depends both on input times and the volume of data. An activity is performed if it has the highest priority among enabled activities (those which have data in their input domains). The execution rule specifies this priority relationship.

A critical factor in the model's applicability is determination of the transition functions used by the model. They may be hard to determine accurately, especially if the intention is to use the model in the design phase, when a working system is not available. However, the functions can be stated in rather general form (as will be shown in the example) and fine tuned in the verification phase.

As an example application, let us consider the use of the model in determining appropriate communication strategies for the distributed vehicle monitoring testbed [4]. The testbed simulates a distributed interpretation system whose goal is to create a dynamic map of vehicles moving through the system's environment. Vehicles emit acoustic signals which are identified and roughly located by sensors. Sensors report this information to nearby nodes. Every node is an architecturally complete HEARSAY-II system. In order to create a map, every vehicle or formation of vehicles (pattern) has to be identified, located and tracked. A vehicle is identified by a number of groups, corresponding to its different acoustic sources (engine, fan). Groups correspond to signals related by the same harmonic frequency. Thus, four levels of abstraction can be identified in the solution process: signal, group, vehicle and pattern. For this example, we assume that signal tracks are formed first, and then combined into tracks on higher abstraction levels.

Let us consider a system with two nodes which partially overlap in their input domains. For this example, we consider only single vehicle formations moving in one direction.
The nodes are positioned along that direction, so that node 1 receives input data first. The solution is to be formed at node 2. It is then appropriate that node 1 should send information to node 2. We want, with the help of the model, to answer the following questions:
1. What type of information should be communicated: exclusive (non-overlapping), shared (overlapping), or all (overlapping and non-overlapping)?
2. Should the information be communicated on a low level of abstraction (group) or on a high level (pattern)?

Six possible configurations, corresponding to different combinations of communication level and communicated information, will be examined:
a. Communication of non-overlapping information on a low level.
b. Communication of non-overlapping information on a high level.
c. Communication of all information on a low level.
d. Communication of non-overlapping information on a low level, overlapping information on a high level.
e. Communication of overlapping information on a low level, overlapping information on a high level.
f. Communication of all information on a high level.

For this problem we define four types of activities:
1. Synthesis (S), whose results are data on a higher abstraction level.
2. Merging (M), whose results are data of a larger scope (longer tracks).
3. Unification (U), which combines different views of the same events.
4. Communication (C), which moves data from one node to the other.

The transition functions are based on the observations of the testbed behavior. Their definition is summarized in Table 2. The execution rule used in the simulation assumes a fixed priority relation when more than one activity is enabled.

    Communication (C):  fv = Vi,  fa = Ai,  ft = Ti + tc

Table 2: Definition of transition functions.
    fv - volume transition function
    fa - accuracy transition function
    ft - time transition function
    di - input data unit, di = (Vi, Ai, Ti)
    Cs, Cm - knowledge power constants, Cs = Cm = 1
    tp - time to process unit volume, tp = 1
    tc - time to communicate unit volume, tc = 1

We define four input data units:
1. d11 is the input from the domain exclusive to node 1.
2. d12 is the input that node 1 collects from the overlapping domain.
3. d21 is the input that node 2 collects from the overlapping domain.
4. d22 is the input from the domain exclusive to node 2.

The model is simulated for the following data definition: d11 = (2, 0.6, 0), d12 = (3, 0.6, 0), d21 = (3, 0.4, 4), d22 = (2, 0.4, 4). Figures 1 to 6 show all the activities performed in each configuration with a given execution rule before the solution is reached. The input domains are marked by incoming arrows; the output domain is marked by an outgoing arrow.

Figure 1: Configuration a.
Figure 2: Configuration b.
Figure 3: Configuration c.
Figure 4: Configuration d.
Figure 5: Configuration e.

Performance is judged by comparing the values of the objective function, J, for different configurations. The values of the solution accuracy (on a zero to one scale), solution time (in number of system cycles), and the objective function obtained by model simulation and corresponding experiments in the testbed are shown in Figures 7 to 9. Both the simulation and the experiments show configuration f to be the best. Furthermore, the ordering of configurations is preserved in the simulation results, serving as a limited verification of the model.

An important step towards the control of complex systems is the analysis of the relation between the environment, system structure and the performance.
We have devised an approach in which a limited analysis of one aspect of that problem is possible: finding a best class of communication policies for a distributed interpretation system operating in a simplified environment. The approach may prove useful outside simple and structured environments amenable to analysis. We believe, based on initial results, that the policies are relatively robust to limited changes in environmental conditions, so that the results also hold for more realistic environments similar to the analyzed ones. Also, the results of the analysis can be transferred to more complex environments, if they can be treated as a combination of the simple environments.

Figure 6: Configuration f.
Figure 7: Solution accuracy.
Figure 8: Solution time.
Figure 9: Objective function.

Another use of the model is as a system stand-in. A system supplied with the results of the analysis has a stored communication policy for an analyzed environment; a system supplied with the model and confronted with a novel environment can simulate a number of communication structures. Although without the analysis benefits of complete search and a global optimum, this may be a very useful guide in the choice of communication policies.

ACKNOWLEDGMENTS
I am grateful to Daniel D. Corkill and Victor R. Lesser, whose careful reading and helpful suggestions led to an improved version of the original manuscript.

REFERENCES
1. Mark S. Fox, "Organizational Structuring: Designing Large Complex Software." Technical Report, Department of Computer Science, Carnegie-Mellon University, Pittsburgh, Pennsylvania, December 1979.
2. Victor R. Lesser and Lee D. Erman, "A Retrospective View of the Hearsay-II Architecture." In Proc. IJCAI-77, pp. 790-800.
3. Victor R. Lesser, Scott Reed and Jasmina Pavlin, "Quantifying and Simulating the Behavior of Knowledge-based Interpretation Systems." In Proc. IJCAI-1979, pp. 111-113.
4. Victor Lesser, Daniel Corkill, Jasmina Pavlin, Larry Lefkowitz, Eva Hudlicka, Richard Brooks, and Scott Reed, "A high-level simulation testbed for cooperative distributed problem solving." Proceedings of the Third International Conference on Distributed Computer Systems, pages 341-349, October 1982.
5. James L. Peterson, "Petri Net Theory and the Modeling of Systems." Prentice-Hall Inc., Englewood Cliffs, NJ 07632, 1981.
1983
90
289
NON-MINIMAX SEARCH STRATEGIES FOR USE AGAINST FALLIBLE OPPONENTS

Andrew L. Reibman and Bruce W. Ballard
Computer Science Dept.
Duke University
Durham, NC 27706

ABSTRACT
Most previous research on the use of search for minimax game playing has focused on improving search efficiency rather than on better utilizing available information. In a previous paper we developed models of imperfect opponent play based on a notion we call playing strength. In this paper, we use the insights acquired in our study of imperfect play and ideas expressed in papers by Slagle and Dixon, Ballard, Nau, and Pearl to develop alternatives to the conventional minimax strategy. We demonstrate that, in particular situations, against both perfect and imperfect opponents, our strategy yields an improvement comparable to or exceeding that provided by an additional ply of search.

I. INTRODUCTION
Any two-player, zero-sum, perfect information game can be represented as a minimax game tree, where the root of the tree denotes the initial game situation and the children of a node represent the results of the moves which could be made from that node. Most previous research on search for minimax game playing has focused on improving search efficiency. Results of this type improve the quality of player decision making by providing more relevant information. In contrast, our research focuses on better utilizing information rather than searching for more. In this paper, we summarize previous work on this issue, describe a new approach based on a model of opponent fallibility, and provide and discuss our results. In particular, we have devised a modification of the *-minimax search procedure for trees containing chance nodes (Ballard [82,83]) to improve the overall performance of the minimax backup search algorithm. We shall demonstrate that, in particular situations, against perfect and imperfect opponents, our strategy yields an improvement comparable to or exceeding that provided by an additional ply of search.

In the examples appearing below, we follow convention and call the two players "Max" and "Min" and use "+" to denote nodes where Max moves and "-" to represent similar nodes for Min. Positive endgame (leaf) values denote positive payoffs for Max. Readers unfamiliar with the conventional minimax backup search and decision procedure should refer to Nilsson [80].

This work has been supported by the Air Force Office of Scientific Research under grant AFOSR-81-0221.

II. PREVIOUS WORK ON PROBLEMS WITH MINIMAX
Given perfect play by our opponent, we know from game theory that a conventional minimax strategy which searches the entire game tree yields the highest possible payoff. However, most actual players, whether human or machine, lack the conditions needed to insure optimal play. In particular, because the trees of many games are very deep, and tree size grows exponentially with depth, a complete search of most real game trees is computationally intractable. In these instances, static evaluation functions and other heuristic techniques are employed to reduce the search used in making decisions. Before presenting our current work, we discuss previous efforts to deal with incomplete search and imperfect opponents.

A. Compensating for Incomplete Search
During the middle to late 1960's, James Slagle and his associates sought to improve the performance of minimax backup by attempting to predict the expected value of (D+1)-level minimax search with only a D-level search (Slagle and Dixon [70]).
Their strategy was called the "M and N procedure" and determined the value of a Max node from its M best children and the value of a Min node from its N best children. The M and N procedure is based on the notion that the expected backed-up value of a node is likely to differ from the expected backed-up value of its best child. From empirical data, they defined a "bonus function" to be added to the static value of the best looking child of a node, hoping that this would lead to a better estimate of the true value of the parent. Using the game of "Kalah", they found that M and N yields an improvement in the expected outcome of the game about 13% as great as does an additional ply of search.

B. Modeling Imperfect Opponent Play
In Reibman and Ballard [83] we introduced general rules for constructing a model for an imperfect player based on a notion we call playing strength. Intuitively, playing strength is an indication of how well a player can be expected to do in actual competition, rather than against a theoretical perfect opponent. In our previous work, we also presented a model of an imperfect opponent based on a fixed probability of player error. The simulated imperfect Min player chose the best available move a fixed percentage of the time; otherwise Min chose another of the available moves. Thus the expected value of an imperfect opponent's "-" node was considered to be the value of its best child plus a fixed fraction of the value of any other children. Though it failed to consider the relative differences between the values of moves, this simple model was found to be better for use in our study than conventional minimax.

The reader may have noticed in the preceding section a resemblance between the notion of a bonus function and our attempt to accurately predict the expected value of moves made by a fallible opponent. In Ballard and Reibman [83b], we prove that in a simplified form of the model we present below, with a fixed probability of opponent error, our strategy can be obtained by an appropriate form of M and N (and vice versa), although the exact backed-up values being determined will differ. This is because Slagle and Dixon's bonus function was approximately linear, while ours, based on the arc-sum tree model we use below, is a 4-th degree polynomial.

III. THE UNRESOLVED PROBLEM OF OPPONENT FALLIBILITY
In addition to having an inability to completely search actual game trees, actual implementations of minimax assume perfect play by their opponent. However, this assumption often is overly conservative and can be detrimental to good play. We now present two general classes of situations where minimax's perfect opponent assumption leads to sub-optimal play.

A. Forced Losses and Breaking Ties
The first problem with minimax that we consider is its inability to "break ties" between nodes which, though they have the same backed-up value, actually have different expected results. A readily observable example of this problem is found in forced loss situations. In the two-valued game in Figure 1, Max is faced with a forced loss. Regardless of the move Max makes at the "+" node, if Min plays correctly Max will always lose. Following the conventional minimax strategy, Max would play randomly, picking either subtree with equal frequency. Suppose, however, that there is a nonzero probability that Min will play incorrectly. For illustration, assume Min makes an incorrect move 10% of the time.
Then if Max moves randomly, the expected outcome of the game is .5(0) + .5(.9*0 + .1*1) = .05. If Max knows that, on occasion, Min will move incorrectly, this knowledge can be used to improve the expected payoff from the game. Specifically, Max can regard each "-" node as a "chance node" similar to those that represent chance events such as dice rolls in non-minimax games. (Ballard [82,83] gives algorithms suited to this broader class of "*-minimax" games.) Thus Max evaluates "-" by computing a weighted average of its children, based on their conjectured probabilities of being chosen by Min, rather than by finding just the minimum. Following this strategy, Max converts the pure minimax tree of Figure 1 into the *-minimax tree also shown, and determines the values of the children of the root as 0 and 0.1. The rightmost branch of the game tree is selected because it now has the higher backed-up value. In terms of expected payoff (which is computed as 0*(0) + 1.0*(.9*0 + .1*1) = 0.1), this is clearly an improvement over standard minimax play. Furthermore, this strategy is an improvement over minimax in forced loss situations regardless of the particular probability that Min will err.

Figure 1: In an attempt to salvage a forced loss situation, the minimax tree on the left is converted to the *-minimax tree on the right. (The root "+" has two "-" children whose leaves are (0, 0) and (1, 0); minimax backs up 0 to both and Max picks either subtree with p = .5, while the weighted backup with p = .9 on Min's correct move and p = .1 on the error yields 0 and .1.)

Our observed improvement in forced loss situations is a specific example of "tie-breaking", where the equal grandchild values happen to be zero. Because minimax uses only information provided by the extreme-valued children of a node, positions with different expected results often appear equivalent to minimax. Variant strategies can thus improve performance by breaking ties with information minimax obtains but does not use.

B. Exploiting Our Opponent's Potential For Error
By always assuming its opponent is a minimax player, minimax misses another, less obvious class of opportunities to improve its expected performance. An example is found in Figure 2. Assume as above that Min makes the correct move with probability .9. If Max uses the conventional backup strategy and chooses the left node, the expected outcome of the game is 2.1. If, however, we recognize our opponent's fallibility and convert the Min nodes to "*"s (as in Figure 2), we must choose the right branch and the game's expected result increases to 2.9. Thus by altering the way we back up values to our opponent's nodes in the game tree, we can improve our expected performance against an imperfect opponent.

Figure 2: By converting the minimax tree on the left to the *-minimax tree on the right, we may capitalize on our opponent's potential for error. (The "-" children have leaves (2, 3) and (1, 20); minimax backs up 2 and 1, while the weighted backup yields 2.1 and 2.9.)

In the example of a forced loss, the improvement in performance was due to the ability of a weighted average backup scheme to correctly choose between moves which appear equal to conventional minimax. In the second example, our variant backup yielded a "radical difference" from minimax, a choice of move which differed not because of "tie-breaking", but because differing backup strategies produced distinct choices of which available move is correct.
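The conversions in Figures 1 and 2 amount to replacing the min operator with an error-weighted average. A minimal sketch, assuming the fixed-error opponent model described above (the correct move with probability 1 - e, the remaining probability shared among the other moves):

# Minimal sketch: evaluate a Min node as a chance node under the
# fixed-error opponent model (correct move with probability 1 - e).

def star_min(children, e):
    vals = sorted(children)            # Min's correct move is the minimum
    if len(vals) == 1:
        return float(vals[0])
    share = e / (len(vals) - 1)        # error probability spread over the rest
    return (1 - e) * vals[0] + share * sum(vals[1:])

# Figure 1 (forced loss), e = 0.1: Max now prefers the right branch.
print(star_min([0, 0], 0.1), star_min([1, 0], 0.1))   # 0.0 0.1
# Figure 2, e = 0.1: the weighted backup reverses Max's choice.
print(star_min([2, 3], 0.1), star_min([1, 20], 0.1))  # 2.1 2.9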
IV. A NEW MODEL FOR IMPERFECT PLAY
Having observed an opportunity to profit by exploiting errors which might be made by our opponent, we have formulated a more sophisticated model of an imperfect opponent than was previously considered. We will first provide the motivation for our enhancements and then describe the details of the imperfect player model used in the remainder of the paper.

A. Motivation for a Noise-based Model
In general, it should be fairly easy to differentiate between moves whose values differ greatly. However, if two moves have approximately the same value, it could be a more difficult task to choose between them. The strength of a player is, in part, his ability to choose the correct move from a range of alternatives. Playing strength can therefore correspond to a "range of discernment", the ability of a player to determine the relative quality of moves. An inability to distinguish between moves with radically different expected outcomes could have drastic consequences, while similar difficulties with moves of almost equal expected payoff should, on the average, have less effect on a player's overall performance. We model players of various strengths by adding noise to the information they use for decision making. A player with noiseless move evaluation is a perfect opponent, while a player with an infinite amount of noise injected into its evaluation plays randomly. We introduce noise at the top of an imperfect player's search tree in an amount inversely proportional to the player's strength.

B. The Noise-based Model in Detail
We now describe the details of our imperfect player model. Each imperfect Min player is assigned a playing strength. In simulating actual games, the imperfect Min player conducts a conventional minimax backup search to approximate the actual value of each child of the current position. The backed-up values of each child are then normalized with respect to the range of possible backed-up values, and a random number, chosen from the uniform distribution 0 <= x <= S (where S is inversely related to the player's strength), is added to the normalized value of each child. Thus the lower a player's strength, the higher the average magnitude of the noise generated. The true node value with noise added is then treated as a conventional backed-up value. We add the noise at the top of a player's search tree because the actual effect of adding noise to the top of the tree can be studied analytically, while the effect of introducing noise in the leaves is less well understood (Nau [80,82]). As described in Reibman and Ballard [83], we have verified that, in our noise-based model, decision quality degrades monotonically with respect to increases in the magnitude of the noise added to the conventional backed-up value.

V. A STRATEGY FOR USE AGAINST IMPERFECT OPPONENTS
We now present a strategy for use against imperfect opponents. We have based this strategy on the *-minimax search algorithms for trees containing chance nodes in order to compensate for the probabilistic behavior of a fallible opponent. Three main assumptions are used as a foundation:
(1) Against a Min player assumed to be perfect, we should use a conventional Max strategy.
(2) Against an opponent who plays randomly, we should evaluate "-" nodes by taking an unweighted average of the values of their children.
(3) In general, against imperfect players, we should evaluate "-" nodes by taking a weighted average of the values of their children, deriving the appropriate probabilities for computing this average from an estimate of our opponent's playing strength.

In an attempt to predict the moves of our imperfect opponent, we assign our opponent a predicted strength, denoted PS, between 0 and 1. To determine the value of "-" nodes directly below the root, our predictive strategy searches and backs up values to the "+" nodes directly below each "-" node using conventional minimax. A "-" node with branching factor Br is then evaluated by first sorting the values of its children in increasing order, then taking a weighted average using probabilities PS, (1-PS)*PS, ..., (1-PS)**(Br-1) * PS. If PS = 1, we consider only the minimum-valued child of a "-" node, in effect predicting that our opponent is perfect. At the other extreme, as PS approaches 0, a random opponent is predicted and, since the probabilities used to compute the weighted average become equal, the Min node is evaluated by an unweighted average of its children. How well our model predicts the moves of an imperfect opponent should be reflected in our strategy's actual performance against such a player.

VI. AN EMPIRICAL ANALYSIS OF THE PREDICTIVE STRATEGY
In Reibman and Ballard [83] we conducted an empirical analysis to investigate the correlation between playing strength as defined in our model and performance in actual competition. We now conduct an empirical study to compare the performance of our predictive algorithm with that of conventional minimax backup. We conduct our trials with complete n-ary game trees generated as functions of three parameters: D denotes the depth of the tree in ply, Br the branching factor, and V the maximum allowable "arc value". In our study we assign values to the leaves of the game tree by growing the tree in a top-down fashion (Fuller, et al. [73]). Every arc in the tree is independently assigned a random integer chosen from the uniform distribution between 0 and V. The value of each leaf is then the sum of the arcs leading to it from the root. The portion of our study presented here consists of several identical sets of 5000 randomly generated game trees with Br=4, D=5, and V=10. Against seven 2-ply Min opponents, ranging from pure minimax to almost random play, we pit conventional minimax players searching 1-, 2-, and 3-ply, and 10 predictive players, each with a 2-ply search and a PS chosen from between .1 and .9. The results of this experiment are found in Table 1. Before summarizing our observations, we note that the numbers in Table 1 represent points on a continuum; they indicate general trends but do not convey the entire spectrum of values which lie between the points we have considered.
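Before turning to the results, a minimal sketch of the predictive backup just described follows. One assumption made here: the geometric weights are normalized so they sum to one, which reproduces the stated limiting behavior (pure min as PS approaches 1, an unweighted average as PS approaches 0); the paper does not spell out its normalization.

# Minimal sketch of the predictive backup at a "-" node. Assumption: the
# geometric weights PS, (1-PS)*PS, ... are normalized to sum to 1.

def predictive_min(children, ps):
    vals = sorted(children)                          # Min's best move first
    weights = [ps * (1 - ps) ** k for k in range(len(vals))]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, vals)) / total

print(predictive_min([1, 20], 0.9))    # ~2.73: close to the minimum
print(predictive_min([1, 20], 0.01))   # ~10.45: close to the plain average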
Table 1: Empirical Study Results
Trials = 5000, Br = 4, D = 5, Game Values 0-50
Average payoff over all games

                        Imperfect Player Noise Range
Max's Strategy      0.00   0.25   0.50   0.75   1.00   2.00   6.00
1-ply minimax      27.23  30.60  32.34  33.13  33.62  34.14  34.58
2-ply minimax      28.15  31.29  32.90  33.46  33.90  34.47  34.76
3-ply minimax      28.98  32.05  33.36  33.96  34.31  34.65  35.01
2-ply PS = 0.9     28.21  31.40  33.03  33.62  34.03  34.58  34.91
2-ply PS = 0.8     28.21  31.40  33.03  33.62  34.03  34.58  34.92
2-ply PS = 0.7     28.21  31.40  33.02  33.62  34.04  34.58  34.92
2-ply PS = 0.6     28.21  31.40  33.02  33.62  34.05  34.60  34.95
2-ply PS = 0.5     28.20  31.41  33.05  33.66  34.10  34.66  35.00
2-ply PS = 0.4     28.20  31.42  33.10  33.70  34.15  34.72  35.07
2-ply PS = 0.3     28.17  31.41  33.11  33.75  34.19  34.79  35.12
2-ply PS = 0.2     28.13  31.40  33.13  33.77  34.22  34.83  35.16
2-ply PS = 0.1     28.08  31.39  33.14  33.79  34.24  34.85  35.19

In the first column of Table 1, we observe that, though it might be expected that pure Max backup would be the optimum strategy against conventional Min, several of our predictive players perform better than a conventional Max player searching the same number of ply. The observed improvement is as much as 7% of the gain we would expect from adding an additional ply of search to the conventional Max strategy. This result is analogous to that obtained with Slagle and Dixon's M and N strategy. Like M and N, our improvement is due, at least in part, to a strategy which, by considering information from more than one child of a node, partially compensates for a search which fails to reach the leaves.

In the central columns of Table 1, we see that against an opponent whose play is imperfect, our strategy can provide almost half the expected improvement given by adding an additional ply of search to the conventional Max strategy. We believe this gain is due primarily to the ability of our strategy to capitalize on our opponent's potential for errors. If we examine the results in the last two columns of Table 1, we observe that, against a random player, our strategy yields an improvement up to twice that yielded by an additional ply of search. As the predicted strength of our opponent goes down, our predictions of our opponent's moves become more a simple average of the alternatives available to him than a minimax backup. We have previously conjectured that the most accurate prediction of the results of random play is such a weighted average and, as expected, our strategy's performance continues to improve dramatically as the predicted strength decreases.

We also observe a possible drawback to the indiscriminate use of our strategy. When we begin to overestimate our opponent's fallibility, our performance degrades. In Column 1 of Table 1, our performance peaks. If we inaccurately overestimate the weakness of our opponent, our performance declines and eventually falls below that of minimax. We have observed similar declines in other columns as we let the predicted strength move even closer to 0 than the minimum predicted strengths shown in Table 1.

Having derived the results given above, we decided to tabulate the maximum improvement our strategy achieves over minimax. This summary is found in Table 2. We also give the results (in Table 2) of statistical confidence tests we have applied to our empirical analysis. These tests help to assess whether our strategy actually performed better than minimax.
The percentages indicate the level of confidence that the improvements observed were not due to chance (given the actual normal distribution of our sample of trees). We note that in all but the first two columns our confidence levels are well over 90%.

Table 2: Statistical Analysis of Empirical Study

                                          Imperfect Player Noise Range
                                        0.00   0.25   0.50   0.75   1.00   2.00
% improvement over 2-ply minimax
  (in % of 1 ply)                       7.2%  17.1%  52.1%  66.0%  82.9%  211.0%
Statistical confidence: is our optimum
  expected payoff better than that of
  2-ply minimax?                       58.2%  82.9%  98.2%  99.8%  99.8%  99.8%

VII. CONCLUSION
In this paper we have discussed the problem of adapting game playing strategies to deal with imperfect opponents. We first observed that, against a fallible adversary, the conventional minimax backup strategy does not always choose the move which yields the best expected payoff. To investigate ways of improving minimax, we formulated a general model of an imperfect adversary using the concept of "playing strength". We then proposed an alternative game playing strategy which capitalizes on its opponent's potential for error. An empirical study was conducted to compare the performance of our strategy with that of minimax. Even against perfect opponents, our strategy showed a marginal improvement over minimax and, in some other cases, great increases in performance were observed.

We have presented some results of our efforts to develop variant minimax strategies that improve the performance of game players in actual competition. Our present and future research includes a continued effort to expand and generalize our models of play, our predictive strategy, and the assessment of opponents using a playing strength measure. Further study of our models has included not only additional empirical experiments but also closed-form analysis of some closely related game tree search problems. We hope to eventually acquire a unified understanding of several distinct problems with minimax in order to develop a more general game playing procedure which retains the strong points of minimax while correcting its perceived inadequacies.

REFERENCES
[1] Ballard, B. A Search Procedure for Perfect Information Games of Chance: Its Formulation and Analysis. Proceedings of AAAI-82, August 1982.
[2] Ballard, B. The *-Minimax Search Procedure for Trees Containing Chance Nodes. Artificial Intelligence, to appear.
[3] Ballard, B. and A. Reibman, "What's Wrong with Minimax?". 1983 Conf. on Artificial Intelligence, Oakland University, Rochester, Michigan, April 1983.
[4] Ballard, B. and A. Reibman, "Experience with Non-Minimax Search Criteria for Minimax Trees". Technical Report in preparation, to be submitted for publication, 1983.
[5] Fuller, S., Gaschnig, J. and J. Gillogly. An analysis of the alpha-beta pruning algorithm. Dept. of Computer Science Report, Carnegie-Mellon University, July 1973.
[6] Nau, D. Quality of Decision Versus Depth of Search in Game Trees: A Summary of Results. First Annual National Conference on Artificial Intelligence, August 1980.
[7] Nau, D. Pathology on Game Trees Revisited and an Alternative to Minimaxing. Technical Report, Dept. of Computer Science, University of Maryland, Jan. 1982.
[8] Nilsson, N. Principles of Artificial Intelligence. Tioga, Palo Alto, 1980.
[9] Pearl, J. On the Nature of Pathology in Game Search. Technical Report UCLA-ENG-CSL-8217, School of Engineering and Applied Science, University of California, Los Angeles, Jan. 1982.
[10] Reibman, A. and B. Ballard.
Competition Against Fallible Opponents. 21st Southeast Region ACM Conference, Durham, North Carolina, April 1983.
[11] Slagle, J. R. and J. K. Dixon. Experiments with the M & N Tree-Searching Program. Communications of the ACM, March 1970, 13(3), 147-153.
1983
91
290
A THEORY OF GAME TREES*

Chun-Hung Tzeng and Paul W. Purdom, Jr.
Computer Science Department
Indiana University
Bloomington, IN 47405

ABSTRACT
A theory of heuristic game tree search and evaluation functions for estimating minimax values is developed. The result is quite different from the traditional minimax approach to game playing, and it leads to product-propagation rules for backing up values when subpositions in the game are independent. In this theory Nau's paradox is avoided and deeper searching leads to better moves if one has reasonable evaluation functions.

I INTRODUCTION
For game-searching methodology, Nau (Nau, 1980) recently showed that the minimax algorithm can degrade the information provided by a static evaluation function. For the games developed by Pearl, this pathology also arises (Nau, 1981; and Pearl, 1982). Pearl (Pearl, 1982) suggested why it happens: the minimax algorithm finds the minimax of estimates instead of estimating a minimax value. He also suggested one should consider product-propagation rules in order to estimate a minimax value. Nau (Nau, 1983) investigated this method experimentally and found that for Pearl's game it made correct moves more often than minimaxing did. For a different class of games (Nau's games), however, each method made approximately the same number of correct moves.

This paper introduces a mathematical theory of a heuristic game tree search. For this purpose a game model and a search model are constructed. We assume that the result from the game is win or loss and that draws are not permitted. The values (1 for a win of MAX and 0 for a win of MIN) at the leaves of the corresponding game tree are assigned probabilistically. Improved visibility of the search can be achieved by assuring that the information given by the search of a level can be retrieved from the information obtained by the search of a deeper level. This theory applies to Pearl's game and Nau's game, but it does not strictly apply to most games. Three particular results are derived in this theory. First, the exact way to estimate a minimax value is to find the conditional probability of a forced win, given the information from the search. Second, if this conditional probability is used to evaluate moves then the result from a deeper search is on the average more accurate. Third, if the positions on the search frontier are independent (as in Pearl's game), then this estimate is obtained by using the product-propagation rules suggested by Pearl (Pearl, 1981 and 1982).

In using minimax values as criteria of decision making, if we assume that after the next move both players make perfect plays according to the minimax values, then our estimate becomes the conditional probability of winning the whole game, and the theory leads to the best move in the situation where limited depth search is used to select the first move but perfect information is used for the remaining moves. Thus this theory is more realistic than the minimax theory, which should assume that perfect information is used at every move, but less realistic than a theory that recognizes that both sides mostly make their moves with imperfect information. For more realistic game playing, the evaluation function should estimate a win of the whole game at a node instead of the minimax value (i.e., a forced win). The minimax theory is based on the assumption that the value of any position at each move is always the minimax value of the position. Therefore, the evaluation at a node doesn't change for each move. The function in this paper assumes that there will be a major change after the first step. On the first move, the value is estimated from a limited depth search. On all later moves, the value becomes the minimax value. In realistic game playing, the estimate (of winning the whole game) at a position should change less drastically on most moves. The value at each node is to be estimated from a search, but the search usually goes deeper at each step. In terms of how much the value of a position changes from one move to the next, the realistic situation should be intermediate between the assumptions that are implicit in the minimax theory and the assumptions in this paper.

*This work was supported in part by the National Science Foundation under grant number MCS 7900110.
Therefore, the evalua- tion at a node doesn’t change for each move. The func- tion in this paper assumes that there will be a major change after the first step. On the first move, the value is estimated from a limited depth search. On all later moves, the value becomes the minimax value. In realis- tic game playing, the estimate (of winning the whole game) at a position should change less drastically on most moves. The value at each node is to be estimated from a search, but the search usually goes deeper at each step. In terms of how much the value of a position changes from one move to the next, the realistic situa- tion should be intermediate between the assumptions that are implicit in the minimax theory and the assumptions in this paper. *This work was supported in part b the National Sci- ence Foundation under grant number M S 7900110. cy From: AAAI-83 Proceedings. Copyright ©1983, AAAI (www.aaai.org). All rights reserved. II AN EXAMPLE III PROBABILISTIC MODELS 1980) considered a special kind of games. A L Pearl, earl’s For purposes of theoretical study, Pearl game is represented by a complete uniform game tree with a branching factor d (2 2 , where the terminal nodes may assume I. ja win for LAX ) or 0 (a win for MIN) with a probablhty of p and 1 - p, respectively. Nau (Nau, 1983) considered a different kind of games. A Nau’s game also has a complete uniform game tree. To assign LOSS or WIN at terminals, each arc of the game tree is independently given the value 1 or -1 with a probability of p and 1 - p, respectively. Then the strength of a terminal is defined as the sum of the arc values on the path from the terminal to the root. The terminal is given WIN if its strength is positive, and LOSS otherwise. To illustrate the idea of the new search model, let us consider an uniform binary game tree of four levels, where the terminal nodes may have independently the value 1 (win for MAX) bability of p (0 < or 0 (loss for MAX) with a pro- The space 0 = (0, 1 P < 1) and 1 - p, respectively. 8 of all leaf-patterns thus becomes a probability space. Assume that the heuristic search finds the number of winning leaves under the searched node. We associate each value k (k = 0, . . . ,8) at the root with the event Ek = {(Xl, * * . ,x8) E n 1 C;8,r X~ = k}. R is divided into 9 different such events, which form a partition of R. Similarly, the searched values i and j (0 5 i, j < 4) at the nodes of level 2 are associated with the eves All these sets form a finer partition of 0: EI, = Ui+j=k E;’ . The events given by the search of level 3 are of the form E.... = t 1121314 Xl + 52 = 2.1, x3 + x4 = i,, x5 + x6 = i3, x7 + x8 = i4} g L i,, i,, is, i4 < 2) and form the finest partition of . Eij = (Ji,+ i2=i, i3+ i4=jEi,i2i3i4- Eili2i3i4 C Eij C Eh The relation (i, + i, = i, i3 + i, = j, and i+ j =k) means that the information from a deeper level is more accurate? and the improved visibility is thus formalized in this simple way. This formulation also holds for the general Pearl’s games and Nau’s games. Consider a game tree T with lz leaves and h levels. Without loss of generality, we assume that all game trees discussed in this paper have their leaves on the same level. All leaf-patterns CJ = (xi, . . . , x~), xi = 0 or I, form the space 0 = (0, I}“. If the values on the leaves are assigned according to a probability measure P on 0 w.r.t. the total Bore1 field (Chung, 1974) F (i.e., say that the collection of all subsets of St), then we these games are in a probabilistic model: Definition I. 
A probabilistic game model for a game tree T with h levels and k leaves is a pair (0, P), where fl = (0, I}” and P is a probability measure on R w.r. t. the total Bore1 field. Definition 2. Let ($2, P) be a probabilistic game model for a game tree with h levels and k leaves. Then a search 5’ on this model is probabilistic if S consists of an increasing sequence of Bore1 fields where F, = (4, fl} I 1 is the trivial Bore1 field and each Fi 2 i < h) is generated by a partition of fl. For each euj-pattern w E St, the search S at level i determines the event E of the partition generating Fi such that UEEEFi,i=l,..., h. The increasing sequence (3.1) is the improved visi- bility of the search. In our example, F, is generated by all events Eh, F, by all E;~‘s, and all Ei,i2i3i4’s generate F,. For a general Pearl’s game or a Nau’s game (Nau, 1983), the heuristic search that finds the number of l’s on the leaves under the searched node is similarly pro- babilistic. In a probabilistic game model (St, P) with a proba- bilistic search (3.1), we define the estimation of the minimax value IV (a random variable on fl) at any fixed node of the game tree as follows. Definition 3. For each i, 0 5 i 5 h, the conditional the minimux value A4 w.r.t. Fi, is called the i-th evaluation function Given an event E of level i, the value of pi is IMi(W) = P(M=l 1 E) for all w E E, which is just the conditional probability of a forced win, given E. From the increasing sequence (3.1), we know that the sequence Ch~i, F.} forms a martingale (Chung, 1974): A~i = E(~j 1 Fi) for 0 -2 i < j < il. In words, nli IS an average of A4i if i < j. From this pro- perty the following theorem IS derived (Tzeng and Pur- dom, 1982): 417 This theorem shows that given the estimations of two levels, the deeper one has all the information and, therefore, the other estimation can be dispensed with. IV DECISION MAKING Let MAX move from a node A with n different children B,, . . . , B,. Let T be the current game tree with A as the root. pose that the game is in a pro- P) with a search (3.1). Let minimax value at the node of level j (1 < j 2 h ), con- side the est’mation of M(‘) for each i-(1 < i < n): Mlif = E(Mb) 1 F). If a move is said to becorrect if and only if a node &ith minimax value 1 is chosen, then we have the following main result: Theorem 2. A decision making method that depends on the search. of a fixed level j, .which chooses the node with the largest estimation M/‘) (1 5 i 5 n), will be improved, if j is increased. Proof. Given the information of levels jr and j, (1 2 j, < j, 5 h), suppose that for an arbitrary fixed game in this model the node Bx-, is chosen relative to level j, and Bk, is chosen relative to level j,. Further- more, for this fixed game let ~j, Mb+,) = y1 M.@“) = y2* (%) = x1, My‘) = x 21 Fi!dm , T%eorem Then x1 2 y1 and’L2 < y2. 1 we have level j, is a forced win is always greater than or equal to the conditional probability that the chosen node rela- tive to level j, is a forced win. Since it is a sum of all such conditional probabilities, the probability that the chosen node relative to the level i, is a forced win is therefore greater than or equal to*t%e probability that the chosen node relative to the level j, is a forced win. If x2 < y2 for at least one game in this model, then the decision making of the deeper level is strictly improved. QED. Note that the estimation Afji) (1 < j < h) depends on the search of the whole tree T &stead of the subtree Ti under the node Bi. 
But if the searched moves are independent as in Pearl’s games, then each estimation depends on the corresponding subtree only (Tzeng and Purdom, 1982). V PRODUCT-PROPAGATION RULES Consider the backing up process on our example. Let A4 be the minimax value at the root, M, and M, the minimax values at the left son and the right son of the root, respectively. Given the searched event Eij of level 2, the estimation of M is P(M = 1 I E;j) = P(M = 1 1 CiEIXk = i, C:,5xk = j). If the root is a MIN node, then it can be proved that P(M = 1 I C&xk = i, Ctz5xk = j) = PWI = 1 1 c/f=19 = i)P(M, = 1 I J5iz5xk = j). (5.1) If the root is a MAX node, then the corresponding equation becomes P(M = 1 1 Ck4& = i, CfE5xk = j) = 1 - (5.2) (1 - P(M, = 1 1 C&Zk = i))(l - P(M, = 1 1 C&Q = j)). The values P(M, = 1 1 c;& = i) and Ppf, = 1 C&k I = j) are the estimates at the two chil- dren of t le root. The rules (5.1) and (5.2), called the product-propagation rules, hold because of the indepen- dence of the values at the leaves. This process also applies to general Pearl’s games for the search of each level. For more general independent cases, this result can be formalized and proved (Tzeng and Purdom, 1982). In Nau’s games, nodes on the same level are gen- erally dependent and thus these rules are not applica- ble. For the general dependent case, the evaluation function exists so far only theoretically. Practical methods of finding its values should depend on the dependence of the searched nodes and should be studied for each individual case. The product-propagation method is only for the independent case, and if it is applied to general games, some unexpected features like Nau’s pathology are also possible. CONCLUSIONS The research reported here illustrates the impor- tance of search uncertainty and search visibility in developing a realistic mathematical model of heuristic search in game trees. In the presence of uncertainty, minimaxing is not the optimum method to combine the values obtained from the search. REFERENCES [I] Chung, K. L., A Course in Probability Theory. New York, Academic Press 1974. (21 Knuth, D. E. and R. N. Moore, “An Analysis of Alpha-Beta Pruning,” Artificzal Intelligence 6 (1975) 293-326. [3] Nau, D. S., “Decision Quality as a Function of Search Depth of Game Tree.” Technical Report TR-866, Computer Science Department, University of Maryland, 1980. [4] Nau, D. S., “Pearl’s Game is Pathological.” Techn- ical Report TR-999, Computer Science Depart- ment, University of Maryland, January 1981. [5] Nau, D. S., “Pathology on Game Trees Revisited, and an Alternative to Minimaxing,” Artificial Intel- ligence 21 (1983) 221-244. Nilsson, N., Principles of Artificial Intelligence. Palo Alto, CA. Tioga Publishing Company, 1980. [7] Pearl, J., “Asymptotic Properties of Minimal trees and Game-Searching Procedures,” Artificial Intelli- gence 14 (1980) 113-138. [8] Pearl, J., “Heuristic Search Theory: Survey of Recent Results.” UCLA-ENG-CSL-8017, June 1980, Proc. 7th Int. Joint Conj. on AI, August 1981. PI Pearl, J., “On the Nature of Pathology in Game Searching.” UCLA-ENG-CSL-8217, January 1982. [IO] Tzeng, C.-H. and Purdom, P. W., “A Theory of the Heuristic Game Tree Search.” Technical Report No. 135, Department of Computer Science, Indiana University, December 1982. 419
1983
92
291
Diagnosing Circuits with State: An Inherently Underconstrained Problem

Walter Hamscher and Randall Davis
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
545 Technology Square
Cambridge, Mass. 02139

"Hard problems" can be hard because they are computationally intractable, or because they are underconstrained. Here we describe candidate generation for digital devices with state, a fault localization problem that is intractable when the devices are described at low levels of abstraction, and is underconstrained when described at higher levels of abstraction. Previous work [1] has shown that a fault in a combinatorial digital circuit can be localized using a constraint-based representation of structure and behavior. In this paper we (1) extend this representation to model a circuit with state by choosing a time granularity and vocabulary of signals appropriate to that circuit; (2) demonstrate that the same candidate generation procedure that works for combinatorial circuits becomes indiscriminate when applied to a state circuit modeled in that extended representation; (3) show how the common technique of single-stepping can be viewed as a divide-and-conquer approach to overcoming that lack of constraint; and (4) illustrate how using structural detail can help to make the candidate generator discriminating once again, but only at great cost.

Introduction

Faults in combinatorial digital circuits can be localized using a constraint-based representation of structure and behavior. This fault localization procedure, candidate generation, is reviewed below. The procedure is general and should apply to circuits with state; we have extended the constraint-based representation to include these devices. A key feature of the extended representation is the use of layers of temporal granularities. In this paper we show a simple example of such a multilayered description. But, having extended the representation, we show that the same diagnostic procedure that works well for combinatorial circuits becomes indiscriminate when applied to state circuits. Intuition tells us that circuits with state are more difficult to diagnose than combinatorial ones; we show that this intuition is correct by presenting a computational view of the candidate generation process. Intuition also tells us that single-stepping a circuit is a good way to localize faults; this intuition too turns out to have firm computational grounds. Finally, we show that knowledge about the substructure of a device can provide considerable additional discriminatory power.

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's AI research on hardware troubleshooting is provided in part by a research grant supplied to MIT by the Digital Equipment Corporation, and in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-00-C-0505.

Candidate Generation

Given a device exhibiting faulty behavior, we wish to determine which of its subcomponents could be responsible for the misbehavior. We call these components candidates. The most effective diagnoses are those which propose the fewest alternative candidates, in which the candidates represent the least complex hardware, and in which the candidates' hypothesized misbehaviors are most specific.

We represent each component of a circuit as a module [3]. Modules have substructure, composed of modules connected by wires.
The primitive modules of the system are logic gates. The behavior of each module can be expressed as a constraint [4, 5] on the values at its terminals. The constraint is itself composed of a set of rules spanning the device. For example, the behavior of a two-input NAND gate can be described as a constraint composed of the following rules:

If both inputs are 1, the output must be 0.
If one input is 0, the output must be 1.
If the output is 0, both inputs must be 1.
If the output is 1 and one input is 1, the other input is 0.

For the sake of simplicity in this discussion, we assume that faults occur only in modules. This allows us to ignore some uninteresting details without affecting the essential nature of candidate generation.

The process is best understood by considering a simple example. Figure 1 shows a combinatorial circuit that computes F = A·C + B·D and G = B·D + C·E, with inputs 3, 3, 1, 1, and 3. The modules' behavioral constraints tell us that if all the modules are working, we can expect the outputs at F and G to be 6 and 6. Imagine, however, that we observe the outputs in the actual device to be 5 and 6. Which components could be responsible for this discrepancy?

We find the potential candidates by tracing backward from the discrepant output F. All modules that contributed to that output are potential candidates: in this case, MULT-1, MULT-2, and ADD-1. To find out which of those potential candidates can account for all the behavior observed, we consider each one in turn, suspending its constraint [2] and asking whether the resulting network is now consistent with the inputs and observations. (The same procedure works under weaker assumptions; to entertain two points of failure, we suspend pairs of constraints.)

In this example we find that MULT-2 is not a consistent candidate. Suspending its constraint leads to a contradiction: the inputs at A-E and the observations at F and G are inconsistent with correct behavior of the remaining four modules. MULT-1 is, however, a consistent candidate: it could have misbehaved by having 3 and 1 as inputs and 2 as its output. ADD-1 is also a consistent candidate: it could have had a 3 and a 3 as inputs and a 5 as its output.

Having deduced these possible misbehaviors for ADD-1 and MULT-1, we can in effect construct a new behavior for each candidate. This information is important: it says not only whether the module could be failing but, if it is, how it is failing. Each new test then supplies additional information about the module's (mis)behavior, in effect building up a truth table showing how the module must be (mis)behaving.
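A brute-force rendering of this procedure is easy to sketch. The code below is our illustration, not the authors' system (which propagates constraints symbolically): suspending a module's constraint is modeled by letting its output range freely and asking whether any assignment is consistent with the inputs and the observations.

    from itertools import product

    # Brute-force constraint suspension on the circuit of Figure 1
    # (our sketch; module names follow the figure, the search is naive).
    A, B, C, D, E = 3, 3, 1, 1, 3
    OBS_F, OBS_G = 5, 6                        # the discrepant observations

    def consistent(suspended):
        """Can some output value of the suspended module reconcile the
        remaining constraints with the inputs and observations?"""
        for x, y, z in product(range(20), repeat=3):  # multiplier outputs
            ok = True
            if suspended != "MULT-1": ok = ok and x == A * C
            if suspended != "MULT-2": ok = ok and y == B * D
            if suspended != "MULT-3": ok = ok and z == C * E
            if suspended != "ADD-1":  ok = ok and OBS_F == x + y
            if suspended != "ADD-2":  ok = ok and OBS_G == y + z
            if ok:
                return True
        return False

    candidates = [m for m in ("MULT-1", "MULT-2", "MULT-3", "ADD-1", "ADD-2")
                  if consistent(m)]
    print(candidates)                          # -> ['MULT-1', 'ADD-1']

As the text describes, only MULT-1 and ADD-1 survive; the witness values found during the search (e.g., x = 2 for MULT-1) are exactly the deduced misbehaviors.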
The most basic unit of time is the switching time of a gate. Other. coarser, units are less obvious and one of the difficulties we encounter lies in choosrng the appropriate levels of temporal abstraction. One of the secondary contributions of this paper is its attempt to define a number of levels that appear to be both useful and intuitive in the current domain. With a hierarchic representation of time. the behavior of a module can be described at several different levels of abstraction. For example, at the finest level of detail, a NOR gate can be modelled as having a unit delay. This would be a appropriate in an asynchronous feedback circuit, since the delay is important in understanding the behavior of the circuit. But a NOR gate in the combinatorial part of a properly designed clocked circuit can be modelled as having no delay; indeed, a “properly designed” clocked circuit is one in which the clock period IS longer than the maximum delay to quiescence of any combinatoriat component. Similarly, it is appropriate in some contexts to model a JK flip-flop as imposing a unit delay between its data inputs and its outputs: it may also be necessary to model its behavior at the gate-delay level. in which case the delay between the J and K Inputs and the outputs may be 4 or more units. We maintain alternative descriptions for the same type of device, as well as explicit mappings between those descriptions at different granularities. Different granularities of time also lead us to make use of symbolic values for signals in cases where transitions, rather than quiescent values, are important. For example, the clock input of a rising-edqe-triggered flipflop is described using the values 0 and + P, where + P denotes a rising edge followed by a falling edge. This abstraction allows us to describe the flipflop‘s behavior in part as: the clock input is +P at time 1, Then output a at time r + 7 is same as D at time t. A more complex example is shown in Fiyure 2. which shows a two bit register “TLIR” that clears itself whenever l’s are clocked into both Its flipflops. Figure 3 shows. at three different time granularities. TBR’S response to a series of changes on its inputs. We describe the behavior of this device at these multiple levels. to show how behavior at fine temporal granularities maps onto behavior at coarser granularities. Figure 3a shows the changes on TBR’S signals at the lowest time granularity, using delay of a gate as the basic unit (the small trek marks on the time axis). To represent behavior at this level. we use the flipflop behavior description given above and a description for the AND gate that imposes a unit delay between the Inputs and outputs: If both inputs are 1 at time t, output is 1 at time f t 1. If an input is 0 at t, the output must be 0 at t + 1. If the output is 1 at t, both inputs must be 1 at f-7. If the output is 0 at t and an input is 1 ai t-7, the other input is 0 at t- 1. At this lowest level. every transition on signals in TER is visible; this is a fairly contmuous view of the device’s behavior. Figure 3b shows the behavior at the next coarsest level, showing the values of all signals only at the instants of possible clock transitions (the large tick marks on the time axts). The values of DO, Dl. and Ql have been sampled at those points, producing the values at the next level. Here the basic unrt is the cycle time of the clock and the internal CLR signal has become mtislble. 
Candidate Generation Applied to a Circuit with State

We can combine the candidate generation technique reviewed above with this representation of circuits with state as a first step in diagnosing those circuits. Consider for example the 4-bit sequential multiplier MULT shown in Figure 4. MULT has two input registers A and B, and an 8-bit accumulator register Q. When the INIT signal is high, the A and B inputs are loaded into the A and B registers and the Q register is cleared. On each clock pulse, the A register shifts down, the B register shifts up, and the Q register accumulates the product. After four clock cycles the Q register contains the product A·B.

If we load the inputs 6 and 9 into the A and B registers on the first clock cycle, we expect to see 54 in register Q four clock cycles later. Suppose, however, we observe 58. We want to find out which components could have failed in such a way as to produce this symptom.

To illustrate how constraint suspension can be used to find the consistent candidates in the circuit, we use a standard technique of replicating the multiplier over five clock cycles (as in Figure 5), producing a snapshot of the circuit behavior at each cycle. The ovals in the diagram represent signal values. Each signal is replicated five times in the diagram; each of these ovals represents the value of the signal at each of five clock pulses. The snapshots are linked by connections that suggest the transmission of register values from one time period to the next. The diagram shows that we expect the successive values of Q to be 0, 0, 18, 54, and 54. The CLK signal is implicit, just as in the layered temporal representation described above. The INIT signal is not shown for simplicity's sake; it makes no contribution to the following analysis.

Suppose we observe only the final contents of Q, which is 58 instead of the expected 54. Tracing back from the expected value of Q, we find that all five components of MULT were supposed to contribute to that output. Thus all five components are potential candidates.
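Before checking consistency, the unrolling of Figure 5 can be approximated in a few lines. The sketch below is our simplification (it folds the driver and adder into a single add/no-add step, and the names are our own); it reproduces the expected Q sequence and previews the underconstraint discussed next: with the B register's constraint suspended, many value histories explain the lone observation Q = 58.

    # Our simplified unrolling of the Figure-5 multiplier over four cycles.
    def run(a, b_values):
        """Accumulate Q, taking the B register's contents in each slice as
        given (b_values), while the A register shifts down normally."""
        q_trace = [0]
        for b in b_values:
            q_trace.append(q_trace[-1] + (b if a & 1 else 0))
            a >>= 1
        return q_trace

    print(run(6, [9, 18, 36, 72]))        # -> [0, 0, 18, 54, 54] as expected

    # Suspend the B register's constraint at t2 and t3: any pair of values
    # summing appropriately reproduces the single observation Q = 58.
    pairs = [(b2, b3) for b2 in range(256) for b3 in range(256)
             if run(6, [9, b2, b3, 72])[-1] == 58]
    print(len(pairs), "histories of the B register explain Q = 58")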
To check whether these modules are consistent candidates under the single point of failure assumption, we suspend each of their behaviors in turn, by removing the constraint that each component imposes in all time slices, and check to see whether its removal is consistent with the incorrect output. Doing this, we find that register A is not a consistent candidate, since there is no sequence of 1's and 0's that we can assign to the least significant bit of register A that could explain the result of 58. But all four of the other devices are consistent candidates. Worse, we are not able to deduce any specific misbehavior for them.

The reason for this can be seen by looking at the constraint graph in Figure 5. If we know only the inputs at the top and the single output at the bottom, then suspending the constraint for the driver or the adder disconnects the graph and the inputs become irrelevant. Suspending the B register constraint leaves an almost-disconnected graph; in this case the values of the B register at t2 and t3 can be any pair that sums to 58.

The candidate generator has become indiscriminate: four of five modules in the multiplier are candidates. This is significant, because this example is not a pathological one. The problem is intrinsic to devices with state: hypothesizing a failure in a part means removing constraints in many time slices. This in turn tends to leave large gaps in which it is impossible to deduce what actually happened. Intuition tells us that circuits with state are hard to diagnose; this intuition now has firm computational ground: circuits with state are hard to diagnose in part because the problem is often inherently underconstrained.

Introducing More Visibility

Intuition also tells us that single-stepping state circuits and observing the intermediate results vastly reduces the problem. We can now see why. Suppose we are able to observe the contents of Q at each of the five clock cycles, and we observe that it contains 0, 1, 20, 57, and 58 instead of 0, 0, 18, 54, and 54 as expected.

This provides two important sources of power. First, we have in each slice a strictly combinatorial device. Since the subproblem of generating candidates in combinatorial circuits is typically sufficiently constrained, we expect to generate a more restricted set of candidates in each slice. Second, we have four I/O pairs, in effect four tests of the device. Since we are assuming a single point of failure, to be a candidate a component must be consistent with the observations in all four slices. This too will help to restrict the number of candidates generated.

If we examine Figure 4, we find discrepancies at Q in the first through fourth time slices. In each slice we trace backwards from Q, yielding four sets of potential candidates. We intersect these sets to find the candidates consistent with the information in all four slices: the Q register, A register, adder, and driver. The B register was eliminated from consideration because its misbehavior could not explain the discrepant output of Q at t2. Having determined the potential candidates by tracing back from discrepancies and enforcing consistency across time slices, we now determine which of these modules is consistent with the observations.

- As before, register A is not a consistent candidate. (There is no set of assignments to its least significant bit over four time slices that yields the observed contents of Q when the B input is 9 and all other constraints are operating.)
- The driver is a consistent candidate; its misbehavior can be partially described by the following truth table:

    CTL  IN  | OUT
     0    0  |  1  (should be 0)
     1   18  | 19  (should be 18)
     1   36  | 37  (should be 36)
     0   72  |  1  (should be 0)

    Table 1: Truth Table of Misbehaving Driver

- The adder is a consistent candidate. (Note that removing its constraint in all four time slices completely disconnects each of the observed values from the inputs and from each other; for this reason a faulty adder would explain any observations.) It has the following misbehavior:

    INPUT-1  INPUT-2 | OUTPUT
       0        0    |  1  (should be 0)
      18        1    | 20  (should be 19)
      36       20    | 57  (should be 56)
       0       57    | 58  (should be 57)

    Table 2: Truth Table of Misbehaving Adder

- The Q register is a consistent candidate. As with the adder above, removing its constraint disconnects our observations from the inputs, so that this device's failure could explain anything. Its truth table is:

    INPUT at t | OUTPUT at t+1
         0     |  1  (should be 0)
        19     | 20  (should be 19)
        56     | 57  (should be 56)
        57     | 58  (should be 57)

    Table 3: Truth Table of Misbehaving Q Register

We have gained important information in the form of truth tables that describe how each candidate could have failed so as to produce the observed symptoms. Still, even with complete visibility of the outputs, under a strong set of assumptions, we are unable to distinguish among 3 of the 5 components of this device. We need yet more information.

Hierarchic Diagnosis

The only remaining source of information is the substructure of the candidates. We can use this information by applying the candidate generation procedure to each of the remaining consistent candidates. Note that we take this step with some reluctance from a practical point of view:
using structural information is expensive, because the number of potential and consistent candidates tends to increase dramatically even though the complexity of the individual candidates decreases.

- The Q register is built from eight D-flipflops sharing their clock and clear inputs (Figure 6). We use the behavior deduced for Q and map down from our coarse-grained temporal view to the next level of temporal detail, at which the clock signal is visible. Applying candidate generation to Q, we find that there is no single flipflop whose failure could explain the observed misbehavior of Q; that is, there is no single flipflop whose failure could produce the symptoms shown in Table 3. (The discrepancies in Q occurred in the three low-order bits. Each of these discrepancies results in a set of potential candidates, but the intersection of these sets is null.)

- The driver is composed of eight AND gates sharing a control input (Figure 7). Proceeding as above, we find that one AND gate --- the one enabling the least significant bit --- is a consistent candidate: if this AND gate's output is always 1 no matter what its inputs, we get the observations of Table 1.

- The adder is composed of eight single-bit adder slices (Figure 8). The least significant slice is a consistent candidate: its output, viewed as a 2-bit integer, is always 1 greater than it should be.

The candidate set has now been reduced to only two modules: one AND gate in the driver and one bit-slice in the adder. Given the symptoms available, and excluding the possibility of internal probes, it is not possible to distinguish between the two. This result illustrates the power of information about substructure in refining candidate generation, both in the number of candidates and in their complexity. The single point of failure assumption and single-stepping of state circuits, while reducing the possible candidates considerably, were still not sufficient to reach a satisfactory diagnosis.

As a final note on the power of this approach, observe that the final candidate set was reached even without assuming that the fault was nonintermittent. The nonintermittency assumption says that the faulty module is failing consistently, i.e., given the same inputs, it produces the same (incorrect) output. In our terms this amounts to insisting that the behaviors deduced for a candidate be consistent across all the time slices, i.e., that tables like those shown in the previous section be self-consistent. We were able to rule out many of the potential candidates using a weaker form of consistency implied by the single point of failure assumption: we required only that some behavior of each candidate be able to account for the discrepancies in all the time slices. It might, for example, have been the case that the adder could be a candidate only if it added 0 and 0 to get 1 in time slice 1, and added 0 and 0 to get 0 in time slice 4. Even with this weaker form of consistency, we were able to constrain the candidates we generated, simply because they could not account for the discrepancies in all four time slices under any behavior, intermittent or not.

Limitations and Future Work

The temporal abstraction described and used here is limited: short events are invisible at higher levels of abstraction, yet hardware failures often involve short events. Consider for example a gate which has failed by slowing down, rather than failing altogether. This will cause incorrect results only when this slowness causes some signal to be sampled too soon, before it has a chance to change to its correct value. If this misbehavior is observed at a coarse temporal granularity, it may appear to be intermittent. Any level or kind of abstraction, in fact, falls prey to faults that it can represent but not derive: a coarse-grained model of time can represent hazards and races as intermittent faults, but it cannot distinguish between devices that have slowed down and ones that are genuinely unpredictable. This fact puts a premium on careful definition of the mappings between layers of temporal granularities. One goal of this research is to further investigate the nature of hierarchic diagnosis using these temporal hierarchies in addition to structural ones.

Conclusion

Combinatorial circuits can be modeled in a natural way using constraints, and this representation can be used for generating candidate components. Circuits with state can also be modeled by constraints if the representation is extended to use multiple levels of time granularity. Intuition tells us that circuits with state are more difficult to diagnose than combinatorial ones, and we have shown a computational reason for this: when less than complete state visibility is available, candidate generation is inherently underconstrained and therefore indiscriminate. Intuition also tells us that single-stepping a suspect state circuit is a good way to localize faults; we showed that this intuition too turns out to have firm computational grounds: single-stepping allows us to view the problem as a more constrained one, that of diagnosing a combinatorial circuit. Finally,
by using information about devices' internal structure and viewing devices at a fine temporal granularity, specific diagnoses can be obtained even for devices with state.

Acknowledgments

Howard Shrobe, Ramesh Patil, Thomas Knight, and all the members of the MIT AI Lab's Hardware Troubleshooter group contributed to the content and presentation of this research.

References

[1] R. Davis. Diagnosis via Causal Reasoning: Paths of Interaction and the Locality Principle. In Proceedings of AAAI-83, pages 88-94. AAAI, August 1983.
[2] R. Davis. Reasoning from Structure and Behavior. 1984. To appear in Artificial Intelligence.
[3] R. Davis and H. Shrobe. Representing the Structure and Behavior of Hardware. IEEE Computer 16(10):75-82, October 1983.
[4] J. de Kleer and G. J. Sussman. Propagation of Constraints Applied to Circuit Synthesis. International Journal of Circuit Theory 8(2):127-144, April 1980.
[5] G. L. Steele. The Definition and Implementation of a Computer Programming Language Based on Constraints. Technical Report AI-TR-595, MIT, 1980.

Figure 1: Combinatorial Circuit Example
Figure 2: Self-clearing Two-bit Register
Figure 3: a, b, c: Multilevel Timing Diagrams for Device TBR
Figure 4: Sequential Multiplier with 8-bit Result Register
Figure 5: Multiplier Behavior Viewed Over Five Clock Cycles
Figure 6: Eight-Bit Q Register
Figure 7: Eight-Bit Driver
Figure 8: Eight-Bit Adder
1984
1
292
NON-MONOTONIC REASONING USING DEMPSTER'S RULE

Matthew L. Ginsberg
Department of Computer Science
Stanford University
Stanford, California 94305

ABSTRACT

Rich's suggestion that the arcs of semantic nets be labelled so as to reflect confidence in the properties they represent is investigated in greater detail. If these confidences are thought of as ranges of acceptable probabilities, existing statistical methods can be used effectively to combine them. The framework developed also seems to be a natural one in which to describe higher levels of deduction, such as "reasoning about reasoning."

I SEMANTIC NETS

Rich [Rich, 1983] has suggested labelling semantic nets [Quillian, 1968] with "certainty factors" indicating the depth of conviction held in the properties they represent. The non-monotonic rule "birds fly" would thus be represented not as

birds --> flyers

but as

birds --(.95)--> flyers,   (1)

where the certainty factor of .95 indicates that 95% of birds fly. Monotonic rules have certainty factors of 1, as in

ostriches --(1)--> non-flyers,   (2)

which is also written by Rich as ostriches --(-1)--> flyers.

Shafer [Shafer, 1976] has argued that probabilities such as those above are better thought of not as specific values but as ranges. It seems unreasonable to believe that exactly 95% of all birds fly; much better to believe that between 90% and 98% do. Instead of having the conditional probability p(flyer(x)|bird(x)) = .95, we take p(flyer(x)|bird(x)) ∈ [.9, .98].

We will write such a probability range as a pair (c d), where c is the extent to which we believe a given proposition to be confirmed by the available evidence (.9 in the above example), and d is the extent to which it is disconfirmed (1 - .98 = .02 above). We will also write c̄ for 1 - c, d̄ for 1 - d, etc., so that the probability interval referred to in the last paragraph is [c, d̄] in general. The beliefs (1) and (2) now become

birds --(.9 .02)--> flyers

and

ostriches --(0 1)--> flyers

respectively. Since we need c ≤ d̄, we will always have c + d ≤ 1, with equality only if the probability interval [c, d̄] is in fact a single point. As c + d = 1 corresponds to complete knowledge of a probability, so c + d = 0 corresponds to the interval [0, 1] and therefore to no knowledge at all.

To perform simple reasoning using this representation, suppose we have

isa: x --(a b)--> y and isa: y --(c d)--> z,   (3)

and want to evaluate the arc from x to z. From the fact that the minimum probability of an x being a y is a, it follows that the minimum probability of an x being a z is at least ac. The probability of an x not being a z is at least ad for similar reasons. Thus the value of the arc from x to z is (ac ad), and we have, for example, that

Tweety --(1 0)--> birds --(.9 .02)--> flyers

gives rise to the non-monotonic conclusion

Tweety --(.9 .02)--> flyers.   (4)

II DEMPSTER'S RULE

The difficulties with this scheme arise when differing applications of the rule used in (3) lead to different conclusions. If we have

Tweety --(1 0)--> ostriches --(0 1)--> flyers,

we obtain

Tweety --(0 1)--> flyers,   (5)

in contradiction with (4). This situation is typical of non-monotonic reasoning. Default rules by their very nature admit exceptions; what we need is some way to combine conflicting conclusions such as (4) and (5). Dempster [Dempster, 1968; Shafer, 1976] has discussed this situation in depth, and our problem is in fact a special case of his investigations.
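The serial combination used in (3) is trivial to implement; the sketch below (ours) simply chains two labelled arcs. Dempster's rule, introduced next, handles the harder problem of combining parallel, conflicting arcs.

    # Serial chaining along isa arcs, as in (3): x -(a b)-> y -(c d)-> z
    # yields x -(ac ad)-> z.  (Our sketch; arcs are just labelled pairs.)
    def chain(ab, cd):
        a, b = ab
        c, d = cd
        return (a * c, a * d)

    print(chain((1.0, 0.0), (0.9, 0.02)))  # Tweety->birds->flyers: (0.9, 0.02)
    print(chain((1.0, 0.0), (0.0, 1.0)))   # Tweety->ostriches->flyers: (0.0, 1.0)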
If we denote by (a b) + (c d) the inference obtained by combining the two inferences (a b) and (c d), Dempster's rule gives us

(a b) + (c d) = ( (a + c - ac - K) / (1 - K), (b + d - bd - K) / (1 - K) ),  where K = ad + bc.   (6)

This formulation has the following attractive properties:

a. It is commutative and associative. In many non-monotonic systems, the order in which non-monotonic (or other) inferences are drawn is critical, since the application of one rule may invalidate another. The commutativity and associativity of (6) guarantee that we will be able to overcome this difficulty.

b. (a b) + (0 0) = (a b). The probability range (0 0) corresponds to no knowledge at all and will result from any attempt to apply an inapplicable rule. We might, for example, generate the arc Tweety --(0 0)--> flyers from the pair

Tweety --(0 1)--> elephants --(0 1)--> flyers.

The point here is that such an inference (should we draw it) will have no effect on our eventual conclusions.

c. (a 0) + (c 0) = (a + c - ac, 0). The probability ranges (a 0) and (c 0) each indicate no disbelief in the corresponding arcs; in this case, the (independent) probabilities combine in the usual fashion.

d. For (c d) ≠ (0 1), (1 0) + (c d) = (1 0); for (c d) ≠ (1 0), (0 1) + (c d) = (0 1). This result implies that no application of a non-monotonic rule can ever outweigh a logical certainty. There is no danger, when applying a non-monotonic rule to obtain (4), that an eventual conclusion such as (5) will be invalidated; the result of combining the two results is simply (5) again. This allows us to avoid the most computationally difficult aspect of non-monotonic reasoning --- that of determining when it is legitimate to apply a non-monotonic rule of inference.

e. (0 1) + (1 0) is undefined. Such a combination indicates that an arc has been proven both valid and invalid, and as such represents a conflict in the database.

f. + is (nearly) invertible. If we denote the inverse by -, then for (c d) ≠ (0 1) or (1 0), (a b) - (c d) is the pair (x y) satisfying

(x y) + (c d) = (a b),   (7)

which can be computed in closed form by solving (6). This enables us to easily retract the conclusion of an earlier inference without influencing conclusions drawn using other means.
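Rule (6) and properties (a)-(e) are straightforward to check mechanically. The following sketch is ours; the helper names are invented, and the combination function is simply (6) written out.

    # Our transcription of (6); pairs are (confirmation, disconfirmation).
    def combine(p1, p2):
        a, b = p1
        c, d = p2
        k = a * d + b * c                      # the weight of conflict
        if k == 1.0:
            raise ValueError("(0 1) + (1 0) is undefined")  # property (e)
        return ((a + c - a * c - k) / (1.0 - k),
                (b + d - b * d - k) / (1.0 - k))

    def approx(p, q):
        return abs(p[0] - q[0]) < 1e-12 and abs(p[1] - q[1]) < 1e-12

    assert approx(combine((0.3, 0.2), (0.0, 0.0)), (0.3, 0.2))   # property (b)
    assert approx(combine((0.3, 0.0), (0.5, 0.0)), (0.65, 0.0))  # property (c)
    assert approx(combine((1.0, 0.0), (0.9, 0.02)), (1.0, 0.0))  # property (d)
    print(combine((0.9, 0.02), (0.0, 1.0)))  # Tweety: certainty wins, ~ (0, 1)

The same function reproduces the newspaper example of the next section: combine((.9, .05), (.5, .4)) evaluates to approximately (.92, .07).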
Returning to the ostrich case, we have the rules (( (isa x ostriches)) (isa x flyers)(O 1)) (( (isa x ostriches)) (rule ((isa x birds)) ( isa 2 flyers) (.9 .02))(0 1)). PW The first of these simply repeats the rule that ostriches cannot fly. The second, however, deactivates the rule (9) itself. If the rule has been applied, the reversability of Dempster’s rule ensures that the conclusions will remain accurate; if the rule has not been applied, WC will be saved the work of doing so. In the example we have been considering, the cer- tainty of the rule that ostriches do not fly guarantees that we will reach the same conclusion whether or not we apply (9) t o an ostrich. But consider the following example: Newspaper articles are true. (.9 .05) (114 Articles in the National Enquirer are true. (.5 .4) W) If I read something in the Nat,ionnl Enquirer, both rules can be applied and I will believe the story to be true with probablity interval (.92 .07). Here we really do need a rule such as (lob) that ensures that (11~) will not be applied when (llb) can be. Better still would be the metarule, “Never apply a rule to a set when there is a corresponding rule which can bc applied to a subset.” WC could write this as (((isa x y) (rule (a (isa z x) b) c d)) (rule (u (isa z y) 6) c e)(O 1)). (12) As a special case, we have (( (isa Enquirer-article newspaper-article) (rule ((isa x Enquirer-article)) (accurate z) (.5 .4))) (rule ((isa x newspaper-article)) (accurate 2) (.9 .05)) (0 1)). (12’) Now suppose we read an article in the National Enquirer. Rules (11~) and (llb) are activated, with (116) activating the metarule (12) and therefore deactivating (11~). The article is now believed to be true with confidence (.5 .4). Equally important is what happens if we later read the same article in the New York Times. Now rule (11~) alone is applied and the article is believed to be true, corroborated to some extent by the Enquirer appearance. IV PROBABILITIES FOR RULES The power of the methods we have described poten- tially extends well beyond the examples we have given thus far. The best interpretation of a metarule such as (12), for example, is probably as a way to assign a proba- bility range to a rule itself. Thus in applying a rule with probability range (a b), we should weight its conclusion by 6 before updating any other probabilities, since b is the maximum extent to which the rule may be applicable. Implementation of this idea will require us to main- tain a list of rules which have been either used or acti- v&ted by other rules. There are thrne advantages to this. Firstly, it enables us to avoid reapplying a single rule without obtaining new information. Since Dcmpster’s rule assumes independence of the probability estimates being combined, multiple used of a single rule need to be avoided. Secondly, this approach enables us to purtiully de- activate a rule. Returning to our newspaper example, pcrlmps all wc should say is that the rule (11~) is not as 128 (rule ((isa 5 IlcwsI)aI)er-n~t,iclc)) (accurate x)( .9 .05)) (0 4. (13) If we use this rule instead of (1 I b)----note that (12’) now d i~~nppcars -- an article in tllc Natioin~l Enqllircr will be believed to be true with probability range (.54 .03). 
Bote also that the commutativity and invertability of Dempster’s rule mean that we need not apply (13) be- fore (1 la) in order to obtain this result--provided that we store the information that we Izave used (1 la), we will have no difficulty reversing the inference after a sub- sequent invocation of (13). A final (but currently unexplored) advantage of this approach is that it may allow us to focus the attention of the system. For a rule which has probability range (u b), we can think of u as the extent to which the rule is likely to be nscflrl. To focus attention on the fact that birds fly, we might have (((isa x birds)) (rule ((isa x birds)) ( isa x flyers) (.9 .02))( .5 0)). (14) (Such a rule will itself need a high level of activation to be of any use). If we are maintaining a list of rules and the levels with which they are expected to be useful, a rule such as (14) can be used to ensure, for any forward- chaining system, that the inference that any given bird can probably fly will be drawn early. More generally, we can translate, “n7hen considering an element of a group, think about properties which are unique to that group,” into the metarule (((isa x y) (isa 2 y)) (15) (rule ((isa x y)) ( isa x z) a) (-5 0)). If birds were the only flyers, so that flyers % birds had truth value (1 0)’ this would reproduce (14)) with y = birds and z = flyers. As it stands, the result of applying (15) to birds and flyers will be somewhat weaker. ‘l‘he assignnicnt of conlidcnrc rah:;es to arcs in se- mantic. nets sccn~s to solve sonic of’ the pi-oblcnls which would otjherwisc 11~: c~;countcrcrl ire dealing n-ill1 them. Non-monotonic infcrcnccs can be described easily, and mesh neatly with their monotonic counterparts. Further power can be obtainctl by allowing the rules thcmsclves to be treat t:d as arcs, both by including them within other rules and by assigning them probabilistic weights of their own. Reasoning about reasoning can be discussed, and attention can be focussed. This framework seems to be a promising one in which to describe general knowledge of the type WC have been examining. ACKNOWLEDGKMENTS The author would like to thank John McCarthy, Ben Grosof, Mike Gencsereth and Sally Greenhalgh for many stimulating and useful discussions. PI PI PI PI PI REFERENCES Dempster, A.P. “A Generalization of Bay&an Infer- ence.” J. Roy. Stat. Sot. B, 30 (1968)) 205--247 McCarthy, J. “Applications of Circumscription to Formalizing Common Sense Knowledge,” to appear (1984) Quillian, M.R. “Semantic Memory,” in Semantic In- formation Processing, ed. M. Minsky. Cambridge: MIT Press, 1968, pp. 216-270. Rich, E. “Default Reasoning as Likelihood Reason- ing.” In Proc. AAAI-83. Los Altos, California: William Kaufmann, Inc., 1983, pp. 348-351 Shafer, G. A Mathematical Theory of Evidence. Princeton: Princeton University Press, 1976
1984
10
293
A Theory of Action for MultiAgent Planning

Michael Georgeff
Artificial Intelligence Center
SRI International
333 Ravenswood Ave.
Menlo Park, California 94025

Abstract

A theory of action suitable for reasoning about events in multiagent or dynamically changing environments is presented. A device called a process model is used to represent the observable behavior of an agent in performing an action. This model is more general than previous models of action, allowing sequencing, selection, nondeterminism, iteration, and parallelism to be represented. It is shown how this model can be utilized in synthesizing plans and reasoning about concurrency. In particular, conditions are derived for determining whether or not concurrent actions are free from mutual interference. It is also indicated how this theory provides a basis for understanding and reasoning about action sentences in both natural and programming languages.

1. Introduction

If intelligent agents are to act rationally, they need to be able to reason about the effects of their actions. Furthermore, if the environment is dynamic, or includes other agents, they need to reason about the interaction between their actions and events in the environment, and must be able to synchronize their activities to achieve their goals.

Most previous work in action planning has assumed a single agent acting in a static world. In such cases, it is sufficient to represent actions as state change operators (e.g., [4], [9]). However, as in the study of the semantics of programming languages, the interpretation of actions as functions or relations breaks down when multiple actions can be performed concurrently. The problem is that, to reason about the effects of concurrent actions, we need to know how the actions are performed, not just their final effects.

Some attempts have recently been made to provide a better underlying theory for actions. McDermott [10] considers an action or event to be a set of sequences of states, and describes a temporal logic for reasoning about such actions and events. Allen [1] also considers an action to be a set of sequences of states, and specifies an action by giving the relationships among the intervals over which the action's conditions and effects are assumed to hold. However, while it is possible to state arbitrary properties of actions and events, it is not obvious how one could use these logics in synthesizing or verifying multiagent plans.¹

In a previous paper [5], we proposed a method for forming synchronized plans that allowed multiple agents to achieve multiple goals, given a simple model of the manner in which the actions of one agent interact with those of other agents. In this paper, we propose a more general model of action, and show how it can be used in the synthesis or verification of multiagent plans and concurrent programs.

2. Process Models and Actions

Agents are machines or beings that act in a world. We distinguish between the internal workings of an agent and the external world that affects, and is affected by, that agent. All that can be observed is the external world. At any given instant, the world is in a particular world state, which can be described by specifying conditions that are true of that state.

Let us assume that the world develops through time by undergoing discrete changes of state. Some of these changes are caused by agents acting in the world; others occur "naturally," perhaps as a result of previous state changes.
Actions and events are considered to be composed of primitive objects called atomic transitions. An atomic transition is a relation on the set of world states. Any sequence of states resulting from the application of some specified atomic transitions will be called an event. Note that we do not require that atomic transitions be deterministic, but we do require that they terminate.

An action is a class of events; viewed intuitively, those that result from the activity of some agent or agents in accomplishing some goal (including the achievement of desired conditions, the maintenance of desired invariants, the prevention of other events, etc.).

¹Allen [2] proposes a method for forming multiagent plans that is based on his representation of actions. However, he does not use the temporal logic directly, and actions are restricted to a particularly simple form (e.g., they do not include conditionals).

Usually we do not have access to this internal structure. However, since we are interested only in the observable behavior of the agent, we do not need to know the internal processes that govern the agent's actions. Thus, to reason about how the agent acts in the world and how these actions interact with events in the world, we need only an abstract model that explains the observable behavior of the agent.

We shall specify the class of possible and observable behaviors of an agent when it performs an action by means of a device called a process model. A process model consists of a number of internal states called control points. At any moment in time, execution can be at any one of these control points. Associated with each control point is a correctness condition that specifies the allowable states of the world at that control point. The manner in which the device performs an action is described by a partial function, called the process control function, which, for a given control point and given atomic transition, determines the next control point. A process model can thus be viewed as a finite-state transition graph whose nodes are control points and whose arcs are labeled with atomic transitions.

A process model for an action stands in the same relationship to the internal workings of an agent and events in the external world as a grammar for a natural language bears to the internal linguistic structures of a speaker and the language that is spoken. That is, it models the observable behavior of the agent, without our claiming that the agent actually possesses or uses such a model to generate behaviors.

3. Formal Definition

A process model describes an action open to an agent. Formally, a process model is a seven-tuple A = (S, F, C, δ, P, cI, cF), where

- S is a set of world states
- F is a set of atomic transitions, each a relation on S (a subset of S × S)
- C is a set of control points
- δ : C × F → C is a process control function
- P : C → 2^S associates subsets of S with each control point; values of this function are called correctness conditions
- cI ∈ C is the initial control point
- cF ∈ C is the final control point.

In general, δ is a partial function. If, for a control point c and atomic transition tr, (c, tr) is in the domain of δ, we say that tr is applicable at c.

We are now in a position to define the execution of a process model. Let A be a process model as defined above. We first define a state of execution of A to be a pair (σ, c), where c ∈ C and σ ∈ S*, the set of all finite sequences over S.
We say that a state of execution e1 = (σs1, c1) directly generates a state of execution e2 = (σs1s2, c2), denoted e1 ▷A e2, if either

1. there exists tr such that δ(c1, tr) = c2 and (s1, s2) ∈ tr, or
2. c1 = c2.

In (1) we say that the transition is effected by the agent executing A, while in (2) we say that the transition is effected by the environment.

We now define a restriction on the relation ▷A. If, for e1 and e2 defined above, e1 ▷A e2 and s2 ∈ P(c2), we say that e1 successfully generates e2, denoted e1 ⇒A e2. If s2 ∉ P(c2), execution is said to fail. Let ⇒*A denote the reflexive transitive closure of the relation ⇒A. Then the action generated by A, denoted αA, is defined to be

αA = {σ | (s, cI) ⇒*A (σ, cF), where s is the first state of σ and s ∈ P(cI)}.

Each element of αA is called a behavior or act of A. The action αA itself is the set of all behaviors resulting from the execution of A.

Viewed intuitively, the device works as follows. If it is at control point c1 and the world is in a state s1 satisfying the correctness condition P(c1), the device can pass to control point c2 and the world to state s2 as long as there exists an applicable atomic transition tr between states s1 and s2 with δ(c1, tr) = c2. Alternatively, the device can stay at control point c1 while some transition or event occurs in the world (perhaps resulting from the action of some other agent). In either case, for the execution to be successful (not to fail), the new world state must satisfy the correctness condition at c2; i.e., s2 must be an element of P(c2).

In performing the action α, the device starts at control point cI. The action terminates when the device reaches cF. Given an initial state of the world s, various sequences of world states can be generated by the process model as it passes from the initial to the final control point. The set of all such sequences constitutes the action itself.

This is the same general view of action as presented by Allen [1] and McDermott [10]. However, our theory differs in that it allows us to distinguish between transitions effected by the agent and those effected by the external world. This is particularly important in the synthesis and verification of multiagent plans and concurrent programs (e.g., [8]).

Note that we do not require that a state satisfying the correctness condition at a control point be in the domain of some atomic transition applicable at that control point. Thus, it is possible for the agent to arrive at an intermediate control point and not be able to immediately effect a further transition. In such cases, the environment must change before the action can progress. This could occur, for example, if an agent nailing two boards together expected another to help by holding the boards. Only when the "holder" (who is part of the environment) has provided the necessary assistance (and moved the state of the world into the domain of an applicable transition) can the "nailer" proceed with the action.
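The seven-tuple and the execution relation just defined translate directly into code. The sketch below is ours, with invented names and a toy block-moving action; it enumerates the successful agent-effected moves from a given state of execution (environment moves, rule 2, may change the world state arbitrarily, subject to the same correctness test).

    from dataclasses import dataclass
    from typing import Dict, FrozenSet, Tuple

    State = str

    @dataclass
    class ProcessModel:
        delta: Dict[Tuple[str, str], str]   # (control point, transition name) -> control point
        trans: Dict[str, FrozenSet[Tuple[State, State]]]  # name -> relation on S
        P: Dict[str, FrozenSet[State]]      # correctness condition at each point
        c_init: str
        c_final: str

    def agent_steps(m, s, c):
        """Agent-effected successful moves from execution state (s, c):
        pairs (s2, c2) with delta(c, tr) = c2, (s, s2) in tr, s2 in P(c2)."""
        out = []
        for (c1, name), c2 in m.delta.items():
            if c1 == c:
                out += [(y, c2) for (x, y) in m.trans[name]
                        if x == s and y in m.P[c2]]
        return out

    # A two-step action: move a block from the floor to a table via the hand.
    m = ProcessModel(
        delta={("c0", "lift"): "c1", ("c1", "place"): "c2"},
        trans={"lift": frozenset({("on-floor", "in-hand")}),
               "place": frozenset({("in-hand", "on-table")})},
        P={"c0": frozenset({"on-floor"}),
           "c1": frozenset({"in-hand"}),
           "c2": frozenset({"on-table"})},
        c_init="c0", c_final="c2")
    print(agent_steps(m, "on-floor", "c0"))     # -> [('in-hand', 'c1')]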
It should also be noted that the correctness conditions say nothing about termination - it may be that an action never reaches completion. This can be the case if the action is waiting for a condition to be satisfied by the environment (so that a transition can be effected), it loops forever, or the environment is unfair (i.e., does not give the action a chance to execute). In many cases, we wish to model actions that proceed at an undetermined rate and fail if they are ever forced to suspend execution. For example, it is difficult to hit a golf ball if the environment is allowed to remove and replace the ball at arbitrary times during one’s swing. Such uninter- ruptable actions require that, for any control point c, any state that satisfies the correctness condition at c also be in the domain of some atomic transition applicable at c. 4. Composition of Actions A plan or program for an agent is a syntactic object consisting of primitive operations combined by construc- tions that represent sequencing, nondeterministic choice, iteration, forks and joins, etc. If we intend the denotations of such plans to be process models, we need some means of combining the latter in a way that reflects the composition operators in plans. Of special int,erest, and indeed the motivation behind the model presented here, is the parallel-composition oper- at,or. We define this below. Let Al = (S, FI : C1, &, PI, cfl, cF1) and A2 = (S, F21 G, 6211’2, c12, cF2) be two process models for actions o1 and a=, respectively. Then we define a process model representing the parallel composition of Al and AZ, denoted Al 11 A*, to be the process model (S, F, C, 6, P, CI, CF), where l F = FI u F2 l c = Cl ⌧ c, l For all cl E C1, cz E C2 and tr in F1, ~((c1,c2)Jr) = (~&lr~~),C2) l For all cl E C,, c2 E C2 and tr in F’, ~((cI, CZ), tr) = (h 62(c2, tr)) l For all cl E C, and c2 E C2, pk, c2)) = Pl(cl) n p2(c2) It is not difficult to show that the action a generated by A, 11 AZ is exactly (1: n y 1 I E CQ and y E ~2). Note that the projection of S onto CI and C2 gives ex- actly the control function for the component process mod- els. At any moment, each component is at one of its own control points; the pair of control points, taken together, represents the current control point of the parallel process. Furthermore, the behaviors generated by these two pro- cesses running in parallel are also generated by each of them running separately. This means that any property of the behaviors of the independent processes can be used to de- termine the effect of the actions running in parallel. This is particularly important in providing a compositional logic for reasoning about such actions (see [3]). The above model of parallel execution is an interleav- ing model. Such a model is adequate for representing al- most all concurrent systems. The reason is that, in almost all cases, it is possible to decompose actions into more and more atomic t,ransitions until the interleaving of transitions models the system’s concurrency accurately. The nondeter- ministic form of the interleaving means that we make no as- sumption about the relative speeds of the actions. We can also define a parallel composition operator that is based on communication models of parallel action, in which commu- nication acts are allowed to t,ake place simultaneously. This, together wit,h other composition operators, is described by me elsewhere [6]. 5. 
5. Freedom from Interference

In plan synthesis and verification it is important to be able to determine whether or not concurrent actions interfere with one another. In the previous section we defined what it means for two actions (strictly speaking, process models) to run in parallel. Now we have to determine whether execution of such a parallel process model could fail because of interaction between the two component processes.

Consider, then, two actions α and β generated by process models A and B, respectively. The process model corresponding to these actions being performed in parallel is A || B. In analysing this model, however, we will view it in terms of its two component process models (i.e., A and B).

Assume that we are at control points c1 in A and c2 in B, and that tr is an atomic transition applicable at c2. Clearly, if the process has not failed, the current world state must satisfy both P(c1) and P(c2). Now assume that process B continues by executing the atomic transition tr. This transition will take us to a new world state, while leaving us at the same control point within A. From A's point of view, this new state must still satisfy the condition P(c1). Thus, we can conclude that the transition tr executed at control point c2 will not cause A to fail at c1 if the following condition holds:

for all s1, s2: if s1 ∈ P(c1) ∩ P(c2) and (s1, s2) ∈ tr, then s2 ∈ P(c1).

We say that the transition tr at control point c2 does not interfere with A if the above condition holds at all control points in A, i.e., for all correctness conditions associated with A.

We are now in a position to define freedom from interference. A set of process models A1, ..., An is said to be interference-free² if the following holds for each process Ai: for all control points c in Ai and all transitions tr applicable at c, and for all j, j ≠ i, tr at c does not interfere with Aj. Thus, if some set of actions is interference-free, none can be caused to fail because of interaction with the others. Of course, any of the actions could fail as a result of interaction with the environment.

²This definition of the notion "interference-free" generalizes, to arbitrary transitions, that used by Owicki and Gries [11] for verifying concurrent programs. Synchronization primitives have not been included explicitly, but can be handled by conditional atomic transitions [5].

From this it follows that, for ascertaining freedom from interference, it is sufficient to represent the functioning of a device by

1. a set of correctness conditions, and
2. a set of atomic transitions restricted to the correctness condition of the node from which they exit.

Knowledge of a process model's structure (i.e., the process control function) is unnecessary for this purpose. In a distributed system, this means that an agent need only make known the foregoing information to enable it to interact safely with other agents. We call such information a reduced specification of the action.
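A reduced specification is enough to implement such a check mechanically. The sketch below is ours, with invented encodings: correctness conditions are sets of world states, and each published move is an atomic transition (a set of state pairs) together with the correctness condition at its source control point.

    # Checking non-interference from reduced specifications alone.
    def interferes(tr, src_cond, other_conds):
        for cond in other_conds:
            for (s1, s2) in tr:
                if s1 in cond and s1 in src_cond and s2 not in cond:
                    return True          # tr can move the world out of cond
        return False

    def interference_free(specs):
        """specs: list of (correctness conditions, [(transition, source
        condition), ...]) pairs, one per process."""
        for i, (_, moves) in enumerate(specs):
            for j, (conds, _) in enumerate(specs):
                if i != j and any(interferes(tr, src, conds)
                                  for tr, src in moves):
                    return False
        return True

    # Micro-example over abstract world states {0, 1, 2}:
    A_spec = ([frozenset({0, 1}), frozenset({1, 2})], [])
    B_spec = ([], [(frozenset({(0, 1)}), frozenset({0}))])  # B may move 0 -> 1
    print(interference_free([A_spec, B_spec]))               # -> True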
Let us consider the following example. Blocks A, B and C are currently on the floor. We wish to get blocks A and B onto a table and block C onto a shelf, and have two agents, X and Y, for achieving this goal. Agent X does not have access to block B, but can place block A on the table and block C on the shelf; he therefore forms a plan for doing so. Agent Y cannot reach block A, but is happy to help with block B. Unfortunately, in doing so, he insists that the floor be clear of block C at the completion of his action.

The plans for agents X and Y are given below. The correctness conditions at each control point in the plans are shown in braces, "{" and "}". The "if" statement is assumed to be realized by two atomic transitions. The first of these is applicable when block C is on the floor, and results in block C being placed on the table; the second is applicable when block C is not on the floor, and does nothing (i.e., is a no-op). The process models corresponding to these plans should be self-evident.

Plan for agent X:

{(clear A) and (clear C)}
(puton A TABLE)
{(on A TABLE) and (clear C)}
(puton C SHELF)
{(on A TABLE) and (on C SHELF)}

Plan for agent Y:

{(clear B) and (clear C)}
(puton B TABLE)
{(on B TABLE) and (clear C)}
if (on C FLOOR) then (puton C TABLE)
{(on B TABLE) and not (on C FLOOR)}

It is clear from the definition given above that these actions are interference-free. However, they interact in quite a complex manner. In some circumstances, agent Y will put block C on the table, which would seem to suggest interference. Nevertheless, interference freedom is assured, because the only time that Y can do this is when it does not matter, i.e., before X has attempted to put C on the shelf. Note that if the test and action parts of the "if" statement were separate atomic transitions, rather than a single one, then the actions would not be free from interference.

6. General Reasoning about Actions

So far we have been interested solely in reasoning about possible interference among actions. For many applications, we may wish to reason more generally about actions. One way to do this is to construct a logic suitable for reasoning about process models and the behaviors they generate; that is, we let process models serve as interpretations for plans or programs in the logic. An interesting compositional temporal logic has been developed by Barringer et al. [3]. Because it is compositional, process models provide a natural interpretation for the logic.

One may well ask what role process models play, given that the only observables are sequences of world states and that a suitable temporal logic, per se, is adequate for describing such sequences. However, in planning to achieve some goal, or synthesizing a program, we are required to do more than just describe an action in an arbitrary way --- we must somehow form an object that allows us to choose our next action (or atomic transition) purely on the basis of the current execution state, without any need for further reasoning.

We could do this by producing a temporal assertion about the action from which, at any moment of time, we could directly ascertain the next operation to perform (e.g., a formula consisting of appropriately nested "next" operators). Thus, in a pure temporal logic formalism, plan synthesis would require finding an appropriately structured temporal formula from which it was possible to deduce satisfaction of the plan specification. However, instead of viewing planning syntactically (i.e., as finding temporal formulas with certain structural properties), it is preferable, and more intuitive, to have a model (such as a process model) that explicitly represents the denotation of a plan or program (see [6]).

Process models serve other purposes also.
6. General Reasoning about Actions

So far we have been interested solely in reasoning about possible interference among actions. For many applications, we may wish to reason more generally about actions. One way to do this is to construct a logic suitable for reasoning about process models and the behaviors they generate. That is, we let process models serve as interpretations for plans or programs in the logic. An interesting compositional temporal logic has been developed by Barringer et al. [3]. Because it is compositional, process models provide a natural interpretation for the logic.

One may well ask what role process models play, given that the only observables are sequences of world states and that a suitable temporal logic, per se, is adequate for describing such sequences. However, in planning to achieve some goal, or synthesizing a program, we are required to do more than just describe an action in an arbitrary way - we must somehow form an object that allows us to choose our next action (or atomic transition) purely on the basis of the current execution state, without any need for further reasoning. We could do this by producing a temporal assertion about the action from which, at any moment of time, we could directly ascertain the next operation to perform (e.g., a formula consisting of appropriately nested "next" operators). Thus, in a pure temporal logic formalism, plan synthesis would require finding an appropriately structured temporal formula from which it was possible to deduce satisfaction of the plan specification. However, instead of viewing planning syntactically (i.e., as finding temporal formulas with certain structural properties), it is preferable, and more intuitive, to have a model (such as a process model) that explicitly represents the denotation of a plan or program (see [6]).

Process models serve other purposes also. For example, interference freedom is easily determined, given a process model, but it is less clear how this could be achieved efficiently, given a general specification in a temporal logic. Even so, one would need to construct an appropriate process model first (or its syntactic equivalent in a temporal logic), as the implementation of the specifications might make it necessary to place additional constraints upon the plan.

In combination with a temporal logic such as suggested above, the proposed theory of action provides a semantic basis for commonsense reasoning and natural-language understanding. Process models are more general than previously proposed models (e.g., [7]), particularly in the way they allow parallel composition. They can represent most actions describable in English, including those that are problematic when actions are viewed as simple state-change operators, such as "walking to the store while juggling three balls" [1], "running around a track three times" [10], or "balancing a ball" (which requires a very complex process model despite the apparent simplicity of its temporal specification). The theory also allows one to make sense of such notions as "sameness" of actions, incomplete actions (like an interrupted painting of a picture) and other important issues in natural-language understanding and commonsense reasoning.

Process models are also suitable for representing most programming constructs, including sequencing, nondeterministic choice (including conditionals) and iteration. Parallelism can also be represented, using either an interleaving model, as described in section 4, or a communication model. The model used by Owicki and Gries [11] to describe the semantics of concurrent programs can be considered a special case of that proposed herein.

7. Conclusions

A nascent theory of action suitable for reasoning about interaction in multiagent or dynamically changing environments has been presented. More general than previous theories of action, this theory provides a semantics for action statements in both natural and programming languages. The theory is based on a device called a process model, which is used to represent the observable behavior of an agent in performing an action. It was shown how this model can be utilized for reasoning about multiagent plans and concurrent programs. In particular, a parallel-composition operator was defined, and conditions for determining freedom from interference for concurrent actions were derived. The use of process models as interpretations of temporal logics suitable for reasoning about plans and programs was also indicated.

REFERENCES

[1] Allen, J.F., "A General Model of Action and Time," Comp. Sci. Report TR 97, University of Rochester (1981).
[2] Allen, J.F., "Maintaining Knowledge about Temporal Intervals," Comm. ACM, Vol. 26, pp. 832-843 (1983).
[3] Barringer, H., Kuiper, R., and Pnueli, A., "Now You May Compose Temporal Logic Specifications" (1984).
[4] Fikes, R.E., Hart, P.E., and Nilsson, N.J., "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving," Artificial Intelligence, Vol. 2, pp. 189-208 (1971).
[5] Georgeff, M.P., "Communication and Interaction in Multiagent Planning," Proc. AAAI-83, pp. 125-129 (1983).
[6] Georgeff, M.P., "A Theory of Plans and Actions," SRI AIC Technical Report, Menlo Park, California (1984).
[7] Hendrix, G.G., "Modeling Simultaneous Actions and Continuous Processes," Artificial Intelligence, Vol. 4 (1973).
[8] Lamport, L., and Schneider, F.B., "The 'Hoare Logic' of CSP, and All That," ACM Transactions on Programming Languages, Vol. 6, pp. 281-296 (1984).
[9] McCarthy, J., "Programs with Common Sense," in Semantic Information Processing, M. Minsky, ed. (MIT Press, Cambridge, Massachusetts) (1968).
[10] McDermott, D., "A Temporal Logic for Reasoning about Plans and Processes," Comp. Sci. Research Report 196, Yale University (1981).
[11] Owicki, S. and Gries, D., "Verifying Properties of Parallel Programs: An Axiomatic Approach," Comm. ACM, Vol. 19, pp. 279-285 (1976).
QUALITATIVE REASONING WITH HIGHER-ORDER DERIVATIVES

Johan de Kleer and Daniel G. Bobrow
Intelligent Systems Laboratory
XEROX Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, California 94304

Considerable progress has been made in qualitative reasoning about physical systems (de Kleer and Brown, 1984) (de Kleer and Brown, 1982) (Forbus, 1982) (Hayes, 1979) (Kuipers, 1982a) (Williams, 1984a) (Williams, 1984b). Description, explanation and prediction of events which occur over short time intervals is fairly well understood. However, when enough time passes the fundamental mode of behavior of the device may change. Discovering laws which govern this gross scale time behavior has proven elusive. At first sight it appears that the inherent ambiguity of qualitative analysis makes it impossible to formulate powerful laws. This is not the case, and we have identified six fundamental laws which govern the gross-time behavior of a device which rely on qualitative information alone. We have built a computer program based on these laws and tested it out on many examples.

For purposes of explanation, we draw all our examples from a simple fluid-mechanical pressure regulator illustrated in Figure 1. (de Kleer and Brown, 1982) presents a theory of causal analysis applicable for small time scales, using the pressure regulator as an example. Using that theory it is possible to determine the direction of change for all device quantities, causal explanations for their change, and identification of the negative feedback. In this paper we address pressure regulator events occurring over a longer time scale. If the input pressure rises indefinitely, will the valve eventually completely close? Does the valve oscillate when a sudden input is applied?

Figure 1 : Pressure Regulator

Qualitative calculus uses alternating qualitative values consisting of intervals separated by points (from (Williams, 1984a) (Williams, 1984b)). The points are landmark values where transitions of interest occur. The qualitative value of x is denoted [x]. One quantity space (Forbus, 1982) of particular interest is [x] = + iff x > 0, [x] = 0 iff x = 0, and [x] = - iff x < 0. (Notice that to have a landmark of k we use the qualitative space of [x - k].) The behavior of device components is described by qualitative equations. Arithmetic is straightforward, except for the case of addition of opposite signs, when the result is ambiguous. The quantitative law that flow through a constriction is proportional to the pressure across it (i.e., F = kP) is represented qualitatively as [F] = [P]; k drops out as it is always positive.

The valve of the pressure regulator (Figure 1) has three operating regions, each characterized by different equations. When the valve closes, the qualitative equation [F] = [P] no longer holds, as the area available for flow is zero ([A] = 0) and flow is zero ([F] = 0) no matter what the pressure across the valve ([P]). Analogously, when the valve is completely open there is no longer any restriction to fluid flow, but the pressure across the valve is zero ([P] = 0).

    OPEN:    [A = A_max], [P] = 0
    WORKING: [0 < A < A_max], [F] = [P]
    CLOSED:  [A = 0], [F] = 0
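The one subtlety in the arithmetic, the ambiguous sum of opposite signs, is easy to make concrete. The sketch below is my own illustration in Python, not the authors' program:

    def qadd(a, b):
        """Qualitative sum over the sign space {+, 0, -}: the set of possible
        signs of x + y given [x] = a and [y] = b.  Adding opposite signs is
        the one ambiguous case and yields all three values."""
        if a == '0':
            return {b}
        if b == '0' or a == b:
            return {a}
        return {'+', '0', '-'}

    assert qadd('+', '+') == {'+'}             # same signs: exact
    assert qadd('+', '-') == {'+', '0', '-'}   # opposite signs: ambiguous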
The other important component of the pressure regulator is the sensor, which measures the output pressure to set the size of the valve opening. The sensor acts by converting the output pressure to a force with the diaphragm, pushing down on the spring. So increased pressure causes decreased valve area. The diaphragm moves to a position such that the spring force (kx) balances the force exerted by the pressure on the diaphragm (P·A_diaphragm). Thus, the distance the spring compresses (x) is proportional to the pressure (P). The area obstructed by the valve, A_obstructed, is proportional to x. The area available for flow (A) is A_max - A_obstructed; thus qualitatively [P] = [A_max - A] = [A_max] - [A] = [+] - [A]. (We usually use + as a qualitative value, but in ambiguous contexts such as this equation we use "[+]" instead.)

The above models determine the relationships between qualitative values of the pressures and flows of the pressure regulator. We are interested in developing a model to predict how a change in any one of these quantities causes changes in the others. For this we need to define a qualitative derivative of a quantity. Just as we write [x] for the qualitative value of x, we write [dx/dt] for the qualitative value of dx/dt, or, abbreviated, ∂x.

In the WORKING mode of the pressure regulator, the qualitative differential equations for the valve and sensor are ∂F = ∂P + ∂A and ∂A = -∂P_out. These equations are both derived from the quantitative equations relating these variables. The form of the quantitative equation is F = AP, thus

    dF/dt = A dP/dt + P dA/dt,  with A, P > 0.

As A and P are always positive, this reduces to the simpler qualitative equation ∂F = ∂A + ∂P. Notice that if we tried to derive the qualitative relation of the derivatives from the previously given qualitative equation relating flow and pressure ([F] = [P]) and differentiated, we would get ∂F = ∂P, which is incorrect.

By definition of pressure, ∂P = ∂P_in - ∂P_out. Presumably the pressure regulator delivers an output to a load which demands more flow as pressure increases, so ∂P_out = ∂F. Using these it is possible to determine the qualitative response to an input pressure rise ∂P_in = +: ∂P_in = +, ∂P_out = +, ∂P = +, ∂F = +, ∂A = -. This solution indicates that although the output pressure rises, the drop across the valve increases and the area available for flow decreases, thereby reducing the amount of the output rise. However, the qualitative solution neither addresses how the pressure regulator achieves this behavior nor its gross time behavior, such as whether it completely closes, opens or oscillates. See (de Kleer and Brown, 1984) for a discussion of causal explanation; here we provide a framework for reasoning about its gross time behavior.

SIMULATION

As the input pressure rises, the output continues to rise and the area available for flow continues to drop. Conversely, if the pressure drops, the area increases. If enough time passes the valve may completely CLOSE (∂A = - causes A to reach threshold 0) or OPEN (∂A = + causes A to reach threshold A_max); the qualitative equations change and hence the behavior changes. The basic simulation loop analyzes the behavior over time as follows:

(1) Start with some initial state.
(2) Solve for the qualitative changes in each quantity.
(3) Identify those quantities which are moving to their thresholds.
(4) Construct a set of the possible next states from these transitions.
(5) For each next state not yet analyzed, recursively go to (2).
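A minimal Python rendering of this loop (my own sketch; `solve_derivatives` and `possible_transitions` are hypothetical stand-ins for steps (2)-(4)):

    def envision(initial_state, solve_derivatives, possible_transitions):
        """Nondeterministic qualitative simulation: explore every state
        reachable from `initial_state`, following steps (1)-(5) above.
        States must be hashable."""
        seen, agenda, transitions = set(), [initial_state], []
        while agenda:                                        # step (5)
            state = agenda.pop()
            if state in seen:
                continue
            seen.add(state)
            derivs = solve_derivatives(state)                # step (2)
            for nxt in possible_transitions(state, derivs):  # steps (3)-(4)
                transitions.append((state, nxt))
                agenda.append(nxt)
        return seen, transitions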
This nondeterministic simulation algorithm identifies all of the states reachable from the initial state and all possible transitions between them. The device can be in each state for an interval of time, so the time-line of the device is simply a sequence of intervals, each associated with some state. Step 4 is expanded into the following generate and test sequence:

(4a) Construct a partial description of succeeding (continuous) states from threshold information (Rule 0): the plausible next states are generated using the qualitative integration equation for each significant† device quantity, [x_next] = [x_current] + ∂x_current.
(4b) Generate noncontradictory states (Rule 1) which match the partial descriptions generated in step 4a.
(4c) Check all transitions from the current state to potential successors using rules 2 through 6.

†A significant quantity defines a component operating region or is an independent state variable.

We summarize the rules for generation and testing here and explain and exemplify each more extensively afterwards.

- Rule (0) Value continuity: values must change continuously over a transition.
- Rule (1) Contradiction avoidance: the system cannot transition to a state which is inconsistent with respect to the qualitative equations.
- Rule (2) Instant change rule: changes from zero happen instantaneously, and no other changes can happen at an instant.
- Rule (3) Derivative continuity: Rule 0 also applies to derivatives.
- Rule (4) Derivative instant change rule: Rule 2 also applies to derivatives.
- Rule (5) Higher-order derivatives: Rules 0 and 2 apply to all orders of derivatives.
- Rule (6) Change to all zero derivatives is impossible: a quantity which is non-zero at some instant cannot ever become identically zero.

We list out rules 3 through 5 separately from 0 and 2 to give examples of different levels of analysis.

RULE (0): VALUE CONTINUITY

We define continuity for qualitative variables. A change is continuous if the value goes from an interval to its bounding point(s), or from a point to one of its two neighboring intervals. In the quantity space used here the continuous changes are between 0 and + or - (in either direction), but not between + and -. The continuity rule is: no quantity may change discontinuously in any transition between states.

RULE (1): CONTRADICTION AVOIDANCE

Step 4 of the simulation algorithm gives a partial description of potential next states. The qualitative equations determine the values of the remaining quantities. In many cases there are no possible values which are consistent with both the qualitative equations and the partial description generated by step 4. This eliminates potential transitions generated in step 4a, and often avoids having to decide which possible transition occurs first ((Williams, 1984a) uses transition-ordering in his analysis instead).

We can use this rule to prove that the valve can't CLOSE, i.e., even though the area available for flow is decreasing, it will never reach zero. If the valve is closed, then [A] = 0; then by the valve equation [F] = 0. The load is passive, [F] = [P_out], so [P_out] = 0. Substituting into the sensor equation [A] = -[P_out] + [+], we get [0] = -[0] + [+], a contradiction. In the pressure regulator it is possible to argue that the valve cannot close in the following way.
Every increment in input pressure causes smaller and smaller decrements in valve area; therefore the area approaches zero asymptotically (i.e., becomes arbitrarily close to zero but never reaches zero). This asymptotic argument is unnecessary if one sees that the closed state is inconsistent. Thus, contradiction avoidance substitutes for all sorts of sophisticated reasoning.

An alternative to the simulation algorithm, the envisioning algorithm, provides a computationally more elegant method of eliminating transitions to inconsistent states. As a precursor to the loop, the envisioning algorithm identifies all possible legal device states. Then step 4 only considers transitions to legal device states. Another advantage to generating all states is that when all legal transitions have been identified, one can easily notice unreachable sets of states and orphaned singlets.

QUALITATIVE AMBIGUITY

The power of the remaining rules is illustrated by examining the diaphragm-spring-stem fragment of the pressure regulator. If the input pressure increases, the output pressure increases, producing a force on the diaphragm. This force acts against the spring force and friction. The valve slowly gains velocity as it closes; however, by the time it reaches the position where the force exerted by the pressure balances the restoring force of the spring, the valve has built up a momentum causing it to move past its equilibrium position, thus reducing the pressure below what it should be. As it has overshot its equilibrium the spring pushes it back; but by the same reasoning the valve overshoots again, thereby producing ringing or oscillation. Figure 2 illustrates the essential details: a mass situated on a spring and shock absorber (i.e., friction).

Figure 2 : Mass-Spring-Friction System (x = 0 at the equilibrium position)

The behavior of the mass is described qualitatively by Newton's Law, F = ma, or [F_mass] = ∂v. Hooke's Law for the spring, F = -kx, becomes ∂F_spring = -[v]. The resistance of the shock absorber is modeled by [F_friction] = -[v] and ∂F_friction = -∂v. For simplicity's sake, define x = 0 as the mass position with the spring at equilibrium, and x > 0 to be to the right. The net force on the mass is provided by the spring and shock absorber: F_mass = F_spring + F_friction, or qualitatively [F_mass] = [F_spring] + [F_friction]. This system of four qualitative equations has thirteen possible solutions (interpretations) (see Table 1).

                   1  2  3  4  5  6  7  8  9  10 11 12 13
    [F_mass]     = 0  -  -  -  0  +  +  +  +  +  0  -  -
    [F_friction] = 0  -  0  +  +  +  +  +  0  -  -  -  -
    [F_spring]   = 0  -  -  -  -  -  0  +  +  +  +  +  0
    [v]          = 0  +  0  -  -  -  -  -  0  +  +  +  +

Table 1 : Solutions to Mass-Spring Equations

In many cases of qualitative reasoning, one of the interpretations is correct; the remaining are theoretically possible but unintended modes of operation (de Kleer, 1984). However, the mass-spring system oscillates by moving between these interpretations. Movement between interpretations is governed by the derivatives of the quantities, which are determined by the equations: ∂v = [F_mass], ∂F_spring = -[v], ∂F_friction = -∂v = -[F_mass], and ∂F_mass = ∂F_friction + ∂F_spring = -[F_mass] - [v]. Table 2 gives the values of the derivatives.

                     1   2    3  4  5  6    7    8    9  10 11 12   13
    interpretations  1   123  1  1  1  123  123  123  1  1  1  123  123
    ∂F_mass        = 0   +0-  +  +  +  +0-  +0-  +0-  -  -  -  +0-  +0-
    ∂F_friction    = 0   +    +  +  0  -    -    -    -  -  0  +    +
    ∂F_spring      = 0   -    0  +  +  +    +    +    0  -  -  -    -
    ∂v             = 0   -    -  -  0  +    +    +    +  +  0  -    -

Table 2 : State Splitting By Derivatives

Note that the derivatives themselves are sometimes ambiguous.
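Table 1's thirteen interpretations can be checked mechanically. The sketch below is my own (it repeats the qadd routine from the earlier sketch so it stands alone) and enumerates the sign assignments satisfying [F_friction] = -[v] and [F_mass] = [F_spring] + [F_friction]:

    from itertools import product

    NEG = {'+': '-', '0': '0', '-': '+'}

    def qadd(a, b):
        # qualitative sum of two signs: set of possible result signs
        if a == '0': return {b}
        if b == '0' or a == b: return {a}
        return {'+', '0', '-'}

    def mass_spring_solutions():
        """All sign assignments satisfying [F_friction] = -[v] and
        [F_mass] in [F_spring] (+) [F_friction] -- Table 1's states."""
        sols = []
        for v, f_spring, f_mass in product('+0-', repeat=3):
            f_friction = NEG[v]
            if f_mass in qadd(f_spring, f_friction):
                sols.append((f_mass, f_friction, f_spring, v))
        return sols

    assert len(mass_spring_solutions()) == 13   # the thirteen interpretations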
Table 2 illustrates how much work we get from the contradiction avoidance rule. For example, state 6's derivative equations have three interpretations, which we notate 6-1, 6-2, and 6-3. Derivative interpretations are only ambiguous in ∂F_mass, so state 6-3 refers to the state in which ∂F_mass = -. In state 6-3, every quantity is approaching its zero threshold, since [x] = -∂x for all quantities. As we have no information about which can happen first, or happen together, all possible combinations of transitions need to be considered. As there are 4 possible transitions, there are 2^4 - 1 possible choices. Only 3 of those 15 possibilities are realizable, because 12 of the resulting states are contradictory. This simple rule eliminates the need for more sophisticated rules often used for transition ordering. For example, (Williams, 1984a) (Williams, 1984b) uses the rule: if x and y are heading for a threshold, and x = f(y) holds at the threshold as well, transitions in x and y co-occur. All applications of this specialized rule, as well as many others, are covered by the contradiction avoidance rule.

Figure 3 illustrates some of the possible states and all state transitions generated by the algorithm using just rules 0 and 1. As we don't have any information about the 2nd order derivatives, we first assume all transitions between first order solutions are possible.

Figure 3 : State Transitions of the Spring-Mass System

After applying the contradiction avoidance rule, there are still a large number of impossible transitions shown in this graph that can be eliminated by rules 2 through 6. Each numbered arc is impossible; the number indicates which rule eliminates it.

RULE (2): INSTANT CHANGE RULE

In state 3, the mass is not moving, but the force of the spring is pulling to the left. Thus, the mass has moved as far as possible to the right. Envisioning predicts states 4 and 5 as possible successors. The transition from state 3 to state 5 is impossible. In state 4, the mass has started moving to the left in response to the spring pulling it towards the wall. In state 5 the mass has a velocity to the left, but there is no net force on the mass; thus the mass must have moved close to its equilibrium position, where the weakened spring force perfectly balances friction. To transition from 3 to 5, the mass would have to have moved close to its equilibrium position at the same instant it began to move. More formally, any change in any quantity from zero happens before any change of a quantity to zero. Consider two quantities (at some time) [x] = +, ∂x = - and [y] = 0, ∂y = +. As [x] = +, x = k > 0; thus if ∂x > -∞, it will take some time for x to drop to zero. However, y becomes greater than 0 in an arbitrarily short time period, so this happens first. In the case of state 3, [v] = 0, ∂v = - and [F_mass] = -, ∂F_mass = +, so [v] = - occurs first. Thus, state 3 can transition to 4 but not 5. This is equivalent to case (a) of the equality change rule of qualitative process theory (Forbus, 1982).

RULE (3): DERIVATIVE CONTINUITY

All derivatives* must be continuous in a continuous system with well-behaved inputs. This rule has consequences even if the derivatives are not computed: although the derivatives may be unknown, the quantities must still vary continuously. This rule has consequences both within interpretations and between interpretations. All transitions between states labeled n-1 and n-3 are impossible because ∂F_mass cannot continuously change between + and -. More interestingly, this rule imposes a sense of direction on the state diagram. State 5 can transition to state 6-1, but not vice versa. For state 5 to transition to state 6, ∂F_mass must be + so that [F_mass] can change from 0 to +. For [F_mass] to change back to zero, ∂F_mass must be - (i.e., the system must be in state 6-3). For 6-3 to transition to 5, ∂F_mass must change from - to +, which is ruled out by the rule. As a consequence of this rule it is possible to prove that oscillation between two states is not possible unless both of their derivatives are ambiguous.

*Some transitions corresponding to operating region shifts (not ambiguities) need to be handled with some care. For example, a piece-wise linear model has undefined derivatives at the joints.

RULE (4): DERIVATIVE INSTANT CHANGE RULE

All quantities must obey rule 2, even if their derivatives are unknown. Thus transitions between a situation where ∂x = 0, ∂y = + and one where ∂x = +, ∂y = 0 are impossible. As a consequence, transitions between states 5 and 6-2 are impossible.
This contradicts case (b) of the equality change law of qualitative process theory (Forbus, 1982), and thus we produce a different analysis than he does. By rules 1-4 it is possible to prove that oscillation requires a minimum of 8 states.

INSTANTS

Any state in which a quantity is constant and its derivative non-zero is momentary (e.g., [x] = 0, ∂x = +). More generally, if any zero quantity changes, the state is momentary. As a consequence, the ontology for time is expanded to instants (corresponding to momentary states) and intervals. If more than one zero quantity has a non-zero derivative, we can either think of them changing one at a time or all at once. By modeling what happens as a series of instants we get an intuitively satisfying sense of causality; by grouping these instants in a single instant we get consistent transitions with simultaneous changes from instant to following time interval. As a consequence of rules 2 and 4, if [x] changes from 0, no other ∂y can change back to 0, so any tendencies to change will persist,
‘I’hc dropping ball cxamplc, mentions three orders of dcrivativcs and thus rcquircs solving for second dcrivativcs. Notice that as the spring- mass system is a second order system WC arc guaranteed that if the system is in state 1, it cannot move out by itself. An alternate solution suggested by (Williams, 1984a) (Williams, 1984b)is to rcwritc the integration rule for instants as [z,,,~] = [%urrentl + ax,,ezt (which cm bc proven from the Mean Value ‘I’hcorcm). If and only if thcrc is any non-%cro Pz at the instant, xnezt, %d, *a*, an-1x7Lczt will be non-zero in the following interval (by integration). The two problems with Williams’ formulation are: first, it rcquircs knowing what happens next to know what happens next; second. it is conscqucntly dificult to tell whether the current state is momentary or not. Hc avoids the second problem by an axiom requiring that intervals and illStilIltS must altcrnatc. l’hcreforc it is always possible to tell whcthcr the current state is momentary. 13~ rules 2 and 4, if &E is non-zero at an instant, it is non-zero in the interval after. so the only difficult case occurs if [z] = ax = 0. This cast is handled by considering all states that satisfy [z,,,,,] = altnczt as possible next states. Rules 0 through 4 apply to all dcrivativc orders. Recall that higher-order qualitative derivatives arc not dcfincd in terms of lower order qualitative dcrivativcs as is done in conventional calculus. a(&)) makes no scnsc. This was illustrated for the valve equation. The higher-order qualitative dcrivativc must bc defined in terms of the quantitative dcrivativc. Ps = [$$I. For brevity wc somctimcs use aox = Ix]. For linear systems, computing higher-order derivatives is easy. DifTcrcntiating a linear equation products a linear equations so the form of the equations dots not change. As thcrc arc finitely many sohtions to t.hCSC equations, it is easy to represent in a finite structure all higher order derivatives. h the mass-spring system is linear, diffcrcntiating the models does not change their essential form: LM-*v = p~~,,~~~, ~~~~~~~~~~~~ = -Pv, a"+'F frrchon = -ant-l v = -PF,,rass, and a"+'F tnass = anflFj,tctm + a7'+1J?~,,,,, = PF,,j, - PV ‘l’ablc 3 summarizes the solutions for state 6. 1 2 3 3 3 au = +++++ aF/rrcttun = - - - - - a&mng = +++++ aFnLas.9 = + 0 + 0 - 1 1 1 3 3 a2v = +o--- a2Ffr*ctron =- 0+++ a'LFsprrny = - - - - - a"F,,*,, = - - + 0 - Table 3 : Higher-Order IIcrivativcs of State 6 ‘I’hcsc second-order dcrivativcs SINJW tililt state 6-1-l CiII1 transition to stale 6-2-1, but that 6-2-l cannot transition back. In terms of higher-order dcrivativcs the rules can bc summnri/.cd succinctly: (0,3) ang,,,,t = a7h,,,,,,,,t -1 a~+~~~~,.vrrrr,l, 772 first non-zero derivative. (1) Avoid contradictions at all dcrivativc orders. (2,4) Any change from Tcro happens first. ‘I’wo states arc difl’crcnt if they dill’cr in any known Pz. A state is momentary iff a% = 0 and a”+’ z # 0. whcrc n+l is the qualitative order of the system. RULK(6): NO CHANGK ‘I‘0 ALI, %I’RO I)I’RIVA’fIVES A transition (subject to the same caveats as rule 3) cannot go from a state whcrc a quantity is non-zero to one whcrc it and all of its dcrivativcs arc zero. ‘I’his rule climinntcs tlic transitions from states 6-3 and 12-1 to state 1. because all Pz arc zero in state 1. This rule is justified by the Taylor expansion. ‘I’akc for cxamplc [v]. 
v(t) and all its derivatives are continuous over all the states (there is no change in operating region, so there is no possible way for a discontinuity to occur). In state 1, v and all its derivatives are zero. However, we can write v as a Taylor expansion around some time point when the device is in state 1. As all the derivatives of v are zero, v must necessarily be zero everywhere. Thus, if the device is in state 1 it will always remain in state 1 and has always been in state 1. Therefore the transition from 6-3 to 1 is impossible, as v is non-zero in state 6 and zero in state 1.

QUALITATIVE vs. QUANTITATIVE

The quantitative solution to the spring-mass system is of the form e^(-kt)·sin(ωt), i.e., a damped sine wave (Figure 4a). Figure 4b illustrates the qualitative state diagram after all the rules have been applied. Qualitative reasoning obtains a qualitative description of the behavior of the mass-spring system without recourse to quantitative methods.

Figure 4 : Qualitative and Quantitative Behavior

Below we give an English description of the states indicated in Figure 4. If the system starts in state 1 it remains there, and if the system starts in any other state it cycles through states 2 through 13 ("*" indicates instants).

(1) A quiescent state which the system cannot leave.
(2) The mass is to the right of equilibrium and decelerating.
(3) The whole system is stationary at the extreme right end of motion.*
(4) The spring pulls the mass back towards equilibrium.
(5) Near equilibrium the spring force has become weak, equaling friction.*
(6) Friction dominates spring force.
(7) Mass reaches equilibrium position, but momentum carries it past.*
(8) Spring begins to compress, system decelerates.
(9) Spring is completely compressed to the left, mass stationary.*
(10) System begins rightward movement towards equilibrium.
(11) Near equilibrium the spring force has become weak, equaling friction.*
(12) Friction dominates spring force.
(13) Mass reaches equilibrium, but momentum carries it past to the right.*
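For reference, the quantitative form quoted above can be derived in a few lines. The following LaTeX sketch is my own, with hypothetical constants m, k, c for the mass, spring and friction:

    % Newton + Hooke + viscous friction:
    %   m\ddot{x} = -kx - c\dot{x}.
    % For light damping (c^2 < 4km) the characteristic roots are complex,
    % giving the damped sine wave of Figure 4a:
    \[
      x(t) = A\,e^{-\frac{c}{2m}t}\,\sin(\omega t + \phi),
      \qquad
      \omega = \sqrt{\frac{k}{m} - \frac{c^{2}}{4m^{2}}}.
    \]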
OPEN PROBLEMS

We presented the fundamental laws of time-like behavior: qualitative integration/continuity, contradiction avoidance, moving off instants and moving to zeros. These simple, but general and powerful, laws capture what would otherwise require sophisticated inference techniques.

Figure 4b does not include possible transitions to quiescence. This is technically correct (using Newton's Law, Hooke's Law, and friction) - the exponential decay in oscillation amplitude approaches zero asymptotically. However, common sense tells us that the oscillation must eventually halt. What kind of qualitative equations correctly model the common-sense physics that a transition towards quiescence is possible: perhaps a model of Coulomb friction, or some sort of qualitative "quantum" mechanics? (Forbus, 1982) and (Williams, 1984a) define this problem out of existence by assuming an axiom that all approached thresholds are eventually reached. Although the momentary states of Figure 4b must end, there is no guarantee that any particular interval will end. The ambiguity of qualitative analysis does not allow us to deduce that state 2 ends. For example, we could design a spring whose restoring force rapidly damped out to zero asymptotically with time. Such a spring still obeys the qualitative Hooke's Law, but the system might never stop moving to the right (i.e., the velocity would approach 0 asymptotically, producing no oscillation). Of course, if we knew the spring constant was greater than some fixed landmark (true for non-pathological springs), there would be enough information to determine that oscillation is mandatory - but this requires a more sophisticated qualitative physics.

ACKNOWLEDGMENTS

We had many fruitful discussions with John Seely Brown, Brian Williams and Ken Forbus. Sanjay Mittal commented on early drafts.

REFERENCES

de Kleer, J., "How Circuits Work," to appear.
de Kleer, J. and J.S. Brown, "A Qualitative Physics Based on Confluences," to appear.
de Kleer, J. and J.S. Brown, "Foundations of Envisioning," Proceedings of the National Conference on Artificial Intelligence, pp. 434-437, 1982.
Forbus, K.D., "Qualitative Process Theory," Artificial Intelligence Laboratory, AIM-664, Cambridge: M.I.T., 1982.
Forbus, K.D., "Qualitative Reasoning about Physical Processes," Proceedings of the Seventh International Joint Conference on Artificial Intelligence, pp. 326-330, 1981.
Hayes, P.J., "The Naive Physics Manifesto," in Expert Systems in the Microelectronic Age, edited by D. Michie, Edinburgh University Press, 1979.
Kuipers, B., "Getting the Envisionment Right," Proceedings of the National Conference on Artificial Intelligence, pp. 209-212, 1982a.
Kuipers, B., "Commonsense Reasoning About Causality: Deriving Behavior from Structure," Tufts University Working Papers in Cognitive Science No. 18, May 1982b.
Williams, B.C., "Qualitative Analysis of MOS Circuits," to appear in Artificial Intelligence, 1984a.
Williams, B.C., "The Use of Continuity in a Qualitative Physics," Proceedings of the National Conference on Artificial Intelligence, 1984b.
TOWARDS A BETTER UNDERSTANDING OF BIDIRECTIONAL SEARCH

Henry W. Davis, Randy B. Pollack, Thomas Sudkamp
Wright State University

ABSTRACT

Three admissible bidirectional search algorithms have been described in the literature: a Cartesian product approach due to Doran, Pohl's BHPA, and Champeaux and Sint's BHFFA2. This paper describes an algorithm, GP, which contains the latter two and others. New admissibility results are obtained. A first order analysis is made comparing the run times of Cartesian product search, two versions of GP, and unidirectional A*. The goal is to gain insight on when bidirectional search is useful and direction for seeking better bidirectional search algorithms.

1. INTRODUCTION

A problem with Pohl's BHPA [7,8] was that search trees did not meet near the middle and the algorithm performed poorly. To remedy this, Champeaux and Sint [2,3] proposed using a "front-to-front" heuristic. Their first algorithm based on this, BHFFA, solves the problem of the search trees meeting in the middle and generally gives a higher "quality" of solution than unidirectional A*. Unfortunately it runs longer and is not admissible. To deal with the admissibility problem, Champeaux [1] has described a somewhat complicated algorithm, BHFFA2, which also uses "front-to-front" heuristics.

The algorithm of section 2, GP ("generalized Pohl"), restates Pohl's BHPA with greater generality and adds additional features. One feature (step (2.2)) gives high symmetry to the search. The result is that GP includes BHFFA2, as well as BHPA. For completeness we add a feature which allows GP to be used with graphsearch as well as ordered search procedures. We consider other dynamic heuristics than the one used by Champeaux and Sint. One of them ((2) in section 2.3) makes Pohl's original BHPA admissible while assuring that the search trees meet in the middle, the major goal of BHFFA2. Another (GP2 in section 4) reduces some of the list processing and H-calculation overhead of the traditional OPEN-OPEN approach.

In section 3 we show that GP is admissible in a variety of situations not previously considered. We also state a result that one may prune OPEN ∪ CLOSED in the latter stages of a GP-search without affecting admissibility.

Section 4 makes a first order analysis of where several algorithms spend their run time. Two versions of GP, Cartesian product search, and unidirectional A* are compared in several heuristic situations. The number of nodes expanded, the number of H-calculations made, and the amount of list processing are examined in a worst case analysis using a search space previously considered by Pohl, Champeaux, and Sint. The results suggest that, compared to unidirectional search, GP performs favorably with respect to nodes expanded and list processing, but unfavorably with respect to H-calculations unless the heuristic is very weak. The fact that in most of the categories considered (Table 3) some bidirectional algorithm performs better than unidirectional search suggests that substantial improvement in admissible bidirectional search algorithms may be possible.

We have provided a proof of the main admissibility theorem. Due to a shortage of space we do not include proofs for other results mentioned. They will be submitted for publication.

2. A GENERALIZED POHL ALGORITHM FOR BIDIRECTIONAL SEARCH

2.1 Assumptions and Notation

Assume that the search space is a locally finite graph, G, whose arc lengths are bounded uniformly above zero. Arcs may be traversed in either direction.
We seek a path connecting s, t ∈ G. The following notation is used:

x         s or t.
x̄         s if x denotes t, and t if x denotes s.
x_EXPAND  A node expansion routine described later.
x_OPEN    Set of nodes which have been discovered (generated) by x_EXPAND and are awaiting possible expansion.
x_CLOSED  Set of nodes which have been x-expanded (and are not currently in x_OPEN in the ordered search case).
H*(m,n)   Actual cost of a least cost path from m to n. H*(m,n) = ∞ if no such path exists.
H(m,n)    Heuristic estimate of H*(m,n).
g_x(m)    Cost of least cost path so far found from x to m by GP.
g*_x(m)   Same as H*(x,m).
h*_x(m)   Same as H*(m,x̄).
h_x(m)    A heuristic function which estimates h*_x(m).
f_x(m)    g_x(m) + h_x(m).
Pohl showe$ optimistic that when the h, are (ie h, $ hx) BHPA is admissible. (In F.81 consistency is also assumed but C7;page 991, points out that consistency is not required.) The Champeaux-Sint tlfront-to-frontll heuristic function is given by (1) h,(z) = min{H(z,y) + gp(y): ye Y-OPEN]. To obtain BHFFA2 from GP one uses (1) and makes appropriately a number of the choices left arbitrary in GP, namely: (a> lb) cc> (4 (e> In (2.1) choose x=s or t so as to avoid expanding a node from s OPENn t OPEN whenever possible. When in (2.1) GP is forced to expand a node w G s-OPENP\t-OPEN it chooses w to be that node (or one of those nodes) in {y: yc x OPEN, fx(y) is minimum) whose g,(w) + g,(w) value is as small as possible. (2.4) is omitted and ordered search is used. Champeaux's routines EXPAND1 and EXPAND2 are obtained in GP by performing steps (2.2a), (2.2b), and (2.2~) whenever possible, Set x TESTi = x-CLOSED, i=1,2, x=s,t. It is shown in Cl] that if H is optimistic (2, Hs H ) then BHFFA2is admissible. It is not hard to find examples of H being optimistic while the heuristic function holds) is not optimistic. h, based on H (ie (1) -m Interestingly there is a simple "front-to-front" heuristic function which is optimistic. Therefore it makes Pohl's BHPA admissible while assuring that the search trees meet in the middle. We show in an appendix that when H is optimistic so is (2) h,(z) = I- min(H(z,y) + g?(y): Y G K-OPEN} if z & Y-CLOSED 1. min{gz(z))W IH(z,y) + g?(y): y&;ji OPEN) otherwise The definition of equation (1) can be extended as follows: Let S(x)C x OPENLIx CLOSED, x=s,t. Define - - (3) h S'H(z> X = min(H(z,y) + g;;(y): ye S(K)} We call S(x) a target set and say S is admissible -- if GP has an admissibility theorem for h, SfH hen H is optimistic, This does not imply that hx '9' is 69 optimistic, although it may be. Pohl and Champeaux, respectively, showed that {x1 and x OPEN are admissible. Another admissible set is S;-(x) = {x1U x-CLOSED; in fact, for this S, h S#H is optimistic (see appendix). An advantagz of Sl(x) over x OPEN is that it's smaller. The problem with such a relatively static non-frontal target set is that H-calculations may become fixed to an interior node (eg, s or t) which erroneously looks close to the opposite front due to an unseen hill. S2(x) = {PTx(y): y& x-OPEN1 is smaller than Sl(x) and doesn't have the interior node problem. It is admissible. One may extend the notion of target set to allow dependence on two variables and, thereby, incorporate (2) into (3). Let S(j;,zi = ( i OPEN if z$ F CLOSED {?,c/ ? OPEN otherwise Then S is admissible: Replacing S(x) in (3) with S(j;,z) gives (2). 3. ADMISSIBILITY 3.1 Admissibility Theorem. Theorem GP is admissible is either (i) or (ii) hold: (i) h, is optimistic and x-TEST2 f x-CLOSED, x=s, t (~0;l&pPA). (ii) h is given by (3) in section 2.3; H ixS optimistic; S(x) is either x OPEN or IPTx(Y) : yc X-OPEN); and at leafi one of (a) or (b) hold: (a) Ordered search is used and (2.2a), (2.2b), (2.2~) of GP are executed whenever possible (Champeaux's BHFFA2). (b) It is the case that (bl) x TEST2 f x CLOSED, x=s,t, and (b2) either x-TEST13 x-OPEN for x=s,t or x-TEST23 x OPEN for x = s,t. - The theorem may be proven by technical modifications of the original admissibility arguments in [53, a testimony to the robustness of those arguments. 
One first proves the classical "partial solution on open" lemma and uses it to eliminate each of the following cases: (1) GP never halts; (2) GP halts with no solution; and (3) GP halts with a non-optimal solution. We prove here GP admissibility only for assumptions (ii). Assumption (i) may be handled along the lines of [7;pp 98 ff]. Lemma Assume (a) there is a path in G connecting s,t; (b) GP has not yet found a minimal cost path; (c) GP has just completed step(l) and zero or more iterations of the outer loop. Then, if (i) or (ii) holds, there are nodes m(x) 4 x OPEN such that f,(m(x)) f L, where L is the cost of an optimal path (x=s,t). Proof We give the proof for (ii) using Champeaux's argument in [l;Lemma 11. Assume, first, case (iib). Let// = (.3=x0, xl,...,xm = t) 70 be an optimal path. Not all the x are in x CLOSED because otherwiseA would hgve been discovered (x=s,t): this is because (bl) assures that at least ji& x_TESTlf? z-TEST2 so steps (2.3), (2.4) would have foundp . Take j least and k greatest such that x.f s-OPEN and xk& t OPEN. j f- k because otherwi -d e, by (b2), /o wouldhave been discovered. %ince x1 is s CLOSED for all 1 < j, g (X.) Assuming = g (x.) and siFilarly for xk. 8(x-3 = x-O?'ENj we have fs(xj) = gs(xj) + hs(Xj) C go + H(xj, 'k) + gt(xk) C go + - H*b (PT ?;) x k > + g,i(x ) = L. The argument for S(x) = r t. yc x-0 b EN) uses x instead of xk if 'k The argument for ft !z'similar. Now assume (iia). The reason we don't need (bl) to assure that not all x QJJ are x CLOSED is that, due to GP's step (2.2a)q if x' werex CLOSED it would also be x' CLOSED causing,/to be discovered in step (2.3). The reason' we don't need (b2) to assure that j ,C k is that steps (2.2a), (2.2b), (2.2c), together, assure us that x OPEN.0 x' CLOSED =,K The rest of (iia) is like (Eb). The-completes the lemma's proof. To prove the admissibility theorem for GP we must eliminate the three cases mentioned above. Cases (1) and (2) are handled as in the proof of theorem 1 in Cl]. To dispose of case (3) we must show that it is impossible for GP to halt with a non-optimal solution. Suppose that, on the contrary, GP halts with a path of cost L'> L. Then AMIN=L'. GP has not found a path of cost less than L' because otherwise AMIN would be less than Lt and the corresponding path would be reported in step (3). But then, by the lemma, when GP finished its last outer loop there were nodes m(x)& x OPEN satisfying fx(m(x))5Lc:Lt = AMIN, x=s,t. This is impossible because then the halting condition at step (2) could not be triggered. Thus case (3) is impossible. This completes the admissibility proof. 3.2 Admissible Pruning The final GP search stage begins when some solution is found, at which point AMIN becomes finite. One may now prune x OPENU x CLOSED reducing target set sizes and the amount,f list processing: It can be shown that, under assumptions (i) or (ii) of section 3.1, GP remains admissible if (a) new nodes with f,-values >, AMIN are not kept, and (b) old nodes with f -values 1 AMIN are removed from x OPENux CLOSE:. - - 4. A FIRST ORDER COMPARISON In order to get a first order comparison of the total run time of several bidgectional search algorithms and unidirectional A (UNI) the worst case behavior of these algorithms was analyzed in a particular search space. The results are summarized here. 
To a first approximation the run time may be written as an expression of the form * N + .3 H + r"L, where uc.,fl , r are problem specific parameters, N is the number of nodes expanded, H is the number of H-calculations performed, and L is the total length of all lists searched. For example, if H-calculations are cheap while node expansion requires a lot of computer time,theno( should be large andfl small. Our analysis focuses on the values of N, H, L for several algorithms. The search space used was also studied by Champeaux and Sint [31 and Pohl [7;Chapter 71, all of whom calculated N for several algorithms: Let G be an undirected graph containing a countable collection of nodes; two nodes, s, t, have b edges (b > 1) and there is a path of length K between them. From all other nodes emanate b+l edges. There are no cycles and all edge costs are one. We have tabulated data about four algorithms: UNI, X, GPl and GP2. X is a bidirectional search obtained by performing UN1 on G x G. It is apparently due to Doran [43 and we use the precise description found in C2;section 2.11. GPl is a version of GP which uses ordered search, skips steps (2.21, (2.4) and sets x TESTi = x OPEN, i=1,2. We assume the front-to-front heuristic (1) of section 2.3 and alternating direction. When direction is changed GPl never recalculates H-values. Instead it searches a matrix it maintains of relevant H-values. This method was used in a program by Champeaux and Sint [33. We assume that either H(u,v) = H(v,u) or, if not, H returns both values. We include GP2 to illustrate what happens when crucial changes are made in GPl. It is like GPl except that a smaller target set is used and it handles differently the problem of updating f- values on OPEN when direction changes. When a new TABLE '1 GPl 1 GP2 1 v (b2-b)N'/4 (2b-1>N2/4 X UN1 b2N bN (b2+b-l)N2/2 (b4+b2-1)N2/2 (b2+b-1)N2/2 node is generated its H-values are calculated against the target set S(x) = IPT,(y): y G x OPEN). Suppose we change direction to ‘j7 and mustlow obtain the new f-values for nodes on K -OPEN. Instead of recalculating H(u,v) for all u E F-OPEN, v& S(x), we update each hjl(u) with respect to the new members of S(x) that were added since we were last going this direction. The effect is that hji(u) is being calculated with respect to a target set larger than S(x), but this does not effect admissibility. Old nodes have f- values which are a little out-of-date but, if they are erroneously expanded, the children will have accurate information, hopefully preventing the faulty behavior from continuing. The purpose of this is to cut down on the list processing GPl does to maintain its matrix of H-values. Table 1 shows H, L values for the various algorithms in terms of N. We have kept only the highest order terms in N so the entries reflect assymptotic behavior. The X entries are closely related to the UN1 entries because X behave essentially like UN1 with a branching factor of b ,3 instead of b. The H(GP2) calculations were made using a worst case assumption that G x-OPEN11 = I {PTx(Y): Y Ix-CLOSED). The smaller GP2 target set unsurprisingly caused H(GP2) to be smaller than H(GP1) by essentially a factor of b. The casual update procedure for GP2 versus GPl the list processing from O(N3) to O(N2). reduces While these appear good, one must remember that one or both may significantly reduce the heuristic power of GP2 causing N(GP2) to be greater than N(GP1). We have only begun empirical studies which would reveal if this is true. TABLE 2 ! 
TABLE 2

           Perfect Knowledge   Relative error (δ)   Absolute error (δ)   No Knowledge
    GP     K                   2b(K-1)δβ            Kb^[δ]               b^(K/2)-1
    X      K/2                 b^(Kβ-2)/4           (K/2)b^[δ]           b^K
    UNI    K                   Kb^(Kβ)              Kb^[δ]               b^K

TABLE 3

           Perfect Knowledge   Relative error (δ)          Absolute error (δ)   No Knowledge
    N      X < GP = UNI        GP < X < UNI                X < GP = UNI         GP << X = UNI
    H      UNI ≤ X << GP       X < UNI << GP               X < UNI << GP        GP < UNI < X
    L      GP = UNI < X        GP < UNI (δ > 1/2); GP < X  GP = UNI < X         GP << UNI < X
of Machine Intelligence and Perception, Edinburgh University, Scotland, 1966. C5I Hart, P., Nilsson, N., and Raphael, B., A formal basis for the heuristic determination of minimum cost paths, IEEE Transactions on Syst. Science and Cybernetics, SSC-4(2), 19% --- (100-107). E61 Nilsson, N., Principles of Artificial Intelligence, Tioga Publishing Co., Palo Alto, CA, 1980. 171 Pohl, I., Bi-directional and heuristic search in path problems, SLAC Report No. 104, Stanford Linear Accelerator Center, Stanford, CA, 1969. [81 Pohl. I., Bidirectional search. Machine Intelligence, Vol. 6, edited by B. Meltzer and D. Michie, 1971 (127-140). 72
A Forward Inference Engine to Aid in Understanding Specifications*

Donald Cohen
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Abstract: An important part of understanding a specification is recognizing the consequences of what is stated. We describe a program that can help a user acquire this understanding. It does this by deriving interesting, though not deep, consequences of a set of input axioms, while avoiding (a typically much larger set of) uninteresting consequences. The heuristics for obtaining that effect are described and justified. The program has been used in a symbolic evaluator that helps a user to understand and debug specifications written in the Gist specification language.

1. Introduction

A specification can be viewed as a set of facts describing an existing or desired system. This paper describes a program called FIE (for "Forward Inference Engine"), which finds interesting consequences of a set of input facts. Such a "kibitzer" program (a term suggested by Elliot Soloway) can help us to understand the system. This is useful either in designing a new system or in trying to understand an existing system. FIE is the underlying inference engine in a prototype symbolic evaluator for Gist specifications [Cohen 83].

Humans automatically draw consequences of new facts. A kibitzer does the same thing. To the extent that the results overlap, the kibitzer confirms the user's understanding. Results that he failed to anticipate may reveal important properties of the system. Results that contradict his beliefs reveal bugs, either in his mental model of the system or in the formal description.

2. Examples

Imagine a user trying to specify a domain, perhaps as a step in database design. The kibitzer prompts with ">" and announces results in upper case. The user types in lower case. We start with an example of the kibitzer finding an expected result:

    >every person has exactly one sex.
    >no person has a spouse with the same sex.
    NO PERSON IS HIS OWN SPOUSE.
    >why?
    A PERSON WHO IS HIS OWN SPOUSE HAS THE SAME SEX AS HIS SPOUSE.

The next example shows an unexpected result:

    >ships may carry (any number of) cargo objects.
    >no ship carries both grain and fuel (types of cargo object).
    >suppose some ship, s, is carrying some cargo object, c.
    IF S CARRIES ANY GRAIN THEN C IS NOT A FUEL.
    IF S CARRIES ANY FUEL THEN C IS NOT A GRAIN.
    C MUST NOT BE BOTH A GRAIN AND A FUEL.
    >

The first two results are expected. The third seems trivial at first - after all, nothing is both a grain and fuel. However, this is not implied by the specification. (The axioms only imply that no cargo object carried by a ship is both grain and fuel!) An intelligent kibitzer can be expected to refrain from reporting uninteresting consequences. Hence, an apparently trivial output is evidence of a misunderstanding - the kibitzer does not think the result is trivial. Typically, such a result follows trivially from a belief on the part of the user which is not shared by the kibitzer. If the specification is taken to be definitive, this indicates an unjustified assumption on the user's part. If the user is debugging the specification, this indicates an omission. The ship example comes from a Gist specification which declared grain and fuel as subtypes of cargo, but failed to declare them as disjoint. Notice that this is not discovered until we suppose that there is a ship carrying cargo. FIE tends not to "speculate" very far by imagining situations. Rather, the user guides its exploration by providing explicit suppositions.

Finally, we present an example in which an expected result is not found:

    >every party has exactly one candidate.
    >the president is the candidate of the winning party.
    >if the republican party wins, reagan is the president.
    >

When I wrote this example, I expected to be told that Reagan was the Republican candidate. It turns out that this expectation (which is widely shared) rests on an interpretation of "if ... then ..." which does not correspond to classical implication: the formal specification does not match our intent. The failure of an intelligent kibitzer to report an expected result suggests that the user may be wrong to expect it. This may be a symptom of the user's incorrect reasoning or of a missing axiom.

At this point a user would probably like to ask why the expected result does not hold (or whether it does). Another way to phrase this question is: under what circumstances would the expected result not apply. This could be (but has not yet been) implemented by supposing that the result is false, and reporting any interesting consequences, i.e., using the kibitzer as a weak refutation theorem prover:

    >when wouldn't reagan be the republican candidate?
    SUPPOSE THE REPUBLICANS DO NOT WIN.
    >

*This research was supported by the Air Force Systems Command, Rome Air Development Center, under contract No. F30602 81 K 0056, and by the Defense Advanced Research Projects Agency under contract No. MDA903 81 C 0335. Views and conclusions contained in this report are the author's and should not be interpreted as representing the official opinion or policy of RADC, DARPA, the U.S. Government, or any person or agency connected with them.
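The first dialogue above can be checked mechanically. The toy Python sketch below is my own illustration (FIE itself works on clauses, as described later); it encodes the two axioms as a test over candidate models:

    def satisfies_axioms(sex, spouse):
        """sex: person -> one value (axiom 1: exactly one sex, a function);
        spouse: person -> person.  Axiom 2: no person has a spouse with
        the same sex."""
        return all(sex[p] != sex[spouse[p]] for p in spouse)

    # Supposing a person is his own spouse forces sex[p] == sex[spouse[p]],
    # so axiom 2 fails in every such model: FIE's "NO PERSON IS HIS OWN SPOUSE".
    assert not satisfies_axioms({'pat': 'F'}, {'pat': 'pat'})
    assert not satisfies_axioms({'pat': 'M'}, {'pat': 'pat'})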
If the specification is taken to be definitive, this indicates an unjustified assumption on the user's part. If the user is debugging the specification, this indicates an omission. The ship example comes from a Gist specification which declared grain and fuel as subtypes of cargo, but failed to declare them as disjoint. Notice that this is not discovered until we suppose that there is a ship carrying cargo. FIE tends not to "speculate" very far by imagining situations. Rather, the user guides its exploration by providing explicit suppositions.

Finally we present an example in which an expected result is not found:

>every party has exactly one candidate.
>the president is the candidate of the winning party.
>if the republican party wins, reagan is the president.
>

When I wrote this example, I expected to be told that Reagan was the Republican candidate. It turns out that this expectation (which is widely shared) rests on an interpretation of "if ... then ..." which does not correspond to classical implication: the formal specification does not match our intent. The failure of an intelligent kibitzer to report an expected result suggests that the user may be wrong to expect it. This may be a symptom of the user's incorrect reasoning or of a missing axiom. At this point a user would probably like to ask why the expected result does not hold (or whether it does). Another way to phrase this question is under what circumstances the expected result would not apply. This could be (but has not yet been) implemented by supposing that the result is false, and reporting any interesting consequences, i.e., using the kibitzer as a weak refutation theorem prover:

>when wouldn't reagan be the republican candidate?
SUPPOSE THE REPUBLICANS DO NOT WIN.
>

With the exception of English input, all the pieces of the above system exist in prototype form. Instead of English input, we currently use either Gist or predicate calculus. FIE discovers the consequences. In the examples above (except for the unimplemented "why not?" segment) it reported all the results shown and no others. The Gist behavior explainer [Swartout 83] is capable of converting the results to English, and to some extent can explain proofs. For an extended example that illustrates FIE's role in symbolic execution of Gist specifications [Cohen 83], see [Swartout 83].

3. Requirements for an Effective Kibitzer

FIE is the theorem proving component of the kibitzer illustrated above. Unlike a conventional theorem prover, it has no specific target theorem, but rather the more general goal of finding the interesting consequences of its input axioms. We will not attempt to formally define interestingness. Informally, a user should feel that it's worth his time to read the output. One heuristic is that a result is not interesting if it follows trivially from other known results. FIE therefore tries to suppress closely related results. The meaning of "closely related" and the user's influence over it are described later. Other heuristics are related to symbolic execution. For example, Gist allows descriptive reference. It is therefore important to know whether two descriptions refer to the same object. We feel that these heuristics will not prevent FIE from being useful for other applications, though additional heuristics might well be appropriate.

In order to provide useful interactive assistance, FIE must be fairly fast:
it should rarely take more than a minute or so. This precludes the sort of "deep" consequences that challenge today's theorem provers. However, shallow consequences may still surprise or interest a user. (After all, human kibitzing is useful even in the absence of deep consequences.)

4. Overview

FIE can be viewed as a function that accepts a set of "old" facts, modeling a state of understanding, and a set of "new" facts to be integrated into that model. It returns a set of facts equivalent (in the sense of two-way implication) to the union of these input sets. The "interesting results" are the output facts that were not in either of the input sets. Initially the "old" set is empty. Subsequently it contains the results of previous calls. The advantage of dividing the input into two sets is that FIE need not consider interactions among already integrated facts. It simply integrates one new fact at a time. (For efficiency, FIE integrates simpler facts first.)

We now describe how FIE integrates new facts. (If you don't want to see technical details, skip to the end of the paper.) We use terminology common in the literature of logic and automatic theorem proving. Definitions of these terms can be found in [Loveland 78]. FIE relies heavily on well known techniques from resolution theorem proving. Most of this paper describes additions and alterations to these techniques that have been useful in the kibitzing application.

Facts are represented as clauses. A clause, current, is added to the set of old clauses, old, in three phases, which are described in detail in the following sections:

1. Consider current in isolation: it is simplified and canonicalized, and its factors are found (they will also be added).
2. Consider interactions between current and members of old which justify simplifications (of either). Whenever a clause is simplified, the unsimplified version is discarded and the simplified version is put into the set of clauses to be integrated.
3. Consider interactions between current and members of old to generate new consequences.

4.1. Logical language

FIE uses a typed version of first order logic: every variable and object has a type. It is assumed that there is at least one object of each type. (If not, the type should be replaced by a new predicate on objects of a super-type and all inputs should be changed accordingly.) FIE relies on external decision procedures to tell whether two types are disjoint and whether one type is a subtype of another. Some objects are further classified as literals, which are assumed to be distinct objects. All other objects (including skolem functions) are essentially names which may or may not refer to distinct objects. In the examples below we will use letters near the end of the alphabet (e.g., x, y, z) for variables. Function and predicate names can be distinguished by position. Literals will be capitalized. Where appropriate, terms will be subscripted to indicate type.
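To make this control structure concrete, the following is a minimal Python sketch of the three-phase loop; it is our rendering, not FIE's actual Interlisp code, and the helper functions (simplify, simplify_against, consequences) are hypothetical stand-ins for the procedures of sections 5 through 7.

# A minimal sketch of FIE's three-phase integration loop.
# old is a set of clauses; new is the set to be integrated.

def integrate(old, new, simplify, simplify_against, consequences):
    """Return (old', interesting), where interesting holds the derived
    facts that were in neither input set."""
    inputs = set(old) | set(new)
    pending = sorted(new, key=len)            # simpler facts first
    while pending:
        current = pending.pop(0)
        # Phase 1: simplify/canonicalize current in isolation
        # and queue its factors for later integration (section 5).
        current, factors = simplify(current)
        pending.extend(factors)
        # Phase 2: mutual simplification between current and old;
        # rewritten old clauses re-enter the queue (section 6).
        current, rewritten = simplify_against(current, old)
        pending.extend(rewritten)
        if current is None:                   # subsumed by a known clause
            continue
        # Phase 3: combine current with old for new consequences (section 7).
        pending.extend(consequences(current, old))
        old.add(current)
    return old, old - inputs

The division into old and pending mirrors the point above: clauses already integrated into old need never be re-examined against each other, only against each incoming clause.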
5. Processing a Clause in Isolation

In the first phase, current (the new clause to be integrated) is simplified. This is important, but mostly mundane from a technical standpoint, e.g., -True → False, (P V False V Q) → (P V Q), (P V Q V P) → (P V Q), (P V Q V -P) → True.

5.1. Equality Simplification

The algorithm for simplifying equalities makes use of the type structure decision procedures. If the types of the two objects are incompatible, the equality is False. If the two objects are different literals, the equality is False. Other cases specific to symbolic execution are also recognized. For example, in Gist it is possible to create and destroy objects. An object that is created must be distinct from any object that was known to exist before. If none of these apply, the equality is ordered so as to make it preferable to substitute the first term for the second: constants are preferred to variables, terms of more specific type are preferred to terms of more general type, etc. As a last resort, all expressions are ordered alphabetically.

5.2. Substitution for Restricted Variables

The next two steps simplify results that don't seem to arise very often in normal theorem proving. The first substitutes for variables in inequality literals, e.g., -(= a x) V (P x) is rewritten as (P a). In particular, if the inequality is between a variable, x, of type t1, and a term, a, of type t2, where a contains no variables and t2 is a subtype of t1, then the inequality literal is discarded from the clause, and all occurrences of x in the remaining literals are replaced by a. In part, this rule is used to apply substitutions computed in the generalized resolution procedure described below.

5.3. Equality Canonicalization

The next step is analogous but allows inequalities between non-variables, e.g., -(= a b) V (P b) becomes -(= a b) V (P a). Intuitively, someone who knows "if a = b then (P b)" also knows "if a = b then (P a)". In this case the inequality literals must be kept. While the previous rule is a clear simplification, this one is a canonicalization, allowing clauses to be recognized as equivalent.

6. Interactions that Simplify

The second phase uses information in one clause to simplify another clause. We describe modifications to the standard procedures for subsumption and equality substitution.

6.1. Conditional Equality Canonicalization

First the clauses in old are used to further simplify and canonicalize current via substitution of equalities. This may be regarded as an efficient restriction of paramodulation - doing in parallel all paramodulations which can be viewed as simplifying. A special case is demodulation: if we know (= a b), the clause (P b) is rewritten as (P a). More generally, R V (= a b) can be used to rewrite R V (Q b) as R V (Q a). The intuitive justification is that someone who knows "if P then a = b" will consider "if P then Q(b)" equivalent to "if P then Q(a)". As the name suggests, conditional equality canonicalization is especially valuable to FIE in dealing with conditional statements. In general, we argue that if the clauses C and D combine to yield E, then A V C should combine with A V D to yield A V E. This holds for FIE's generation of new consequences (described later), and we think that it would be appropriate for resolution theorem provers in general. (Note that this is not true of ordinary demodulation.) Incidentally, these substitutions do not always yield unique results. Given P V (= a c) and Q V (= b c) we can rewrite P V Q V (R c) in two different ways. We have done nothing about this.

The final generalization is that if R1 implies R2, then R1 V (= a b) can be used to rewrite R2 V (Q b) as R2 V (Q a). The previous rule is obtained if implication is only recognized between identical formulae. Subsumption is the obvious candidate for a stronger recognizer of implication.

As an example, suppose 1. no box has two distinct locations, and 2. every box is at location1. These imply 3. no box is at any location other than location1. One feels intuitively that fact 3 implies fact 1. The general rule allows fact 3 to rewrite fact 1 and subsume the result. Fact 1 is

(= y_loc z_loc) V -(Loc x_box y_loc) V -(Loc x_box z_loc)

which is canonicalized by fact 3,

(= loc1 y_loc) V -(Loc x_box y_loc)

to yield (we have arranged to use the null substitution)

(= loc1 z_loc) V -(Loc x_box y_loc) V -(Loc x_box z_loc)

which is subsumed by fact 3 (using z_loc for y_loc).
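As an illustration of this rewriting step, here is a toy Python rendering (ours, not FIE's implementation) of conditional equality canonicalization on ground clauses; the tuple encoding of literals is assumed for illustration only, and implication is recognized only by null-substitution subsumption.

# A clause is a frozenset of literals; a literal is a tuple such as
# ('Loc', 'box', 'y'), ('=', 'loc1', 'y'), or ('not', literal).

def contains(lit, t):
    if lit[0] == 'not':
        return contains(lit[1], t)
    return t in lit[1:]

def replace(lit, old, new):
    if lit[0] == 'not':
        return ('not', replace(lit[1], old, new))
    return (lit[0],) + tuple(new if x == old else x for x in lit[1:])

def canonicalize(current, rule):
    """Use rule = R1 | (= a b) to rewrite current = R2 | (Q b) as
    R2 | (Q a), when R1 implies R2 (here: R1 is a subset of R2)."""
    for eq in (l for l in rule if l[0] == '='):
        _, a, b = eq
        r1 = rule - {eq}
        for lit in current:
            if lit != eq and contains(lit, b):
                r2 = current - {lit}
                if r1 <= r2:
                    return r2 | {replace(lit, b, a)}
    return current

# Fact 3 canonicalizes fact 1 of the box example (ground stand-ins
# for the paper's typed variables):
fact1 = frozenset({('=', 'y', 'z'),
                   ('not', ('Loc', 'box', 'y')),
                   ('not', ('Loc', 'box', 'z'))})
fact3 = frozenset({('=', 'loc1', 'y'), ('not', ('Loc', 'box', 'y'))})
print(canonicalize(fact1, fact3))
# -> {('=', 'loc1', 'z'), ('not', ('Loc', 'box', 'y')),
#     ('not', ('Loc', 'box', 'z'))}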
This final generalization is relatively expensive in execution time. For example, one subsuming substitution may fail while another succeeds, e.g., Px V a = b can be used to rewrite Pa V Pb only by substituting a for x. However, it is easy to devise cheap algorithms that obtain part of the benefit. The current version of FIE requires R1 and R2 above to be identical. This is usually sufficient, because they are typically (e.g., in the case of branch conditions from conditional statements) single literals immune from substitution.

6.2. Subsumption

Next, FIE checks to see if current is subsumed by any clauses in old. In general, FIE deletes any clause subsumed by a known clause. Thus FIE should recognize C1 as subsuming C2 in just those cases where a person who knows C1 will consider C2 as redundant. To test whether clause C1 subsumes clause C2, the inequality literals of C2 are first used to rewrite C1 as in equality canonicalization. This, in combination with equality canonicalization, allows (P t1) to subsume any clause (regardless of equality ordering) of the form "if t1 = t2 then (P t2)". The resulting clause C1' subsumes C2 if it has no more literals than C2 and some substitution maps each literal of C1' to a literal of C2 (a standard definition). In particular, a clause does not subsume its factors. Factors are often not obvious to humans, and thus constitute interesting results.

6.2.1. Reordering Arguments

When a relation is intuitively commutative, such as the Spouse relation, the commutative variants of known facts cease to be interesting. The user can declare a predicate to be intuitively symmetric (and other properties corresponding to permuting arguments), so that FIE will consider the variants to be "obvious" consequences of each other. The subsumption algorithm computes the variants of each literal and accepts a substitution for any of them. Perhaps other common properties would also be worth recognizing, but we have not had to deal with them yet.
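The subsumption test itself is standard. A compact sketch (assumed, simplified to syntactic matching, without the inequality-rewriting and argument-permutation refinements just described) might look like this:

# c1 subsumes c2 if len(c1) <= len(c2) and some substitution maps
# every literal of c1 onto a literal of c2. Terms are strings;
# names beginning with '?' are variables.

def match_lit(l1, l2, subst):
    """Extend subst so that subst(l1) == l2, or return None."""
    if (l1[0] == 'not') != (l2[0] == 'not'):
        return None
    if l1[0] == 'not':
        return match_lit(l1[1], l2[1], subst)
    if l1[0] != l2[0] or len(l1) != len(l2):
        return None
    s = dict(subst)
    for t1, t2 in zip(l1[1:], l2[1:]):
        if t1.startswith('?'):
            if s.setdefault(t1, t2) != t2:    # inconsistent binding
                return None
        elif t1 != t2:
            return None
    return s

def subsumes(c1, c2, subst=None):
    """Backtracking search for a substitution mapping all of c1 into c2."""
    if subst is None:
        if len(c1) > len(c2):     # FIE's extra size restriction
            return False
        subst = {}
    if not c1:
        return True
    return any(
        (s := match_lit(c1[0], lit, subst)) is not None
        and subsumes(c1[1:], c2, s)
        for lit in c2)

# (P ?x) subsumes (P a) V (Q b):
print(subsumes([('P', '?x')], [('P', 'a'), ('Q', 'b')]))   # True

The size restriction in the first call is what prevents a clause from subsuming its own factors, as noted above.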
6.2.2. Uniqueness Properties

We have described some additions to a large bag of previously known tricks for dealing with equality. The way FIE deals with commutativity is important for its application, but nothing new. In contrast, uniqueness does not seem to have been much studied. We feel we have made progress in building an understanding of uniqueness into FIE. In each case, the ability to discard a result that is too easily derived requires compensation (described later): the component that finds new consequences must be strengthened to avoid losing the consequences of what has been discarded.

When a relation is intuitively single valued, such as the Location relation, negative instances become uninteresting in the face of positive instances. If we know where an object is, there is no need to list all the other locations as places where it is not. The user can tell FIE that he understands certain uniqueness properties of a predicate. (In the case of symbolic execution, this information is already in the specification and the user need not restate it.) The uniqueness properties are of the form:

∀ x, y, z, u, v: (P<xyu> ∧ P<xzv>) ⊃ u = v

where x, y and z represent vectors of variables distinct from each other and from u and v (y and z of the same length), and <xyu> is some permutation of the variables in the concatenation of x, y and (the single variable) u. A given uniqueness declaration must specify the predicate P, the permutation < > and the size of x. The literal P<abc> is considered to subsume the literal -P<def> (here we use c and f to stand for terms and a, b, d and e as vectors of terms) if there is a substitution θ which maps a to d and θ maps c to a term known not to be equal to f. (The current implementation just checks whether (= c f) simplifies to False. We have seen cases where this was inadequate.)

6.3. New Facts Simplify Old

If current (the clause being integrated) is still not simplified, it is used to simplify all the other clauses of old. Any clauses that are rewritten (where we regard subsumption as rewriting to True) are removed from old, and the new version is put into the list of clauses to be added. Finally, current is inserted into old.

7. Deriving New Consequences

In the final phase, current is combined with each clause of old to find new results. This is done with two rules, both closely related to binary resolution. Note that if clauses C1 and C2 are known to be true, and clause C3 is a resolvent of C1 and C2, then C3 must be true.

7.1. Resolution

The major difference between normal resolution and the first rule (which we will refer to simply as resolution) stems from our interest in equality. In normal resolution, it is impossible to resolve P(a) with -P(b). In our case this is allowed, with the result of -(= a b). Deriving inequalities may seem odd from a theorem proving point of view, but the results can be interesting to a person. They also serve as a communication mechanism in symbolic execution: the distinctness of two objects may be important at one time, but only derivable at another time. In general, from the clauses P V (R t1 ... tn) and Q V -(R u1 ... un) (where ti and ui are terms and P and Q are clauses), we derive P V Q V -(= t1 u1) V ... V -(= tn un). Notice that the substitution is stored in the inequality literals of the clause (which, of course, tend to simplify).
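A sketch of this generalized resolution rule, under the same hypothetical clause encoding as the earlier sketches (ours, not FIE's code):

# Rather than unifying the resolved literals, record the required
# identities as inequality literals, which then tend to simplify.
# Clauses are lists; a literal is ('R', t1, ..., tn) or ('not', atom).

def generalized_resolvents(c1, c2):
    """From P | (R t1..tn) and Q | -(R u1..un) derive
    P | Q | -(= t1 u1) | ... | -(= tn un)."""
    results = []
    for l1 in c1:
        for l2 in c2:
            neg1, neg2 = l1[0] == 'not', l2[0] == 'not'
            a1 = l1[1] if neg1 else l1
            a2 = l2[1] if neg2 else l2
            # need one positive and one negative occurrence of R
            if neg1 == neg2 or a1[0] != a2[0] or len(a1) != len(a2):
                continue
            rest = ([l for l in c1 if l is not l1] +
                    [l for l in c2 if l is not l2])
            ineqs = [('not', ('=', t, u)) for t, u in zip(a1[1:], a2[1:])]
            results.append(rest + ineqs)
    return results

# Resolving P(a) against -P(b) yields -(= a b):
print(generalized_resolvents([('P', 'a')], [('not', ('P', 'b'))]))
# -> [[('not', ('=', 'a', 'b'))]]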
7.2. Uniqueness Resolution

The other rule uses the uniqueness information described above. Given that box1 is at N.Y. and box2 is at L.A., it directly derives that box1 and box2 are distinct. Also, given that Joe is at location1 and that Joe is at location2, it directly derives that location1 and location2 are identical. In fact, given a uniqueness declaration, an explicit axiom such as

-(Loc x y) V -(Loc x z) V (= y z)

adds very little in terms of interesting consequences. From the clauses P V R<xyu> and Q V R<wzv>, uniqueness resolution derives

P V Q V -(= x w) V (= u v)

where -(= x w) means -(= x1 w1) V ... V -(= xm wm). In the case of Location, y and z are empty, x and w are the terms representing objects, and u and v are the terms representing locations.

(Loc box1 N.Y.) combines with (Loc box2 L.A.) to yield

-(= box1 box2) V (= N.Y. L.A.)

which simplifies (assuming L.A. and N.Y. are known to be distinct) to

-(= box1 box2)

A more impressive example:

(Loc x_box N.Y.) V (= box1 x_box)    [every box other than box1 is at N.Y.]
(Loc box2 L.A.)

yields by uniqueness resolution

-(= box2 x_box) V (= N.Y. L.A.) V (= box1 x_box)

which simplifies in two steps to

(= box1 box2)

The reordering properties of predicates (e.g., commutativity) are used in resolution (and uniqueness resolution) in the same way as in subsumption.

8. Filtering New Consequences

The two rules above can, of course, generate many new consequences. Some of these will be recognized as closely related to known facts, but in general this is not sufficient to prevent an explosion in the number of clauses. FIE adopts a very simple (and severe) strategy to ensure termination: it considers any resolvent that is more complex than either parent to be uninteresting. Higher complexity is defined as greater nesting of functions or more literals.

Different "versions" of FIE, corresponding to a tradeoff between power and selectivity, may be obtained by varying some implementation parameters. The first of these is where to draw the boundary between "more literals" (uninteresting) and "fewer" (interesting). We have tried three solutions:

- the result must contain strictly fewer literals than one parent;
- the result must contain no more literals than one parent;
- the result must either contain strictly fewer literals, or the same number of literals but strictly more equality (inequality) literals than one parent.

The standard setting for symbolic execution has been the third. How much the results differ, and whether the difference is for better or worse, depends on the problem.

The other, and perhaps more interesting, parameter is how much simplification is done before deciding whether a result is interesting. FIE's results would be fairly predictable (and dull from a theorem proving point of view, though perhaps still interesting to the user) if the decision were made directly on the results of resolution. Often, however, complex results simplify enough to be considered "interesting". So far, we have only processed resolvents in isolation before deciding whether to keep them, but we have seen cases where interactions with other known clauses would have allowed resolvents to be kept. The most complete version (classified as future work) would be to keep "uninteresting" clauses for simplification, but not resolve them unless (until) they were simplified to the point of "interestingness".
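A minimal sketch of this filter in its third setting (our rendering; the exact complexity measure is an assumption based on the description above):

# A resolvent is kept only if, relative to some parent, it nests
# functions no more deeply and either has strictly fewer literals or
# the same number with strictly more (in)equality literals.
# Terms are strings or tuples; literals as in the sketches above.

def nesting(term):
    if isinstance(term, str):
        return 0
    return 1 + max((nesting(t) for t in term[1:]), default=0)

def atom(lit):
    return lit[1] if lit[0] == 'not' else lit

def depth(clause):
    return max((nesting(t) for l in clause for t in atom(l)[1:]), default=0)

def eq_count(clause):
    return sum(1 for l in clause if atom(l)[0] == '=')

def interesting(res, parent1, parent2):
    for p in (parent1, parent2):
        if depth(res) <= depth(p) and (
                len(res) < len(p) or
                (len(res) == len(p) and eq_count(res) > eq_count(p))):
            return True
    return False

# The resolvent from the previous sketch is kept: one literal, like
# its parent P(a), but it is an inequality where the parent was not.
res = [('not', ('=', 'a', 'b'))]
print(interesting(res, [('P', 'a')], [('not', ('P', 'b'))]))   # True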
It must be mentioned that FIE still cannot guarantee a small number of consequences. Knowing of n boxes at different locations will generate n² inequalities, all of which FIE considers interesting. Actually, it is proper to consider as interesting the fact that these n boxes are all distinct. The "problem" is that predicate calculus cannot express that fact succinctly. One could imagine building a new representation for such a fact, extending the subsumption algorithm to recognize it, and building a special inference mechanism to use it. This would be a useful addition for some domains, but other forms of the same problem would remain, such as transitivity: given the axiom that a relation R is transitive, a set of axioms of the form (R ai ai+1) implies all results of the form (R ai aj) for i < j.

(We mention in passing that the complexity cutoff can be programmed to some extent by altering the input clauses. For example, given a clause C containing the literal L, and another clause D which does not contain a literal that unifies with L, one can disjoin L to D to effectively raise the complexity limit without losing termination.)

We have not tried to deal with the problems above (or related problems). This is partly due to the fact that in exploring sets of axioms (including symbolic execution, where individual examples are normally small), one rarely needs many instances: in order to explore the axioms of ordering, we would probably consider three or four objects, not twenty. However, other problems have arisen in practice. These have been solved without new representations or inference mechanisms.

8.1. Conditionality, again

In symbolic execution, a conditional statement (if P then S1 else S2) can be intuitively understood as two possible worlds. There is one set of theorems of the form -P V Ci (for results of S1) and another set of the form P V Cj (for results of S2). Resolving on P produces a cross product of clauses. These always seem uninteresting - they intuitively amount to the case split P V -P. The symbolic evaluator generates a new literal, L, meaning "the THEN branch was taken". (It needs a way to refer to that bit of history anyway - the expression P is insufficient since it refers to mutable domain relations.) FIE discards any result of resolving on L unless it has strictly fewer literals than one of its parents. This accepts consequences that can be derived independent of L, and if L can be proven True or False it allows the appropriate set of clauses to be deconditionalized (and the others to be subsumed). In another setting, the user could tell FIE which literals intuitively correspond to case splits.

8.2. Skolem functions

FIE filters out a large class of consequences that contain skolem functions. In essence, the skolem functions offer alternative representations for certain facts. We prefer to represent these facts only once, in the original (and more natural) way. As an example, consider the clause that every box has a location, (Loc x (f x)). This clause is useful, e.g., if we find a box with no location it would be nice to notice the contradiction. However, it interacts with -(Loc box1 L.A.) to yield -(= L.A. (f box1)), which a person would consider redundant. Given that locations are unique, (Loc box1 N.Y.) implies (= N.Y. (f box1)). As another example, given that every person has a Gender, (Gen x (g x)), and that spouses cannot share a gender, we can derive -(Sp x y) V -(Gen y (g x)), which is hard to explain in English in any terms other than the original axiom, that spouses cannot share a gender. FIE discards all of these (and other similar) results.

9. Related Work

FIE resembles Eurisko [Lenat 83] (and AM [Lenat 76]) in that it searches for interesting extensions to an initial knowledge base.
However, Eurisko is meant to find deep results in many hours of exploration, whereas FIE is meant to quickly point out a few consequences that the user should probably know. Furthermore, Eurisko's results are empirically justified conjectures, whereas FIE's results are theorems.

Forward inference is quite rare in programs with general theorem proving capability, except on a special case basis. Programs like [Nevins 75] use forward inference rules whose form is carefully designed to generate certain types of results. On the other hand, [Bledsoe 78] provides various types of forward reasoning as user options, with no guarantee that they will lead to reasonable behavior - the responsibility belongs to the human user. [Cohen 81] is much closer in spirit to FIE in that it takes total responsibility for deciding which inferences to make. However, it uses forward reasoning purely to integrate new knowledge into its own database. It is not trying to interest an external user.

FIE has strong ties to the large body of work on resolution theorem proving [Loveland 78]. It uses clause representation, and resolution is its primary rule of inference [Robinson 65]. Also, we share with much of this work an emphasis on techniques for recognizing and deleting useless or redundant information, e.g., canonicalization and subsumption.

10. Conclusions

FIE automatically generates interesting consequences from a set of input axioms. One measure of success is that the symbolic evaluations we have tried have reported nearly all the results we expected, and some that were not expected. FIE works hard to avoid uninteresting consequences. The notion of interestingness is heavily dependent on context. In particular, a fact is considered uninteresting if it is too easy to derive from other known facts. FIE finds mostly shallow consequences, but finds them quickly. In the longest symbolic execution to date of a Gist specification, the average call to FIE integrated 10 new clauses with 23 old ones to yield 27 clauses in less than 10 CPU seconds on a VAX 750 running Interlisp. FIE has been used successfully in a symbolic evaluator. The specified domains have included a file system, a package router, a world of people (marriages, children, etc.), and a world of ships (cargoes, ports, etc.). In the future we hope to adapt it to other purposes (e.g., debugging and explaining database schemas).

Acknowledgements: This work was done in the context of a larger effort by the Gist group at ISI. Also, the presentation of this paper was greatly improved by the suggestions of members of the group, especially Jack Mostow.

References

[Bledsoe 78] W. W. Bledsoe and Mabry Tyson, The UT Interactive Prover, University of Texas, Math. Dept. Memo ATP-17A, Technical Report, 1978.

[Cohen 81] Donald Cohen, Knowledge Based Theorem Proving and Learning, UMI Research Press, 1981.

[Cohen 83] Donald Cohen, "Symbolic Execution of the Gist Specification Language," in IJCAI, 1983.

[Lenat 76] Douglas Lenat, AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search, Ph.D. thesis, Stanford University, July 1976.

[Lenat 83] Douglas B. Lenat, "EURISKO: A Program that Learns New Heuristics and Domain Concepts; The Nature of Heuristics III: Program Design and Results," Artificial Intelligence 21, (1,2), March 1983, 61-98.

[Loveland 78] Donald W. Loveland, Fundamental Series in Computer Science, Volume 6: Automated Theorem Proving: A Logical Basis, North-Holland Publishing Company, 1978.
This is cited only as one representative of a large body of work on resolution theorem proving.

[Nevins 75] Arthur J. Nevins, "Plane Geometry Theorem Proving Using Forward Chaining," Artificial Intelligence 6, (1), 1975, 1-23.

[Robinson 65] J. A. Robinson, "A Machine-Oriented Logic Based on the Resolution Principle," JACM 12, (1), Jan. 1965, 23-41.

[Swartout 83] Bill Swartout, "The GIST Behavior Explainer," in NCAI, 1983.
FOCUSING IN PLAN RECOGNITION

Norman F. Carver, Victor R. Lesser, Daniel L. McCue*
Department of Computer and Information Science
The University of Massachusetts
Amherst, Massachusetts, 01003

ABSTRACT

A plan recognition architecture is presented which exploits application-specific heuristic knowledge to quickly focus the search to a small set of plausible plan interpretations from the very large set of possible interpretations. The heuristic knowledge is formalized for use in a truth maintenance system where interpretation assumptions and their heuristic justifications are recorded. By formalizing this knowledge, the system is able to reason about the assumptions behind the current state of the interpretations. This makes intelligent backtracking and error detection possible.

I INTRODUCTION

An important issue for plan recognition in large search spaces is how to rapidly and accurately recognize the current plan based on the observation of a small number of plan steps. The need for this type of plan recognition has arisen in the POISE intelligent user interface system [3, 6, 7]. Hierarchies of plans are used to specify typical combinations of user actions and the goals they accomplish. By recognizing a user's actions in the context of this model of possible actions, POISE is able to provide intelligent assistance to a user (e.g., agenda management, error detection and correction, and plan completion). Figure 1 describes a simple POISE plan that could be used as part of an intelligent assistant for an office automation system.

Plan recognition is a complex task since the recognizer may be forced to keep a very large number of plan interpretations under active consideration because of: 1) concurrency in user activities (i.e., loose constraints on the temporal ordering among plan steps); 2) sharing of plan steps among alternative plans; 3) the possibility that any partial plan might be continued by future user actions; 4) insufficient constraint information acquired from the observation of a small number of user actions to disambiguate among alternative interpretations. A key problem in the design of such a plan recognizer is how to rapidly and accurately reduce the number of active plan interpretations in order for the system to be efficient and provide the most assistance to the user.

This research was sponsored, in part, by the External Research Program of Digital Equipment Corporation and by a grant from Rome Air Development Center.

*Digital Equipment Corporation, Continental Blvd. (MKO1-2G09), Merrimack, New Hampshire, 03054

The descriptions of procedures/plans are specified in a formal language [1], as depicted here:

!PROC!  The plan name and its parameters
        Complete-Purchase (Amount, Items, Vendor)

!DESC!  A textual description of the plan
        When the invoice is received, check with the requester to find out if the goods have been accepted and should be paid for or if they have been returned and so should be canceled.

!IS!    The steps involved and their relative ordering
        Receive-Invoice ; Check-Goods ; (Pay-For-Goods | Cancel-Goods)

!COND!  The constraints on the plan steps and objects
        (Check-Goods.Status = "received") <=> WILL-EXIST Pay-For-Goods
        (Check-Goods.Status = "returned") <=> WILL-EXIST Cancel-Goods
        Receive-Invoice.Items = Check-Goods.Items
        Receive-Invoice.Items = Pay-For-Goods.Items OR Cancel-Goods.Items

        A definition of plan parameters
        Amount = Receive-Invoice.Amount
        Items = Receive-Invoice.Items
        Vendor = Receive-Invoice.Vendor

This plan is one in a hierarchy of plans for purchasing in an office. The plan, Complete-Purchase, consists of three steps, of which the final step is either Pay-For-Goods or Cancel-Goods. Additional COND clause statements constrain the parameters of the subplans (e.g., the item being paid for must be the one referred to in the invoice). While this plan does not make use of it, the IS clause language provides for concurrency with the shuffle operator, which leaves the relative ordering of the plan steps unspecified. The steps of concurrent top-level plans are implicitly shuffled [5].

Figure 1 - POISE Complete-Purchase plan
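As a concrete (hypothetical) illustration, a plan such as Figure 1's might be encoded as a data structure like the one below; POISE's actual plan library format is not given in the paper, so every field name here is illustrative.

from dataclasses import dataclass, field

@dataclass
class Plan:
    name: str
    parameters: list            # e.g. ['Amount', 'Items', 'Vendor']
    steps: object               # an IS-clause expression tree
    constraints: list = field(default_factory=list)   # COND clauses as text

# IS-clause operators: ('seq', ...) for ordering, ('alt', ...) for '|'.
complete_purchase = Plan(
    name='Complete-Purchase',
    parameters=['Amount', 'Items', 'Vendor'],
    steps=('seq', 'Receive-Invoice', 'Check-Goods',
           ('alt', 'Pay-For-Goods', 'Cancel-Goods')),
    constraints=[
        '(Check-Goods.Status = "received") <=> WILL-EXIST Pay-For-Goods',
        '(Check-Goods.Status = "returned") <=> WILL-EXIST Cancel-Goods',
        'Receive-Invoice.Items = Check-Goods.Items',
    ])

def first_steps(expr):
    """Which steps can syntactically start this IS expression?
    (The kind of precomputed table the monitor's "can STart"
    check, described in section II, would consult.)"""
    if isinstance(expr, str):
        return {expr}
    op, *args = expr
    if op == 'seq':
        return first_steps(args[0])
    return set().union(*(first_steps(a) for a in args))   # 'alt'

print(first_steps(complete_purchase.steps))   # {'Receive-Invoice'}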
Our solution to this problem has been to develop a plan recognition architecture in which application-specific heuristic knowledge can supplement the constraint information in the plans. This heuristic knowledge is used by the focus-of-control strategy to quickly focus the search to a small set of plausible interpretations from the very large set of possible interpretations. The heuristics deal with the relative likelihood of alternative plans, the likelihood that plan steps are shared, and the likelihood of continuing an existing plan versus starting a new plan.

The heuristic knowledge has been formalized for use in a reason maintenance system [4]. A formal system for representing heuristic assumptions has advantages for plan recognition systems. It becomes possible to reason about the assumptions behind the current state of the interpretation and why they were made. When new information is acquired which contradicts the current interpretation, the system can use this reasoning ability to recognize that the user has made an error or that the system has made an interpretation error. If an interpretation error has been made, an intelligent backtracking scheme can be used to decide what assumptions led to the invalid interpretation, how to undo these assumptions, and how to integrate the new information. This approach has the added benefit of allowing the system to explain to the user why it believes the user is carrying out a particular plan.

We believe that this approach not only applies to plan recognition, but to complex interpretation systems in general, for it addresses issues which must be faced by any such system:

- The ability to exploit heuristic knowledge for control.
- The use of an intelligent focus-of-control strategy which can reason about context and the relationships between competing and cooperating interpretations when integrating new information.
- Adaptability to different applications and environments with changes to the heuristic control knowledge alone (i.e., without requiring changes to the underlying reasoning system).
- The use of an intelligent backtracking scheme.
- The capability of explaining its reasoning to users.

The remainder of the paper is organized into two sections.
Section II discusses the plan recognition architecture and section III shows how the focus-of-attention strategy represents and uses the application-specific heuristic information to restrict the search.

II PLAN RECOGNITION ARCHITECTURE

The plan recognition architecture consists of five components, as depicted and explained in Figure 2. This architecture implements the basic interpretation machinery which checks the syntactic and semantic validity of possible plan instantiations, i.e., plan step temporal orderings and attribute constraints. It also permits use of heuristic focus-of-control by providing a context mechanism. This allows interpretations which were previously considered unlikely to be recovered if new information invalidates the current likely interpretations.

When the monitor tracks a user action, it initiates the recognition process by attempting to integrate the user action into existing interpretations as follows:

- An instantiation of the primitive plan which represents the user action is placed on the instantiation blackboard. If a plan of this type was expected by any interpretation entries in the focus set, the monitor expands the predictions from the higher-level instantiation towards the instantiation of the action. This generates a hierarchy of instantiations on the prediction blackboard.
- When the predictions reach the level of the new instantiation, constraint values are checked. If the constraints are satisfied, the monitor integrates the user action instantiation into the higher-level plan instantiations. This is done by "abstracting" from the action instantiation up to the plan level of the predicting instantiation. The "abstraction" process creates plan instantiations and propagates constraint values. The new and the predicting instantiation structures are copied (this copy permits constraint values to be propagated and then easily retracted if the system decides to backtrack, i.e., revise previous decisions about which higher-level structure an instantiation is part of) and merged into a single structure which represents a new interpretation. The syntactically and semantically valid continuations of existing interpretations are
The partial pIan instantiations xseg=;he current best interpretations are listed in . l PLAN LIBRARY - contains data structures describing the plans. The monitor makes use of the temporal specifications of the subpIans to predict the next steps in the plans and makes use of the constraint specifications when propagating attribute values between plan instantiations. 0 SEMANTIC DATABASE - maintains a representation of the state of objects in the users world. Figure 2 - Plan Recognition Architecture 2 A full discussion of the constraint propagation mechanism is contained in [7]. l The monitor also checks to see if the user action could be the start of a new activity. It scans the plan library to determine whether or not the user action could syntactically start a new high-level plan. l If a new plan could be started, the monitor “abstracts” from the action instantiation to higher-level plans, checking constraints and propagating constraint values until a top-level plan is reached. The plans which the user action can syntactically and semantically start are posted to the focusing system as “can STart” facts (see section III). To better understand how the basic plan recognition system and heuristic focusing interact, consider the following example. Assume that the interface is monitoring a purchase activity in progress. The state of the focus set and instantiation blackboard are as depicted in Figure 3. The pm Complete-Purchase (see Figure l), has as its first step, Receive-Invoice, whose only sub-step is a user action represented by the primitive plan, Receive-Information. The next step of Complete Purchase is Check-Goods whose only sub-step is the plan, Request-Information. Request-Information consists of the primitive plan, Send-Information, followed by the primitive plan, Receive-Information. The partial interpretation of a Complete-Purchase plan is on the instantiation blackboard. The focus set indicates that this Complete-Purchase plan instantiation is the most likely current interpretation of the actions seen so far. A Send-Information action has just occurred and its plan has been instantiated on the instantiation blackboard. This triggered the creation of an agenda entry, AEO9, which, if executed, would generate the next level of abstraction for the given action (Send-Information.7). The sequence of actions leading up to this point has triggered the creation of other agenda entries some of which are pictured in Figure 3. Note that most agenda entries will not be executed if the focuser is correctly tracking user actions. The monitor compares the instantiation, Send-Information.7, to the types of actions expected by the partial interpretations in the focus set. The monitor considers only those interpretations for the action which were expected by focus set interpretation entries. Through the use of precomputed tables, the monitor detects a match between the plan type, 44 Send-Information, and starting sub-step plan types of the plan, Check-Goods, which is expected next. The focus set entry also contains references to actions on the monitor’s agenda which could be executed to generate the appropriate predictions for Check-Goods. The monitor instantiates those predictions on the prediction blackboard, removing the actions from the agenda. Constraint values are propagated down through the prediction instantiations shown in Figure 4. When the level of predictions meets the level of the instantiation representing the user action, constraint values are compared. 
In this case, Send-Information.19 has compatible parameter values with Send-Information.7. The action meets the expectation of the partial interpretation in focus. This example shows how focusing helps control the potentially explosive set of interpretations of primitive plans such as Send-Information by limiting the interpretations which are considered.

In Figure 5, the action, Send-Information.7, has been integrated into the partial interpretation. The abstraction process has created plan instantiations and propagated constraint values up to the level of the original prediction. Note that in Figure 4, the top instantiation in the interpretation structure is Complete-Purchase.6. The new instance of Complete-Purchase (Complete-Purchase.25) was copied from Complete-Purchase.6 when Check-Goods.24 was integrated. As previously noted, this copy operation permits the system to revise its decision that Check-Goods.24 should be part of Complete-Purchase.6. Focusing also eliminates the previous interpretation of Receive-Information.1, Complete-Purchase.6, from further consideration. While it might still be a valid interpretation (Send-Information.7 should really be interpreted differently or could be an error by the user), the focusing heuristics make it less likely. If the focus set needs to be revised because later actions cannot be interpreted within the existing interpretation structure, the interface will reuse these "discarded" focus entries to pursue interpretations previously thought to be "unlikely."

[Figure 3: the focus set (current best: Complete-Purchase.6; next step: Check-Goods; agenda entry AE06), the monitor agenda, and the instantiation blackboard before Send-Information.7 is interpreted]

[Figure 4: the instantiation blackboard, prediction blackboard (predictions expanded from Check-Goods.16 down to Send-Information.19), and the agenda]

[Figure 5: the final focus set (current best: Complete-Purchase.25; next step: Request-Information.21; agenda entry AE11), instantiation blackboard, and prediction blackboard after Send-Information.7 is integrated]

III FOCUSING

Focusing in POISE is a heuristic control mechanism which limits the interpretations of the user actions which the system considers to those interpretations deemed most likely. Heuristics about the likelihood of user actions are of three types:

1.) Application-specific - Derived from observations of the application environment. For example, "a single action is most likely to be taken to satisfy some part of a single goal." That is, sharing a single low-level plan among several high-level plans is unlikely. Also, "initiation of a new top-level plan is a less likely reason for executing some action than completing some plan already in progress." Since these heuristics are derived from a particular application (e.g., office systems), their relative importance (or usefulness) will depend upon the application. Other applications may have different characteristics.
2.) Plan-specific - The likelihood of specific actions being taken to satisfy particular goals (i.e., as steps of particular high-level plans) may be specified as part of the plans. For example, "action A is more likely to be part of plan X than plan Y."

3.) User Goals - The user may explicitly state goals (i.e., high-level plans with some or all of their parameters filled in) to be accomplished. Interpretations which match these explicit user goals may be considered extremely likely.

The focusing strategy uses these heuristics to make assumptions about the most "reasonable" interpretations to be expanded. The heuristics represent a form of nonmonotonic reasoning known as reasoning by default. A reason maintenance system is used for recording and maintaining interpretation assumptions along with their justifications. Assumptions are retracted with a dependency-directed backtracking scheme which identifies incorrect assumptions and proposes better assumptions. Assumptions can be retracted without having to undo unrelated interpretation work because the effects of changing a particular assumption can be inferred.

In order to formalize the heuristic knowledge, the facts and assumptions are represented as follows:

ST(k,X) - User action k can STart plan X.
C(k,Xu) - User action k can Continue plan instantiation Xu.
PE(k,Xu) - A Possible Explanation for user action k is as part of plan instantiation Xu.
SH(k,Xu,Yv) - User action k is SHared by plan instantiations Xu and Yv (i.e., the action fulfills parts of two goals).
GTR(k,Xu,Yv) - User action k has a GreaTeR likelihood of fulfilling part of plan instantiation Xu than of plan instantiation Yv.
MLE(w,k,Xu) - A Most Likely Explanation for user action k in "state" w (i.e., after considering the sequence of actions, w) is that it is part of plan instantiation Xu.

The interpretation entries in a focus set are the plan instantiations in the union of the MLEs in a particular state. A formal definition of Most Likely Explanation:

IF PE(k,Xu) AND NOT [PE(k,Yv) AND GTR(k,Yv,Xu)]
THEN MLE(w,k,Xu)

where k stands for a user action, u, v, and w are sequences of user actions, X and Y are high-level plans, and plan instantiations are represented as Xu for the high-level plan underway (X) and the actions which partially fulfill it (u).
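To make the default character of this definition concrete, here is a small illustrative sketch (ours, not POISE's reason maintenance implementation) that computes MLE facts from PE facts and GTR assumptions; heuristic H2, formalized just below, supplies the GTR assumptions, and the tuple encoding of facts is assumed for illustration only.

def most_likely_explanations(pe, gtr):
    """pe: set of (action, instantiation) Possible Explanations.
    gtr: set of (action, better, worse) likelihood assumptions.
    An explanation is Most Likely unless some rival is assumed GreaTeR."""
    mle = set()
    for (k, xu) in pe:
        rivals = [yv for (k2, yv) in pe if k2 == k and yv != xu]
        if not any((k, yv, xu) in gtr for yv in rivals):
            mle.add((k, xu))
    return mle

def apply_h2(continues, starts, shared):
    """Heuristic H2 as a default rule, ignoring the CONSISTENT check of
    the full formalization: a continuation outranks a new plan start."""
    gtr = set()
    for (k, xuk) in continues:          # PE facts arising via "can Continue"
        for (k2, yk) in starts:         # PE facts arising via "can STart"
            if k2 == k and (k, xuk, yk) not in shared:
                gtr.add((k, xuk, yk))
    return gtr

# The paper's example after actions a, b (grammar X = a b d..., Y = b c...):
pe = {('b', 'Xab'), ('b', 'Yb')}
gtr = apply_h2(continues={('b', 'Xab')}, starts={('b', 'Yb')}, shared=set())
print(most_likely_explanations(pe, gtr))   # {('b', 'Xab')}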
The heuristics are formalized as inference rules over these facts and assumptions. Three of the application-specific heuristics developed for the office application are:

H1 - A single user action is most likely to be taken to satisfy some part of a single goal, so it is unlikely that the input action be shared by multiple high-level plans. This is implemented by stating that if no explicit assumption has been made that an input is shared by multiple interpretations, assume that it has not been shared:

IF PE(k,Xu) AND PE(k,Yv) AND CONSISTENT(NOT SH(k,Xu,Yv))
THEN NOT SH(k,Xu,Yv).

H2 - Initiation of a new top-level plan is a less likely reason for executing an action than completing some plan already in progress:

IF C(k,Xu) AND PE(k,Xuk) AND ST(k,Y) AND PE(k,Yk) AND NOT SH(k,Xuk,Yk) AND CONSISTENT(GTR(k,Xuk,Yk))
THEN GTR(k,Xuk,Yk).

H3 - Of two Possible Explanations for an input action, if one is assumed to be a Most Likely Explanation for a later user action then it has a GReaTer likelihood of being the Most Likely Explanation for this input:

IF PE(k,Xukj) AND PE(k,Yvk) AND NOT SH(k,Xukj,Yvk) AND MLE(w,j,Xukj) AND CONSISTENT(GTR(k,Xukj,Yvk))
THEN GTR(k,Xukj,Yvk).

(See [8] for an explanation of the nonmonotonic modal operator CONSISTENT. Implicitly, w includes k and later user actions, and H3 is a slightly simplified version of the actual rule.)

The focusing algorithm is as follows:

- Use the "can Continue" and "can STart" facts posted by the monitor during the basic interpretation process (see section II) to post "Possible Explanation" assumptions for the user action.
- If there are no "Possible Explanations" for the action, either an interpretation error or a user error has occurred. Backtrack through the assumptions which have been made to find one which could result in an alternate interpretation (previously disregarded because it seemed unlikely) which would explain the current action. Retract this assumption, revise the interpretation of the previous actions, and repeat the interpretation process for the current action.
- If an action has more than one "Possible Explanation," apply the heuristic rules to generate relative likelihood assumptions.
- Post the resulting "Most Likely Explanations" for the user action and propagate the results of this latest interpretation back to earlier interpretations.

To understand the focusing algorithm more fully, an example is given in Figure 6. The example considers only the syntactic interpretation of actions, ignoring constraint information. Since there is only one possible interpretation for the first user action, a, the focusing process is straightforward. The "Most Likely Explanation" for action a (see fact 3) is the only interpretation in focus. The second user action, b, can be interpreted as continuing the plan begun by action a or as the start of a new plan (facts 5 and 7). One of the heuristics formalized above, that continuing an interpretation is more likely than starting a new plan (H2), results in interpreting this action as a continuation of plan X rather than the start of plan Y (see facts 9 and 10). This interpretation of action b results in a new "Possible Explanation" for action a being posted (fact 11). This must be done because, although b was assumed to continue the partial plan including a, it is possible that a is meant to be "SHared" by two X plans (i.e., Xab and Xab' where b' is another b action which has not yet occurred).

When the third user action, c, is recognized, there is no way to interpret it within the existing interpretation structure (i.e., there is no fact of the form C(c,Zu) where Zu is some partial plan instantiation). This causes the system to backtrack, retract assumption 9 generated from the heuristic that continuing an interpretation is more likely than starting a new plan, and push forward the interpretation of plan Y started by action b (fact 14). User action c can now be interpreted as a continuation of plan Y (facts 15-17). Note that the interpretation of action c causes actions a and b to be reinterpreted (facts 22 and 23) because of heuristics H1 and H3.

IV STATUS AND FUTURE RESEARCH

The plan recognition system described is currently part of the POISE system. A special-purpose reason maintenance system was built. The additional knowledge provided by the focusing heuristics has been effective in the office automation application being studied.
We are planning to explore its effectiveness in applications which are not as tightly constrained as the office.

We are currently developing a more intelligent backtracking scheme. The current scheme locates the most recent assumption whose retraction allows the current action to be explained. A more intelligent approach involves reasoning about the "quality" of interpretation assumptions and the resulting interpretations in order to identify the "best" assumption to retract, or to recognize that the user has made an error.

Another direction in which we are extending the current approach is the use of focusing heuristics which exploit the information about objects that is contained in the semantic database. These heuristics include knowledge about the use of objects in plans, e.g., the likelihood that plans share objects or the likelihood of creating new objects.

ACKNOWLEDGEMENTS

Bruce Croft and Larry Lefkowitz have made contributions to the design and implementation of the plan recognition architecture as part of the POISE project.

REFERENCES

[1] Bates, P., J. Wileden, and V. Lesser, "Event Definition Language: An Aid to Monitoring and Debugging Complex Software Systems," Technical Report 81-17, Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts, 1981.

[2] Croft, W. B. and L. Lefkowitz, "An Office Procedure Formalism Used for an Intelligent Interface," Technical Report 82-4, Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts, 1982.
The justifications for assumptions made using the three heuristics formalized in the paper are Thart assumptions c K ’ en in braces ({ )) below the assumptions with the three heuristics denoted as Hl-H3. $ed durin backtracking are denoted with l k C and ST facts arc posted by the monitor as described in stct~on II. t-h e various bookeeping ruIes. remainder of the facts rcsuIt from the application of Figure 6 - Focusing in a Sample Grammar 48
A SELF-MODIFYING THEOREM PROVER

Cynthia A. Brown
GTE Laboratories Incorporated
40 Sylvan Road
Waltham, Massachusetts 02254

Abstract: Theorem provers can be viewed as containing declarative knowledge (in the form of axioms and lemmas) and procedural knowledge (in the form of an algorithm for proving theorems). Sometimes, as in the case of commutative laws in a Knuth-Bendix prover, it is appropriate or necessary to transfer knowledge from one category to the other. We describe a theorem proving system that independently recognizes opportunities for such transfers and performs them dynamically.

Theorem proving algorithms

Theorem provers can be divided into two general classes: those that operate without human intervention to prove straightforward consequences of a set of axioms, and those that serve as a mathematician's assistant in the search for a proof of a mathematically significant theorem. The first type of prover is needed for program verification and artificial intelligence applications, where the necessity of human intervention would severely disrupt the intended application. The second type of theorem prover usually contains one or more of the first type. It is the first type of prover that we are concerned with.

Our model of a theorem prover is thus an algorithm that establishes the truth of a statement by showing that it is a logical consequence of a given set of axioms. In the process of establishing that truth, the theorem prover may obtain intermediate results that play the role of lemmas. The power and efficiency of the theorem prover depend on the algorithm that is employed (and there is often a trade-off between these two characteristics of the system). Two major classes of theorem proving algorithms are the resolution-based methods [ROB65,OVE75,BOY71] and Knuth-Bendix type methods [KNU70,HUE80,HUE80b,JEA80,MUS80,PET81,STI81]. There are several ambitious theorem provers of the second type that incorporate one or the other of these methods; for example, the Affirm system [THO79] includes a Knuth-Bendix prover, and the ITP theorem prover [MCC76,OVE75,LUS84] uses hyperresolution, an efficient form of resolution, along with other techniques. Both the Knuth-Bendix algorithm and resolution methods can also be used as the basis of an algorithmic theorem prover. The Knuth-Bendix method is usually more efficient in the cases where it applies, but resolution is much more general.

Declarative versus procedural knowledge in theorem provers

Both of these basic approaches to theorem proving can be viewed as, in a very general sense, expert systems: they operate on a data base of explicit knowledge (the axioms) using an inference engine (the theorem-proving algorithm) to solve a problem. This analogy cannot be pushed too far, but it does lead to some interesting questions about the structure of theorem-proving systems. In particular, the debate about the role of declarative versus procedural knowledge has a very real application in the theorem-proving field. Certain information in a theorem proving system can be represented explicitly (as axioms) or incorporated into the inference mechanism. There are at least two important examples of this phenomenon. One is the use of paramodulation [GLO80,OVE75,WOS70,WOS80] in resolution-type theorem provers. The basic resolution algorithm makes no special provision for the use of the equality predicate in the statement of axioms. To introduce equality it is necessary to provide axioms that specify its properties.
Chang and Lee [CHA73] give a set of ten such axioms, which establish that equality is reflexive, symmetric, and transitive and allow equals to be substituted for equals in any expression. To use such axioms in the proof of a theorem would obviously lead to an extremely inefficient proof. The alternative is to build into the theorem-proving algorithm the knowledge required to handle equality appropriately. The inference rule that is used for this purpose is called paramodulation. Adding paramodulation to a resolution theorem prover results in an ability to prove theorems about systems that contain equality in a natural, efficient way.

Another example of the same phenomenon can be found in Knuth-Bendix theorem proving systems. These systems operate by deriving all the interesting theorems implied by a set of axioms. The axioms and theorems are expressed as rewrite rules. When the algorithm is successful, the result is an extremely efficient method of proving any additional theorems in that system. Unfortunately, the algorithm can fail. The rewriting process on which it depends requires a partial order on terms of the system such that each rewrite results in a term that is smaller under the partial order than the one from which it was derived. It is therefore, for example, impossible to include axioms that express the commutativity of an operator, since there is no way to orient a commutative law consistent with a partial order.

The solution is to deal with the commutative property of operators on the level of the rewriting algorithm, rather than treating it explicitly as a rewriting rule. The fact that an operator is commutative can be recorded; then, whenever the algorithm checks whether a rule involving that operator can be applied, it tries both ways of ordering the operands. For operators that are both commutative and associative, a more elaborate scheme is required. In this case a commutative and associative unification algorithm, which checks all possible ways of matching two expressions that involve nested applications of a commutative and associative operator, is used. Such algorithms are surprisingly complex: the problem is equivalent to finding certain partitions of integers [STI81]. Nevertheless, their use allows the application of Knuth-Bendix methods to a much broader range of systems than would otherwise be possible.

To handle commutativity with a special unification algorithm is a necessity in Knuth-Bendix systems; to deal with equality via paramodulation or a similar approach is a practical necessity in resolution systems. There are also cases of operators or properties that can be handled either declaratively or procedurally. An example is the idempotent property of an associative and commutative operator; there is an associative-commutative-idempotent unification algorithm, and idempotency can also be dealt with by explicit rewriting. Some investigators are attempting to develop special unification algorithms to deal with other common properties of operators in the framework of Knuth-Bendix systems [JOU83,JOU82,HUL80].

Dynamic recognition of commutative and associative laws

A Knuth-Bendix theorem prover starts with a set of axioms and derives rewrite rules from them. If the prover is able to deal with commutative operators, then it will notice whether the main operator in an expression is commutative or not and apply the appropriate unification algorithm.
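The prover described here is written in Prolog; to make the "try both operand orders" idea concrete, the following Python sketch (ours; all names are illustrative, not the system's actual code) matches a rule's left-hand side against a term while consulting a set of operators recorded as commutative. It sketches commutative matching only, not the much more elaborate associative-commutative unification of [STI81].

```python
# Illustrative sketch: terms are tuples ('op', arg1, ...), variables are
# strings, constants are zero-argument tuples like ('0',).

COMMUTATIVE = set()          # operators discovered/declared commutative

def is_var(t):
    return isinstance(t, str)

def match(pattern, term, subst):
    """Return the list of substitutions extending `subst` that make
    `pattern` equal to `term`; return [] if there are none."""
    if is_var(pattern):
        if pattern in subst:
            return [subst] if subst[pattern] == term else []
        s = dict(subst); s[pattern] = term
        return [s]
    if is_var(term) or pattern[0] != term[0] or len(pattern) != len(term):
        return []
    op, p_args, t_args = pattern[0], pattern[1:], term[1:]
    orders = [t_args]
    if op in COMMUTATIVE and len(t_args) == 2:
        orders.append((t_args[1], t_args[0]))   # also try the swapped order
    results = []
    for args in orders:
        partial = [subst]
        for p, t in zip(p_args, args):
            partial = [s2 for s in partial for s2 in match(p, t, s)]
        results.extend(partial)
    return results
```

With COMMUTATIVE = {'+'}, the rule x+0 -> x also rewrites 0+a, which is exactly the procedural treatment of commutativity described above.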
Ordinarily the prover is told which operators are commutative at the start of its run. However, it is possible for the stated properties of an operator to imply that it is commutative without the commutative axiom being given explicitly. For example, here are the group axioms, written in additive notation:

1. 0+x = x
2. -x+x = 0
3. (x+y)+z = x+(y+z).

These axioms can be completed by the Knuth-Bendix procedure to obtain a set of ten rewrite rules sufficient to decide whether any two expressions written in this system are equivalent. If we start with an additional axiom,

4. x+x = 0,

the system will derive the fact that + is a commutative operation. At that point it would be necessary to restart the process, this time with the initial stipulation that + is commutative. The necessity of restarting the system is annoying and is contrary to the goal of avoiding the necessity of human intervention in the theorem proving process. (This goal is extremely important for practical program verification and artificial intelligence systems.)

Fortunately, a commutative law is easy to recognize. It is straightforward to have the prover examine any unorderable rewrite rule it has obtained (either as an initial axiom or during the proving process) to see if it is a commutative law. If a commutative law is discovered, that fact can be recorded and all future applications of that operator can be done using the commutative unification principle. If the operator is also associative, the more powerful associative-commutative unification can be used. It is also helpful to check the already derived rewrite rules to see if any can be simplified using the fact that the operator is commutative.

One important unanswered question is the extent to which previous work must be redone when an operator is discovered to be commutative. All previously discovered rules must still be valid, but it is possible that some further rules may be discovered by reconsidering the earlier results. Until this can be resolved theoretically, the system plays it safe by regenerating all relevant potential rules using the fact that the operator is commutative, and checking that they reduce to a common form.

Implementation

A Knuth-Bendix theorem prover embodying these ideas has been implemented in Prolog. At present it is able to recognize a commutative law and use the commutative unification algorithm on operators for which such laws have been discovered. It also recognizes explicit associative laws, and will use commutative-associative unification for operators that have both properties. If an operator that is known to be associative is found to also be commutative, the associative law is removed from the list of explicit rewrite rules belonging to the system. (No practical unification algorithm for the associative property alone is known.)

As an example, consider the group theory axioms given above. The system completes the original three axioms by deriving rewrite rules in the following order:

4. -x+(x+y) = y
5. -0+x = x
6. --x+0 = x
7. --0+x = x
8. -0 = 0
9. --x+y = x+y
10. x+0 = x
11. x+(-x) = 0
12. --x = x
13. x+(-x+y) = y
14. x+(y-(x+y)) = 0
15. -(x+y)+(x+(y+z)) = z
16. x+(y+(-(x+y)+z)) = z
17. x+(y+(z-(x+(y+z)))) = 0
18. x-(y+x) = -y
19. x+(y-(z+(x+y))) = -z
20. x+(-(y+x)+z) = -y+z
21. -(x+y) = -y+(-x)

Rules 5, 6, 7, 9, 14, 15, 16, 17, 18, 19, and 20 are eliminated during the process by being simplified using later rules; the final set has the remaining ten elements.
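The recognition test itself is purely syntactic: an unorderable equation is a commutative law exactly when both sides apply the same operator to the same two distinct variables in opposite orders. A minimal sketch of the idea (ours, in Python, reusing is_var and COMMUTATIVE from the sketch above; the system's actual Prolog code may differ):

```python
def is_commutative_law(lhs, rhs):
    """Return the operator if the equation lhs = rhs has the shape
    f(x, y) = f(y, x) for distinct variables x, y; otherwise None."""
    if is_var(lhs) or is_var(rhs):
        return None
    if lhs[0] != rhs[0] or len(lhs) != 3 or len(rhs) != 3:
        return None
    x, y = lhs[1], lhs[2]
    if is_var(x) and is_var(y) and x != y and rhs[1] == y and rhs[2] == x:
        return lhs[0]
    return None

# When completion derives an equation it cannot orient, e.g. rule 19
# above, the prover can record the property instead of failing:
op = is_commutative_law(('+', 'x', 'y'), ('+', 'y', 'x'))
if op is not None:
    COMMUTATIVE.add(op)    # all later matching tries both operand orders
```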
It is interesting to contrast this derivation with the one obtained by starting with the three group theory axioms plus the fourth axiom given above, x+x = 0, which states that every element is its own inverse. The following rules are discovered in order:

5. -x+(x+y) = y
6. x+(y+(x+y)) = 0
7. x+(x+y) = y
8. -0+x = x
9. -0 = 0
10. --x+0 = x
11. -x+0 = x
12. -x = x
13. x+0 = x
14. x+(y+(x+(y+z))) = z
15. x+(y+(z+(x+(y+z)))) = 0
16. x+(y+x) = y
17. x+(y+(z+(x+y))) = z
18. x+(y+(x+z)) = y+z
19. x+y = y+x

At this point, the system recognizes that + is commutative and associative. Rule 3 is removed from the rule database, and the shift to associative-commutative unification for applications of the + operator is made. The discovery of rule 19, like the discovery of rule 21 in the previous example, causes many of the earlier rules to disappear. For example, it is no longer necessary to have rules showing that 0 is both a left and a right identity. This example demonstrates the ability of the theorem prover to modify itself in response to its discoveries. The extra checking involved consumes a negligible amount of time and greatly expands the capacity of the prover to handle sets of axioms without human intervention.
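Put as code, the self-modification step just demonstrated might look like the following sketch (ours, continuing the earlier Python illustrations; it reuses is_var and COMMUTATIVE, and it assumes the associative rule is stored with canonical variable names, which a real system would not need to).

```python
ASSOCIATIVE = set()   # operators with an explicit associative rewrite rule

def mentions(term, op):
    """True if `term` contains the operator `op` anywhere."""
    if is_var(term):
        return False
    return term[0] == op or any(mentions(a, op) for a in term[1:])

def on_commutative_discovery(op, rules):
    """Record that `op` is commutative. If it is also associative, retire
    its explicit associative rule, since associative-commutative
    unification now handles that law procedurally. Return the rules that
    mention `op`; the completion loop regenerates and re-reduces these
    ("playing it safe") under the new matching regime."""
    COMMUTATIVE.add(op)
    if op in ASSOCIATIVE:
        assoc = ((op, (op, 'x', 'y'), 'z'), (op, 'x', (op, 'y', 'z')))
        rules[:] = [r for r in rules if r != assoc]
    return [r for r in rules if mentions(r[0], op) or mentions(r[1], op)]
```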
Laurent, "Adding Dynamic Paramodulation to Rewrite Algorithms", in 5th Conference on Automated Deduction, W. Bibel and R. Kowalski, eds., Springer Verlag LNCS 87, Berlin 1980, pp.195-207. [ GOG78] [ GOG80] [GUT781 [HER301 [HUE801 Goguen, J. A., J. W. Thatcher, and E. G. Wagner, "An Initial Algebra Approach to the Specification, Correctness, and Implementation of Abstract Data Types", in Current Trends in Programming Methodology, v. 4, R. Yeh, ed., Prentice- Hall (1978), pp. 80-149. Goguen, J. A., "How to prove algebraic inductive hypotheses without induction, with applications to the correctness of data We implementation", in 5th Conference on Automated Deduction, W. Bibel and R. Kowalski, eds., Springer Verlag LNCS 87, Berlin 1980, pp.356-373. Guttag, J., E. Horowitz, and D. Musser, "Abstract Data Types and Software Validation", CACM v. 21, no. 12, 1978, pp.1048-1063. Herbrand, J., "Investigations in proof theory: the properties of propositions", in From Frege to Goedel: A Source Book in Mathematical Logic, J. van Heijenoort , ed., Harvard University Press, Cambridge, Mass. Huet, G., and J. M. Hullot, "Proofs by Induction in Equational Theories with Constructors", Twenty-first Annual Symposium on Foundations of Computer Science, IEEE Computer Society, 1980, pp. 96-107. [HUE80b] Huet, G., and D. Oppen, "Equations and Rewrite Rules - A Survey", in Formal Language Theory, R. Book, ed., Academic Press, N.Y., 1980, pp.349 - 405. [HUE811 Huet, G., "A Complete Proof of the Correctness of the Knuth and Bendix Completion Algorithm," JCSS 23, 1981, pp. 11-21. [HUL80] Hullot, J. M., "Canonical Forms and Unification", 5th Conference Automated Deduction, W. Bibel and ? Kowalski, eds., Springer Verlag LNCS 87, Berlin 1980, pp.396-405. [JEA80] Jeanrond, H. J., "Deciding Unique Termination of Permutative Rewriting Systems: Choose Your Term Algebra Carefully", in 5th Conference Automated Deduction, W. Bibel and "R': Kowalski, eds., Springer Verlag LNCS 87, Berlin 1980, pp.335-355. [JOUST] Jouannaud, J. P., "Confluent and Coherent Equational Term Rewriting Systems: Application to Proofs in Abstract Data Types," Technical Report 83-R-005, Centre de Recherche en Informatique de Nancy, 1983. [JOUST] Jouannaud, J. P., C. Kirchner, and H. Kirchner, "Incremental Unification in Equational Theories", Proceedings of the Twentieth Annual Allerton Conference, [KNU70] [LUS84] [MCCOY] [MUSHY] [OVE75] [PET811 [ROB651 [ROB65b] [SIEVE] [sTI~I] [TH079] [WOS70] [WOSSO] 1982, pp. 396-405. Knuth, D., and P. B. Bendix, "Simple Word Problems in Universal Algebras", in COMPUTATIONAL PROBLEMS IN ABSTRACT ALGEBRA, J. LEECH, PP. 263-279. PERGAMON PRESS 1970, Lusk, E.L., and Overbeek, R. A., "A Portable Environment for Research ' Automated Reasoningtl, in 7th Conferen:: on Automated Deduction, R. Shostak, ed. I Springer Verlag LNCS 170, Berlin 1984, pp.l-42. McCharen, J. D., R. A. Overbeek, and L. was, "Problems and experiments for and with automated theorem-proving programs", IEEE Trans. Cornput., V. 25, No. 8, 1976, pp. 773-782. Musser, D. L., "On proving inductive properties of abstract data types", Proc. Seventh Annual ACM Symp. on POPL, 1980, pp. 154-162. Overbeek, R. A., "An implementation of hyperresolution", Comput. Math. Appl. 1, 1975, pp.201-214. Peterson, G. E., and M. E. Stickel, "Complete sets of reductions for some equational theories", JACM v. 28, no. 2, 1981, pp. 233-264. Robinson, J. A., "A machine-oriented logic based on the resolution principle", JACM 12(l), 1965, pp. 23-41. Robinson, J. 
A., "Automatic deduction with hyper-resolution", Internat. J. Comput. Math., 1, pp. 227-234. Siekmann, J. H., "Universal Unification", in 7th Conference on Automated Deduction, R. Shostak, ed., Springer Verlag LNCS 170, Berlin 1984, pp.l-42. Stickel, M. E., "A unification algorithm for associative-commutative functions", JACM, v. 28, no.3, 1981, pp.423-434. Thompson, D. H., ed., AFFIRM Reference Manual, USC Information Sciences Institute, 1979. wos, L., and G. A. Robinson, "Paramodulation and Set of Support", Proc . Sump- Automatic Demonstration , Versailles, France, 1968, Springer-Verlag, New York (1970), pp.276-310. Wos, L., R. Overbeek, and L. Henschen, "Hyperparamodulation: A Refinement of Paramodulation", in 5th Conference on Automated Deduction, W. 8ibel and R. Kowalski, eds., Springer Verlag LNCS 87, Berlin 1980, pp.208-219. 41
Generalization Heuristics for Theorems Related to Recursively Defined Functions

S. Kamal Abdali
Computer Research Lab
Tektronix, Inc.
P.O. Box 500
Beaverton, Oregon 97077

Jan Vytopil
BSO-AT
P.O. Box 8348
3503 RH Utrecht
The Netherlands

ABSTRACT

This paper is concerned with the problem of generalizing theorems about recursively defined functions, so as to make these theorems amenable to proof by induction. Some generalization heuristics are presented for certain special forms of theorems about functions specified by certain recursive schemas. The heuristics are based upon the analysis of computational sequences associated with the schemas. If applicable, the heuristics produce generalizations that are guaranteed to be theorems.

INTRODUCTION

This paper deals with the generalization of theorems arising from the analysis of recursive definitions. The theorems of concern here express the properties of functions computed by instances of certain recursive program schemas. To prove these theorems, one usually needs to invoke some form of induction. However, one often encounters cases when induction fails. That is, it turns out that, as originally posed, a given theorem is too weak to be usable in the induction hypothesis for carrying out the induction step. In such cases, it is of interest to find a more general theorem for which induction would succeed.

Manna and Waldinger [8] describe the theorem generalization problem, and mention the heuristic of replacing a constant by a variable and a variable by another variable. Aubin [1, 2] discusses in detail the method of attempting to replace some constant with an expression describing the possible values that the corresponding argument could assume. Another type of heuristics is exemplified by the method of Greif and Waldinger [5], in which the symbolic execution of the program for its first few terms is followed by pattern matching to find a closed form expression which generates the series so obtained. Much has been contributed to the generalization problem by Boyer and Moore [3, 4, 10]. In their work, expressions common to both sides of an equality are replaced with a new variable. This has turned out to be quite a powerful method in actual practice. Still other types of generalization methods are implicit in the work on the synthesis of program loop invariants such as [9, 12, 13], because there is a duality between the problems of finding loop invariants and finding theorem generalizations. Indeed, it is shown in [11] that for certain classes of functions and theorems, these problems are equivalent. Finally, the more recent work on rewrite rule-based methods for proving properties of recursively (equationally) defined functions (e.g., [6]) makes use of unification, which can also be looked upon as a kind of generalization.

Two methods of theorem generalization are presented below. These are applicable to two special recursive schemas, and are based on an analysis of the computational sequence of these schemas. The use of these methods requires the existence (and discovery) of some functions that satisfy certain conditions dependent on the schemas.

NOTATION

To express programs and schemas, we use the notation for recursive schemas given in Manna [7, p. 319], with the following convention for letter usage: u-z for individual variables (input, output, and program variables), c for individual constants, F-H, f-h, and n for function variables, and p, q for predicate variables. Subscripts and primes may also be added to these letters.
In general, these symbols stand for vectors rather than single items. Following Moore [10], we distinguish two classes of program variables: "recursion variables", denoted by unsubscripted y or y subscripted with smaller integers, which are used in predicates (loop termination tests), and "accumulators", denoted by y subscripted with larger integers, which are used in forming results but not used in termination tests.

GENERALIZATION SCHEME I

Suppose we are given a recursive schema:

z = F(x, c), where
F(y1, y2) <= if p(y1) then y2 else F(f(y1), g(y1, y2))    (1)

Further suppose that for a given interpretation of c, p, f, g the schema computes a known function G. That is, for all x for which G(x) is defined, we would like to prove

F(x, c) = G(x)    (2)

We denote by f^i(x) = f(f(...f(x)...)), i > 0, the expression in which there are i successive applications of f. Further, we denote by y1^(i), y2^(i), i >= 0, the values of the variables y1, y2 on the i-th iteration of (1), taking y1^(0), y2^(0) to mean x and c, respectively. If, for a given argument x, the value of F(x, c) is defined, then there must exist an integer k such that each of p(y1^(0)), p(y1^(1)), ..., p(y1^(k)) is false and p(y1^(k+1)) is true, and the following equalities hold:

y1^(i) = f^i(x),
y2^(i) = g(f^(i-1)(x), g(f^(i-2)(x), ..., g(f(x), g(x, c))...)), for 1 <= i <= k+1,

so that

F(x, c) = y2^(k+1) = g(f^k(x), g(f^(k-1)(x), ..., g(f(x), g(x, c))...)).

Note that the depth of recursion, that is, the value of k, depends only on x and not on c. Suppose we can find two functions h1 and h2 with the property

g(u, h1(v, w)) = h1(g(u, v), h2(u, v, w)).    (3)

If we wish to generalize (2) by replacing c with h1(c, z), then the final value of y2 will be

g(f^k(x), g(f^(k-1)(x), ..., g(x, h1(c, z))...)).

Using (3) repeatedly to move h1 outwards across g, we can rewrite this as

h1(y2^(k+1), h2(f^k(x), y2^(k), h2(f^(k-1)(x), y2^(k-1), ..., h2(f^2(x), y2^(2), h2(f(x), g(x, c), h2(x, c, z)))...)))    (4)

The first argument of h1, y2^(k+1), is equal to F(x, c), and the second argument of h1 is the iteration of h2 with the first and second arguments having the same values as they have during the evaluation of F. Using the definition of F, we can define a new function H so that H(x, c, z) equals the second argument of h1 in (4). This H is defined as follows:

H(y1, y2, y3) <= if p(y1) then y3 else H(f(y1), g(y1, y2), h2(y1, y2, y3))

The generalization of a theorem should be an expression (relation) which (a) we believe is in fact a theorem, (b) has the original theorem as an instance, and (c) is easier to prove. We propose

F(x, h1(c, z)) = h1(G(x), H(x, c, z))    (5)

as a generalization of (2). The following result states that condition (a) is satisfied, that is, that (5) is indeed true.

Theorem 1: Let f, g, G be some previously defined functions, and F, H be defined by the schemas

F(y1, y2) <= if p(y1) then y2 else F(f(y1), g(y1, y2))
H(y1, y2, y3) <= if p(y1) then y3 else H(f(y1), g(y1, y2), h2(y1, y2, y3))

If F(x, c) = G(x) and there exist functions h1 and h2 satisfying

g(u, h1(v, w)) = h1(g(u, v), h2(u, v, w))

then F(x, h1(c, z)) = h1(G(x), H(x, c, z)) holds.

This theorem can be proved by first showing that, under its conditions and definitions, it is the case that

F(x, h1(w, z)) = h1(F(x, w), H(x, w, z)).
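Read as programs, the schemas of Theorem 1 are ordinary higher-order recursive functions, and the identity used in its proof can be spot-checked mechanically. The following Python rendering is our illustration, not part of the original development; p, f, g, h1, h2 are parameters, and any instantiation that satisfies condition (3) and terminates should make the check succeed.

```python
# Executable rendering (ours) of the schemas F and H from Theorem 1.

def make_F(p, f, g):
    def F(y1, y2):
        return y2 if p(y1) else F(f(y1), g(y1, y2))
    return F

def make_H(p, f, g, h2):
    def H(y1, y2, y3):
        return y3 if p(y1) else H(f(y1), g(y1, y2), h2(y1, y2, y3))
    return H

def check_theorem1(p, f, g, h1, h2, x, c, z):
    """Spot-check the identity F(x, h1(c, z)) = h1(F(x, c), H(x, c, z))."""
    F = make_F(p, f, g)
    H = make_H(p, f, g, h2)
    return F(x, h1(c, z)) == h1(F(x, c), H(x, c, z))
```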
The condition g(u, h1(v, w)) = h1(g(u, v), h2(u, v, w)) does not guarantee that the original theorem is an instance of the generalized theorem, and in general it is difficult to state how to derive h1 and h2 so that the original theorem is an instance of the more general equation (5). Nevertheless, we can state a sufficient condition for it:

Theorem 2: Under the definitions and conditions of Theorem 1, a sufficient condition for F(x, c) = G(x) to be an instance of F(x, h1(c, z)) = h1(G(x), H(x, c, z)) is that there exist an r such that

h1(u, r) = u and h2(u, v, r) = r    (6)

for all u and v.

Although (6) is not a necessary condition, experience suggests that it is a natural and easily satisfied requirement. Also, although at present there is no systematic way of finding h1 and h2 (if they exist at all), there are often natural candidates for these functions, such as addition, multiplication, or append for h1 and projection functions for h2. Note also that while we have only considered theorems of the type F(x, c) = G(x) for simplicity, the method can be used for some more complicated cases such as F(m(x), n(x)) = G(x).

A more restrictive heuristic than that given by Theorems 1 and 2, but one which is particularly simple to use, is described by the following:

Theorem 3: Let f, g, G be some previously defined functions, and F be defined by the schema

F(y1, y2) <= if p(y1) then y2 else F(f(y1), g(y1, y2)).

If

F(x, c) = G(x)    (7)

and there exists a function h and a constant r such that

g(u, h(v, w)) = h(g(u, v), w)
h(u, r) = u, for all u,

then it is the case that

F(x, h(c, z)) = h(G(x), z).    (8)

Furthermore, (7) is an instance of (8).

Example 1: Let the function rev be defined by

rev(y1, y2) <= if null(y1) then y2 else rev(cdr(y1), cons(car(y1), y2)).

When we try to prove the property

rev(x, nil) = reverse(x)    (9)

by induction on x (reverse being the usual LISP function), we find that the induction step cannot be carried through. To apply the generalization method given above, we observe that

p(y1) = null(y1), f(y1) = cdr(y1), g(y1, y2) = cons(car(y1), y2).

We now look for a function h satisfying g(u, h(v, w)) = h(g(u, v), w), that is,

cons(car(u), h(v, w)) = h(cons(car(u), v), w),

and also, for some r and all u, h(u, r) = u. The simple choice h(u, v) = append(u, v) (with nil for r) satisfies the above conditions. The generalization of property (9) is thus found to be

rev(x, append(nil, z)) = append(reverse(x), z),

that is,

rev(x, z) = append(reverse(x), z).

This property is provable by induction on x with the usual definitions of append and reverse.

Example 2: Let F(y1, y2, y3) be defined by

F(y1, y2, y3) <= if (y1 = 0) then y3 else F(y1 div 2, y2*y2, if odd(y1) then y2*y3 else y3)

Here, y1 is the recursion variable, and y2, y3 are accumulators. We have

f(y1, y2) = <y1 div 2, y2*y2>
g(y1, y2, y3) = if odd(y1) then y2*y3 else y3.

We look for a function h which satisfies g(u, v, h(w, z)) = h(g(u, v, w), z), that is,

if odd(u) then v*h(w, z) else h(w, z) = h(if odd(u) then v*w else w, z).

Also, there must exist an r such that for all u, h(u, r) = u. To satisfy these requirements, we can simply take h(u, v) = u*v (with r = 1). It is now easy to see that the property F(x1, x2, 1) = x2^x1 generalizes to the induction-provable property F(x1, x2, z) = (x2^x1)*z.
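Both examples are easy to check by running the schema instances. In the following sketch (ours), Python lists stand in for LISP lists, so car, cdr, and cons become indexing, slicing, and prepending:

```python
# Spot-checks (ours) of Examples 1 and 2.

def rev(y1, y2):                      # Example 1's rev
    return y2 if not y1 else rev(y1[1:], [y1[0]] + y2)

def F(y1, y2, y3):                    # Example 2's F (binary exponentiation)
    if y1 == 0:
        return y3
    return F(y1 // 2, y2 * y2, y2 * y3 if y1 % 2 else y3)

x, z = [1, 2, 3], [9, 9]
assert rev(x, z) == list(reversed(x)) + z      # rev(x, z) = append(reverse(x), z)
assert F(5, 2, 7) == 2**5 * 7                  # F(x1, x2, z) = (x2^x1)*z
```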
GENERALIZATION SCHEME II

Suppose we have to prove F(x, c, c') = G(x), where

F(y1, y2, y3) <= if (y1 = y2) then y3 else F(y1, f(y2), g(y2, y3))    (10)

To prove this theorem, we need to carry out induction on y2. But this would be impossible because the initial value of y2 is constant. Therefore we must generalize (10) by replacing c with a more general term.

In the previous heuristics, we replaced a constant c with a function h(c, z) and then tried to determine the influence of this change of initial value on the final result. It was easy to see how the change of initial value of an accumulator propagated through the entire computation, because of the limited role played by an accumulator in the function evaluation. It is harder to apply the same strategy to a recursion variable. The initial value of a recursion variable determines the depth of recursion, and at each level of recursion the value of a recursion variable also affects the final result of computation. Consequently, the change in the initial value of a recursion variable has a much more complicated influence on the computation. So the function h must be chosen more carefully, and in fact our choice now is quite limited. A good strategy would be to replace c with an expression describing all possible values that this recursion variable can take on during the computation of F(x, c, c').

Suppose we can derive the values of the recursion variable and the accumulator at the recursion depth z, say h(z) and H(z), respectively. If (10) holds, then F(x, h(z), H(z)) = G(x) would be a good generalization, likely provable by induction on z (that is, on the depth of recursion). To find suitable candidates for h(z) and H(z), we observe that

F(x, c, c') = F(x, f(c), g(c, c')), if c ≠ x,
            = F(x, f^2(c), g(f(c), g(c, c'))), if f(c) ≠ x,
            = ...
            = F(x, f^i(c), g(f^(i-1)(c), g(f^(i-2)(c), ..., g(c, c')...))), if i <= i_min,    (11)

where i_min = min {i | f^i(c) = x}. Thus,

H(i) = g(f^(i-1)(c), g(f^(i-2)(c), ..., g(c, c')...)).

On the other hand, for i <= i_min, we also have

G(f^i(c)) = F(f^i(c), c, c') = F(f^i(c), f^i(c), g(f^(i-1)(c), g(f^(i-2)(c), ..., g(c, c')...)))
          = g(f^(i-1)(c), g(f^(i-2)(c), ..., g(c, c')...)),

so H(i) can simply be replaced by G(f^i(c)). We are thus led to the following

Theorem 4: Let F be defined by (10), G be some previously defined function, and let

i_min = min {i | f^i(c) = x},
h(z) <= if z = 0 then c else f(h(z-1)).

Then

F(x, h(z), G(h(z))) = G(x) for 0 <= z <= i_min    (12)

iff F(x, c, c') = G(x).

Example 3: Let the definition of a function F be given as

F(y1, y2, y3) <= if (y1 = y2) then y3 else F(y1, y2+1, (y2+1)*y3)

We define h by

h(z) <= if z = 0 then 0 else h(z-1)+1,

obtaining, simply, h(z) = z. Also in this case we have

i_min = min {i | i = x} = x.

So we can generalize F(x, 0, 1) = x! into

F(x, z, z!) = x! for 0 <= z <= x.
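Theorem 4's guarantee for this example, that F(x, z, z!) = x! holds at every recursion depth 0 <= z <= x, can be spot-checked directly; the following Python fragment is our illustration:

```python
# Spot-check (ours) of Example 3 and Theorem 4.
from math import factorial

def F(y1, y2, y3):
    return y3 if y1 == y2 else F(y1, y2 + 1, (y2 + 1) * y3)

x = 6
assert all(F(x, z, factorial(z)) == factorial(x) for z in range(x + 1))
```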
In the above discussion, we have used the simplest case of the theorem in (10). Now suppose that the initial value of y2 is not a constant but some function M(x). We can derive the value of the accumulator at the recursion depth z if we assume that

M(h(x, z)) = M(x) for all 0 <= z <= i_min,

where h is defined by

h(y1, y2) <= if (y2 = 0) then M(y1) else f(h(y1, y2-1)).

For example, an M(x) with this property is given by

M(y1) <= if p(y1) then y1 else M(f^-(y1)),

where f^- is the functional inverse of f, that is, f^-(f(y)) = y. Now we can write

M(x) = M(f^(-1)(x)) = M(f^(-2)(x)) = ... = M(f^(-i_min)(x)) = f^(-i_min)(x),
h(x, z) = f^z(f^(-i_min)(x)) = f^(z-i_min)(x), for all 0 <= z <= i_min,
M(h(x, z)) = M(f^(z-i_min)(x)) = f^(-i_min)(x) = M(x).

We can therefore generalize F(x, M(x), c') = G(x) into

F(x, h(x, z), G(h(x, z))) = G(x), for 0 <= z <= i_min.

Using the relation between f and its inverse f^-, the above can be rewritten as

F(x, h'(x, z), G(h'(x, z))) = G(x), for 0 <= z <= i'_min,

where

h'(x, z) <= if z = 0 then x else f^-(h'(x, z-1)),
i'_min = min {i | p(h'(x, i))}.

Example 4: Let the functions M and F be defined as follows:

M(y1, y2) <= if y1 < y2 then y2 else M(y1, 2*y2)
F(y1, y2, y3, y4) <= if (y1 = y2) then <y3, y4>
                     else if (y3 >= y2 div 2) then F(y1, y2 div 2, y3 - y2 div 2, 2*y4+1)
                     else F(y1, y2 div 2, y3, 2*y4)

Since (2*y2) div 2 = y2, we can apply the above method, generalizing

F(x2, M(x1, x2), x1, 0) = <x1 mod x2, x1 div x2>

into

F(x2, h(x2, z), x1 mod h(x2, z), x1 div h(x2, z)) = <x1 mod x2, x1 div x2> for 0 <= z <= i'_min,

where

h(y1, y2) <= if y2 = 0 then y1 else 2*h(y1, y2-1),

that is, h(y1, y2) = 2^y2 * y1.

CONCLUSION

We have discussed the problem of generalizing theorems about recursively defined functions in such a way that the generalized form of the theorems is more amenable to proof by induction. We have presented, motivated, and illustrated some heuristics for carrying out the generalization for certain patterns of theorems and recursive definitions. Our heuristics are given in terms of definitional schemas. Given a functional definition, we try to find a matching schema and look for certain auxiliary functions satisfying some conditions dependent on the schema. This seems to be a systematic approach, as opposed to the ad hoc approach of, say, replacing constants by expressions. Furthermore, our generalization heuristics have been derived by analyzing recursive computational sequences. Whenever these heuristics apply, the generalized theorems are true if and only if the original theorems are true. This is not the case with the heuristics currently found in the literature. It may be possible to apply similar generalization methods to other types of definitional schemas, and develop a catalog of heuristics for different patterns of recursive definitions.

REFERENCES

1. Aubin, R.: "Some generalization heuristics in proofs by induction", Proc. IRIA Colloq. on Proving and Improving Programs, Arc et Senans, pp. 197-208 (July 1975).
2. Aubin, R.: "Mechanizing structural induction", Ph.D. dissertation, University of Edinburgh (1976).
3. Boyer, R.S. and Moore, J.S.: "Proving theorems about LISP functions", JACM 22, 1, pp. 129-144 (January 1975).
4. Boyer, R.S. and Moore, J.S.: A Computational Logic, Academic Press, New York (1979).
5. Greif, I. and Waldinger, R.S.: "A more mechanical heuristic approach to program verification", Proc. Intl. Symp. on Programming, Paris, pp. 83-90 (April 1974).
6. Huet, G.P. and Hullot, J.-M.: "Proof by induction in equational theories with constructors", JCSS 25, 2, pp. 239-266 (Oct. 1982).
7. Manna, Z.: Mathematical Theory of Computation, McGraw-Hill, New York (1974).
8. Manna, Z. and Waldinger, R.: "Synthesis: dreams -> programs", IEEE Trans. Software Engineering, SE-5, 4, pp. 294-328 (July 1979).
9. Misra, J.: "Some aspects of the verification of loop computations", IEEE Trans. Software Engineering, SE-4, 6, pp. 478-486 (Nov. 1978).
10. Moore, J.S.: "Introducing iteration into the pure LISP theorem prover", IEEE Trans. Software Engineering, SE-1, 3, pp. 328-338 (May 1975).
11. Morris, J.H. and Wegbreit, B.: "Subgoal induction", CACM 20, 4, pp. 209-222 (April 1977).
12. Tamir, M.: "ADI: Automatic derivation of invariants", IEEE Trans. Software Engineering, SE-6, 1, pp. 40-48 (Jan. 1980).
13. Wegbreit, B.: "The synthesis of loop predicates", CACM 17, 2, pp. 102-112 (Feb. 1974).