Syntactic Processing
Martin Kay
Xerox Palo Alto Research Center

In computational linguistics, which began in the 1950's with machine translation, systems that are based mainly on the lexicon have a longer tradition than anything else---for these purposes, twenty-five years must be allowed to count as a tradition. The bulk of many of the early translation systems was made up by a dictionary whose entries consisted of arbitrary instructions in machine language. In the early 60's, computational linguists---at least those with theoretical pretensions---abandoned this way of doing business for at least three related reasons: First, systems containing large amounts of unrestricted machine code fly in the face of all principles of good programming practice. The syntax of the language in which linguistic facts are stated is so remote from their semantics that the opportunities for error are very great and no assumptions can be made about the effects on the system of invoking the code associated with any given word. The systems became virtually unmaintainable and eventually fell under their own weight. Furthermore, these failings were magnified as soon as the attempt was made to impose more structure on the overall system. A general backtracking scheme, for example, could all too easily be thrown into complete disarray by an instruction in a single dictionary entry that affected the control stack. Second, the power of general, and particularly nondeterministic, algorithms in syntactic analysis came to be appreciated, if not overappreciated. Suddenly, it was no longer necessary to seek local criteria on which to ensure the correctness of individual decisions made by the program provided they were covered by more global criteria. Separation of program and linguistic data became an overriding principle and, since it was most readily applied to syntactic rules, these became the main focus of attention. The third, and doubtless the most important, reason for the change was that syntactic theories in which a grammar was seen as consisting of a set of rules, preferably including transformational rules, captured the imagination of the most influential noncomputational linguists, and computational linguists followed suit if only to maintain theoretical respectability. In short, systems with small sets of rules in a constrained formalism and simple lexical entries apparently made for simpler, cleaner, and more powerful programs while setting the whole enterprise on a sounder theoretical footing.

The trend is now in the opposite direction. There has been a shift of emphasis away from highly structured systems of complex rules as the principal repository of information about the syntax of a language towards a view in which the responsibility is distributed among the lexicon, semantic parts of the linguistic description, and a cognitive or strategic component. Concomitantly, interest has shifted from algorithms for syntactic analysis and generation, in which the control structure and the exact sequence of events are paramount, to systems in which a heavier burden is carried by the data structure and in which the order of events is a matter of strategy. This new trend is a common thread running through several of the papers in this section. Various techniques for syntactic analysis, notably those based on some form of Augmented Transition Network (ATN), represent grammatical facts in terms of executable machine code.
The dangers to which this exposed the earlier systems are avoided by insisting that this code be compiled from statements in a formalism that allows only for linguistically motivated operations on carefully controlled parts of certain data structures. The value of nondeterministic procedures is undiminished, but it has become clear that it does not rest on complex control structures and a rigidly determined sequence of events. In discussing the syntactic processors that we have developed, for example, Ron Kaplan and I no longer find it useful to talk in terms of a parsing algorithm. There are two central data structures, a chart and an agenda. When additions to the chart give rise to certain kinds of configurations in which some element contains executable code, a task is created and placed on the agenda. Tasks are removed from the agenda and executed in an order determined by strategic considerations which constitute part of the linguistic theory. Strategy can determine only the order in which alternative analyses are produced. Many traditional distinctions, such as that between top-down and bottom-up processing, no longer apply to the procedure as a whole but only to particular strategies or their parts.

This looser organization of programs for syntactic processing came, at least in part, from a generally felt need to break down the boundaries that had traditionally separated morphological, syntactic, and semantic processes. Research directed towards speech understanding systems was quite unable to respect these boundaries because, in the face of uncertain data, local moves in the analysis on one level required confirmation from other levels, so that a common data structure for all levels of analysis and a schedule that could change continually were of the essence. Furthermore, there was a movement from within the artificial-intelligence community to eliminate the boundaries because, from that perspective, they lacked sufficient theoretical justification. In speech research in particular, and artificial intelligence in general, the lexicon took on an important position if only because it is there that the units of meaning reside. Recent proposals in linguistic theory involve a larger role for the lexicon. Bresnan (1978) has argued persuasively that the full mechanism of transformational rules can, and should, be dispensed with except in cases of unbounded movement such as relativization and topicalization. The remaining members of the familiar list of transformations can be handled by weaker devices in the lexicon and, since they all turn out to be lexically governed, this is the appropriate place to state the information. Against this background, the papers that follow, different though they are in many ways, constitute a fairly coherent set.
Carbonell comes from an artificial-intelligence tradition and is generally concerned with the meanings of words and the ways in which they are collected to give the meanings of phrases. He explores ways in which this process can be made to reflect back on itself to fill gaps in the lexicon by appropriate analysis of the context. At its best, the method is familiar from similar work in syntax: a missing element is treated as though it had whatever properties allow a coherent analysis of the larger unit---say a sentence, or paragraph---in which it is embedded. These properties are then entered against it in the lexicon for future use. The problem, which is faced in this paper, is that the possibility that the lexicon is deficient must be faced in respect of all words because, even when there is an entry in the lexicon, it may not supply the reading required in the case at hand.

Small, like Carbonell, is concerned with the meanings of words, and he is led to a view of words as active agents. The main role of the linguist is to act as moderator. Kwasny and Sondheimer have a concern close to Carbonell's when problems arise in analysis: they look for deficiencies in the text rather than in the lexicon and the rules. It is no indictment of either paper that they provide no way of distinguishing the cases, for this is clearly a separate enterprise. Kwasny and Sondheimer propose progressively weakening the requirements that their analysis system makes of a segment of text so that, if it does not accord with the best principles of composition, an analysis can still be found by taking a less demanding view of it. Such a technique clearly rests on a regime in which the scheduling of events is relatively free and the control structure relatively free.

Shapiro shows how a strong data structure and a weak control structure make it possible to extend the ATN beyond the analysis of one-dimensional strings to semantic networks. The result is a total system with remarkable consistency in the methods applied at all levels and, presumably, corresponding simplicity and clarity in the architecture of the system as a whole. Allen is one of the foremost contributors to research on speech understanding, and speech processing in general. He stresses the need for strongly interacting components at different levels of analysis and, to that extent, argues for the kind of data-directed methods I have tried to characterize. At first reading, Eisenstadt's paper appears less willing to lie in my Procrustean bed, for it appears to be concerned with the finer points of algorithmic design and, to an extent, this is true. But the two approaches to syntactic analysis that are compared turn out to be, in my terms, algorithmically weak. The most fundamental issues that are being discussed therefore turn out to concern what I have called the strategic component of linguistic theory, that is, the rules according to which atomic tasks in the analysis process are scheduled.

Reference

Bresnan, Joan (1978) "A Realistic Transformational Grammar" in Halle, Bresnan and Miller (eds.) Linguistic Theory and Psychological Reality, The MIT Press.
Semantics of Conceptual Graphs
John F. Sowa
IBM Systems Research Institute
205 East 42nd Street
New York, NY 10017

ABSTRACT: Conceptual graphs are both a language for representing knowledge and patterns for constructing models. They form models in the AI sense of structures that approximate some actual or possible system in the real world. They also form models in the logical sense of structures for which some set of axioms is true. When combined with recent developments in nonstandard logic and semantics, conceptual graphs can form a bridge between heuristic techniques of AI and formal techniques of model theory.

1. Surface Models

Semantic networks are often used in AI for representing meaning. But as Woods (1975) and McDermott (1976) observed, the semantic networks themselves have no well-defined semantics. Standard predicate calculus does have a precisely defined, model theoretic semantics; it is adequate for describing mathematical theories with a closed set of axioms. But the real world is messy, incompletely explored, and full of unexpected surprises. Furthermore, the infinite sets commonly used in logic are intractable both for computers and for the human brain. To develop a more realistic semantics, Hintikka (1973) proposed surface models as incomplete, but extendible, finite constructions:

Usually, models are thought of as being given through a specification of a number of properties and relations defined on the domain. If the domain is infinite, this specification (as well as many operations with such entities) may require non-trivial set-theoretical assumptions. The process is thus often non-finitistic. It is doubtful whether we can realistically expect such structures to be somehow actually involved in our understanding of a sentence or in our contemplation of its meaning, notwithstanding the fact that this meaning is too often thought of as being determined by the class of possible worlds in which the sentence in question is true. It seems to me much likelier that what is involved in one's actual understanding of a sentence S is a mental anticipation of what can happen in one's step-by-step investigation of a world in which S is true. (p. 129)

The first stage of constructing a surface model begins with the entities occurring in a sentence or story. During the construction, new facts may be asserted that block certain extensions or facilitate others. A standard model is the limit of a surface model that has been extended infinitely deep, but such infinite processes are not a normal part of understanding. This paper adapts Hintikka's surface models to the formalism of conceptual graphs (Sowa 1976, 1978). Conceptual graphs serve two purposes: like other forms of semantic networks, they can be used as a canonical representation of meaning in natural language; but they can also be used as building blocks for constructing abstract structures that serve as models in the model-theoretic sense.

• Understanding a sentence begins with a translation of that sentence into a conceptual graph.
• During the translation, that graph may be joined to frame-like (Minsky 1975) or script-like (Schank & Abelson 1977) graphs that help resolve ambiguities and incorporate background information.
• The resulting graph is a nucleus for constructing models of possible worlds in which the sentence is true.
• Laws of the world behave like demons or triggers that monitor the models and block illegal extensions.
• If a surface model could be extended infinitely deep, the result would be a complete standard model.

This approach leads to an infinite sequence of algorithms ranging from plausible inference to exact deduction; they are analogous to the varying levels of search in game playing programs. Level 0 would simply translate a sentence into a conceptual graph, but do no inference. Level 1 would do frame-like plausible inferences in joining other background graphs. Level 2 would check constraints by testing the model against the laws. Level 3 would join more background graphs. Level 4 would check further constraints, and so on. If the constraints at level n+1 are violated, the system would have to backtrack and undo joins at level n. If at some level, all possible extensions are blocked by violations of the laws, then that means the original sentence (or story) was inconsistent with the laws. If the surface model is infinitely extendible, then the original sentence or story was consistent.

Exact inference techniques may let the surface models grow indefinitely; but for many applications, they are as impractical as letting a chess playing program search the entire game tree. Plausible inferences with varying degrees of confidence are possible by stopping the surface models at different levels of extension. For story understanding, the initial surface model would be derived completely from the input story. For consistency checks in updating a data base, the initial model would be derived by joining new information to the pre-existing data base. For question-answering, a query graph would be joined to the data base; the depth of search permitted in extending the join would determine the limits of complexity of the questions that are answerable. As a result of this theory, algorithms for plausible and exact inference can be compared within the same framework; it is then possible to make informed trade-offs of speed vs. consistency in data base updates or speed vs. completeness in question answering.

2. Conceptual Graphs

The following conceptual graph shows the concepts and relationships in the sentence "Mary hit the piggy bank with a hammer." The boxes are concepts and the circles are conceptual relations. Inside each box or circle is a type label that designates the type of concept or relation. The conceptual relations labeled AGNT, INST, and PTNT represent the linguistic cases agent, instrument, and patient of case grammar.

(Diagram: conceptual graph for "Mary hit the piggy bank with a hammer," with concepts PERSON: Mary, HIT, HAMMER, and PIGGY-BANK: i22103.)

Conceptual graphs are a kind of semantic network. See Findler (1979) for surveys of a variety of such networks that have been used in AI. The diagram above illustrates some features of the conceptual graph notation:

• Some concepts are generic. They have only a type label inside the box, e.g., HIT or HAMMER.
• Other concepts are individual. They have a colon after the type label, followed by a name (Mary) or a unique identifier called an individual marker (i22103).

To keep the diagram from looking overly busy, the hierarchy of types and subtypes is not drawn explicitly, but is determined by a separate partial ordering of type labels. The type labels are used by the formation rules to enforce selection constraints and to support the inheritance of properties from a supertype to a subtype. For convenience, the diagram could be linearized by using square brackets for concepts and parentheses for conceptual relations:

[PERSON: Mary]->(AGNT)->[HIT: c1]<-(INST)<-[HAMMER]
[HIT: c1]<-(PTNT)<-[PIGGY-BANK: i22103]

Linearizing the diagram requires a coreference index, c1, on the generic concept HIT. The index shows that the two occurrences designate the same act of hitting. If HIT had been an individual concept, its name or individual marker would be sufficient to indicate the same act. Besides the features illustrated in the diagram, the theory of conceptual graphs includes the following:

• For any particular domain of discourse, a specially designated set of conceptual graphs called the canon,
• Four canonical formation rules for deriving new canonical graphs from any given canon,
• A method for defining new concept types: some canonical graph is specified as the differentia and a concept in that graph is designated the genus of the new type,
• A method for defining new types of conceptual relations: some canonical graph is specified as the relator and one or more concepts in that graph are specified as parameters,
• A method for defining composite entities as structures having other entities as parts,
• Optional quantifiers on generic concepts,
• Scope of quantifiers specified either by embedding them inside type definitions or by linking them with functional dependency arcs,
• Procedural attachments associated with the functional dependency arcs,
• Control marks that determine when attached procedures should be invoked.

These features have been described in the earlier papers; for completeness, the appendix recapitulates the axioms and definitions that are explicitly used in this paper. Heidorn's (1972, 1975) Natural Language Processor (NLP) is being used to implement the theory of conceptual graphs. The NLP system processes two kinds of Augmented Phrase Structure rules: decoding rules parse language inputs and create graphs that represent their meaning, and encoding rules scan the graphs to generate language output. Since the NLP structures are very similar to conceptual graphs, much of the implementation amounts to identifying some feature or combination of features in NLP for each construct in conceptual graphs. Constructs that would be difficult or inefficient to implement directly in NLP rules can be supported by LISP functions. The inference algorithms in this paper, however, have not yet been implemented.

3. Logical Connectives

Canonical formation rules enforce the selection constraints in linguistics: they do not guarantee that all derived graphs are true, but they rule out semantic anomalies. In terms of graph grammars, the canonical formation rules are context-free. This section defines logical operations that are context-sensitive. They enforce tighter constraints on graph derivations, but they require more complex pattern matching. Formation rules and logical operations are complementary mechanisms for building models of possible worlds and checking their consistency.

Sowa (1976) discussed two ways of handling logical operators in conceptual graphs: the abstract approach, which treats them as functions of truth values, and the direct approach, which treats implications, conjunctions, disjunctions, and negations as operations for building, splitting, and discarding conceptual graphs. That paper, however, merely mentioned the approach; this paper develops a notation adapted from Gentzen's sequents (1934), but with an interpretation based on Belnap's conditional assertions (1973) and with computational techniques similar to Hendrix's partitioned semantic networks (1975, 1979).
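The structures described so far are easy to model concretely. The following Python sketch is purely illustrative and is not taken from the paper or its NLP implementation: the class names, the toy type lattice, and the helper functions are all invented. It encodes concepts, relations, and the partial ordering of type labels, and tests the joinability of two concepts (compare Definition 4 in the appendix).

    # Illustrative sketch only; names and representation are invented, not
    # taken from the paper.  A conceptual graph is bipartite: concept nodes
    # and conceptual-relation nodes, each relation arc attached to a concept.

    from dataclasses import dataclass, field

    GENERIC = "@"   # Sowa's @ referent, read "any"

    # A toy type lattice: each label maps to its immediate supertype.
    SUPERTYPE = {"BEAGLE": "DOG", "DOG": "ANIMAL", "PERSON": "ANIMAL",
                 "ANIMAL": "ENTITY", "HAMMER": "ENTITY", "HIT": "ACT"}

    def is_subtype(t: str, s: str) -> bool:
        """True if t <= s in the partial ordering of type labels."""
        while t != s:
            if t not in SUPERTYPE:
                return False
            t = SUPERTYPE[t]
        return True

    @dataclass
    class Concept:
        type_label: str
        referent: str = GENERIC   # individual marker such as "i22103", or @

    @dataclass
    class Relation:
        type_label: str                           # e.g. "AGNT", "INST", "PTNT"
        args: list = field(default_factory=list)  # arcs 1..n, each a Concept

    def joinable(a: Concept, b: Concept) -> bool:
        """Two concepts are joinable if they have the same type and their
        referents agree or at least one is generic (cf. Definition 4)."""
        return (a.type_label == b.type_label and
                (a.referent == b.referent or GENERIC in (a.referent, b.referent)))

    # [PERSON: Mary]->(AGNT)->[HIT: c1]
    mary, hit = Concept("PERSON", "Mary"), Concept("HIT")
    agnt = Relation("AGNT", [mary, hit])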
Deliyanni and Kowalski (1979) used a similar notation for logic in semantic networks, but with the arrows reversed.

Definition: A sequent is a collection of conceptual graphs divided into two sets, called the conditions u1,...,un and the assertions v1,...,vm. It is written u1,...,un -> v1,...,vm. Several special cases are distinguished:

• A simple assertion has no conditions and only one assertion: -> v.
• A disjunction has no conditions and two or more assertions: -> v1,...,vm.
• A simple denial has only one condition and no assertions: u ->.
• A compound denial has two or more conditions and no assertions: u1,...,un ->.
• A conditional assertion has one or more conditions and one or more assertions: u1,...,un -> v1,...,vm.
• An empty clause has no conditions or assertions: ->.
• A Horn clause has at most one assertion; i.e. it is either an empty clause, a denial, a simple assertion, or a conditional assertion of the form u1,...,un -> v.

For any concept a in an assertion vi, there may be a concept b in a condition uj that is declared to be coreferent with a. Informally, a sequent states that if all of the conditions are true, then at least one of the assertions must be true. A sequent with no conditions is an unconditional assertion; if there are two or more assertions, it states that one must be true, but it doesn't say which. Multiple assertions are necessary for generality, but in deductions, they may cause a model to split into models of multiple alternative worlds. A sequent with no assertions denies that the combination of conditions can ever occur. The empty clause is an unconditional denial; it is self-contradictory. Horn clauses are special cases for which deductions are simplified: they have no disjunctions that cause models of the world to split into multiple alternatives.

Definition: Let C be a collection of canonical graphs, and let s be the sequent u1,...,un -> v1,...,vm.

• If every condition graph is covered by some graph in C, then the conditions are said to be satisfied.
• If some condition graph is not covered by any graph in C, then the sequent s is said to be inapplicable to C.

If n=0 (there are no conditions), then the conditions are trivially satisfied. A sequent is like a conditional assertion in Belnap's sense: When its conditions are not satisfied, it asserts nothing. But when they are satisfied, the assertions must be added to the current context. The next axiom states how they are added.

Axiom: Let C be a collection of canonical graphs, and let s be the sequent u1,...,un -> v1,...,vm. If the conditions of s are satisfied by C, then s may be applied to C as follows:

• If m=0 (a denial or the empty clause), the collection C is said to be blocked.
• If m=1 (a Horn clause), a copy of each graph ui is joined to some graph in C by a covering join. Then the assertion v is added to the resulting collection C'.
• If m≥2, a copy of each graph ui is joined to some graph in C by a covering join. Then all graphs in the resulting collection C' are copied to make m disjoint collections identical to C'. Finally, for each j from 1 to m, the assertion vj is added to the j-th copy of C'.

After an assertion v is added to one of the collections C', each concept in v that was declared to be coreferent with some concept b in one of the conditions ui is joined to that concept to which b was joined. When a collection of graphs is inconsistent with a sequent, they are blocked by it.
If the sequent represents a fundamental law about the world, then the collection represents an impossible situation. When there is only one assertion in an applicable sequent, the collection is extended. But when there are two or more assertions, the collection splits into as many successors as there are assertions; this splitting is typical of algorithms for dealing with disjunctions. The rules for applying sequents are based on Beth's semantic tableaux (1955), but the computational techniques are similar to typical AI methods of production rules, demons, triggers, and monitors.

Deliyanni and Kowalski (1979) relate their algorithms for logic in semantic networks to the resolution principle. This relationship is natural because a sequent whose conditions and assertions are all atoms is equivalent to the standard clause form for resolution. But since the sequents defined in this paper may be arbitrary conceptual graphs, they can package a much larger amount of information in each graph than the low level atoms of ordinary resolution. As a result, many fewer steps may be needed to answer a question or do plausible inferences.

4. Laws, Facts, and Possible Worlds

Infinite families of possible worlds are computationally intractable, but Dunn (1973) showed that they are not needed for the semantics of modal logic. He considered each possible world w to be characterized by two sets of propositions: laws L and facts F. Every law is also a fact, but some facts are merely contingently true and are not considered laws. A proposition p is necessarily true in w if it follows from the laws of w, and it is possible in w if it is consistent with the laws of w. Dunn proved that semantics in terms of laws and facts is equivalent to the possible worlds semantics.

Dunn's approach to modal logic can be combined with Hintikka's surface models and AI methods for handling defaults. Instead of dealing with an infinite set of possible worlds, the system can construct finite, but extendible surface models. The basis for the surface models is a canon that contains the blueprints for assembling models and a set of laws that must be true for each model. The laws impose obligatory constraints on the models, and the canon contains common background information that serves as a heuristic for extending the models. An initial surface model would start as a canonical graph or collection of graphs that represent a given set of facts in a sentence or story. Consider the story,

Mary hit the piggy bank with a hammer. She wanted to go to the movies with Janet, but she wouldn't get her allowance until Thursday. And today was only Tuesday.

The first sentence would be translated to a conceptual graph like the one in Section 2. Each of the following sentences would be translated into other conceptual graphs and joined to the original graph. But the story as stated is not understandable without a lot of background information: piggy banks normally contain money; piggy banks are usually made of pottery that is easily broken; going to the movies requires money; an allowance is money; and Tuesday precedes Thursday. Charniak (1972) handled such stories with demons that encapsulate knowledge: demons normally lie dormant, but when their associated patterns occur in a story, they wake up and apply their piece of knowledge to the process of understanding. Similar techniques are embodied in production systems, languages like PLANNER (Hewitt 1972), and knowledge representation systems like KRL (Bobrow & Winograd 1977).
But the trouble with demons is that they are unconstrained: anything can happen when a demon wakes up, no theorems are possible about what a collection of demons can or cannot do, and there is no way of relating plausible reasoning with demons to any of the techniques of standard or nonstandard logic.

With conceptual graphs, the computational overhead is about the same as with related AI techniques, but the advantage is that the methods can be analyzed by the vast body of techniques that have been developed in logic. The graph for "Mary hit the piggy-bank with a hammer" is a nucleus around which an infinite number of possible worlds can be built. Two individuals, Mary and PIGGY-BANK: i22103, are fixed, but the particular act of hitting, the hammer Mary used, and all other circumstances are undetermined. As the story continues, some other individuals may be named, graphs from the canon may be joined to add default information, and laws of the world in the form of sequents may be triggered (like demons) to enforce constraints. The next definition introduces the notion of a world basis that provides the building material (a canon) and the laws (sequents) for such a family of possible worlds.

Definition: A world basis has three components: a canon C, a finite set of sequents L called laws, and one or more finite collections of canonical graphs {C1,...,Cn} called contexts. No context Ci may be blocked by any law in L.

A world basis is a collection of nuclei from which complete possible worlds may evolve. The contexts are like Hintikka's surface models: they are finite, but extendible. The graphs in the canon provide default or plausible information that can be joined to extend the contexts, and the laws are constraints on the kinds of extensions that are possible. When a law is violated, it blocks a context as a candidate for a possible world. A default, however, is optional; if contradicted, a default must be undone, and the context restored to the state before the default was applied. In the sample story, the next sentence might continue: "The piggy bank was made of bronze, and when Mary hit it, a genie appeared and gave her two tickets to Animal House." This continuation violates all the default assumptions; it would be unreasonable to assume it in advance, but once given, it forces the system to back up to a context before the defaults were applied and join the new information to it. Several practical issues arise: how much backtracking is necessary, how is the world basis used to develop possible worlds, and what criteria are used to decide when to stop the (possibly infinite) extensions. The next section suggests an answer.

5. Game Theoretic Semantics

The distinction between optional defaults and obligatory laws is reminiscent of the AND-OR trees that often arise in AI, especially in game playing programs. In fact, Hintikka (1973, 1974) proposed a game theoretic semantics for testing the truth of a formula in terms of a model and for elaborating a surface model in which that formula is true. Hintikka's approach can be adapted to elaborating a world basis in much the same way that a chess playing program explores the game tree:

• Each context represents a position in the game.
• The canon defines possible moves by the current player.
• Conditional assertions are moves by the opponent.
• Denials are checkmating moves by the opponent.
• A given context is consistent with the laws if there exists a strategy for avoiding checkmate.
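Read operationally, this correspondence invites the search machinery of game playing programs. The sketch below is a loose, invented illustration, not Sowa's procedure: graphs are flattened to sets of atomic facts, joins become set unions, and a law is a (conditions, assertions) pair of fact sets. Within those simplifications it shows the Player/Opponent alternation and the blocking and splitting behavior of denials and disjunctions.

    # Invented illustration of the Player/Opponent alternation.  A context
    # is a set of atomic facts; a law (sequent) is a (conditions,
    # assertions) pair of fact sets; the canon is a list of fact sets.

    def opponent_move(context, laws):
        """Apply one applicable sequent: a denial blocks the context; a
        disjunction splits it into one successor per assertion."""
        for conditions, assertions in laws:
            if conditions <= context:            # conditions satisfied
                if not assertions:
                    return []                    # denial: context is blocked
                if not assertions & context:     # not yet applied
                    return [context | {a} for a in assertions]
        return [context]                         # Opponent passes

    def player_move(context, canon):
        """Join one canon graph that overlaps the context (a crude
        stand-in for a maximal join of joinable graphs)."""
        for graph in canon:
            if graph & context and not graph <= context:
                return context | graph
        return context                           # Player passes

    def elaborate(contexts, canon, laws, depth):
        """Alternate moves for a bounded number of levels, like a chess
        program cutting off its search; an empty result means every
        context was blocked, i.e. the facts contradict the laws."""
        for _ in range(depth):
            contexts = [player_move(c, canon) for c in contexts]
            contexts = [c2 for c in contexts for c2 in opponent_move(c, laws)]
        return contexts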
By following this suggestion, one can adapt the techniques developed for game playing programs to other kinds of reasoning in AI.

Definition: A game over a world basis W is defined by the following rules:

• There are two participants named Player and Opponent.
• For each context in W, Player has the first move.
• Player moves in context C either by joining two graphs in C or by selecting any graph in the canon of W that is joinable to some graph u in C and joining it maximally to u. If no joins are possible, Player passes. Then Opponent has the right to move in context C.
• Opponent moves by checking whether any denials in W are satisfied by C. If so, context C is blocked and is deleted from W. If no denials are satisfied, Opponent may apply any other sequent that is satisfied in C. If no sequent is satisfied, Opponent passes. Then Player has the right to move in context C.
• If no contexts are left in W, Player loses.
• If both Player and Opponent pass in succession, Player wins.

Player wins this game by building a complete model that is consistent with the laws and with the initial information in the problem. But like playing a perfect game of chess, the cost of elaborating a complete model is prohibitive. Yet a computer can play chess as well as most people do by using heuristics to choose moves and terminating the search after a few levels. To develop systematic heuristics for choosing which graphs to join, Sowa (1976) stated rules similar to Wilks' preference semantics (1975).

The amount of computation required to play this game might be compared to chess: a typical middle game in chess has about 30 or 40 moves on each side, and chess playing programs can consistently beat beginners by searching only 3 levels deep; they can play good games by searching 5 levels. The number of moves in a world basis depends on the number of graphs in the canon, the number of laws in L, and the number of graphs in each context. But for many common applications, 30 or 40 moves is a reasonable estimate at any given level, and useful inferences are possible with just a shallow search. The scripts applied by Schank and Abelson (1977), for example, correspond to a game with only one level of look-ahead; a game with two levels would provide the plausible information of scripts together with a round of consistency checks to eliminate obvious blunders. By deciding how far to search the game tree, one can derive algorithms for plausible inference with varying levels of confidence. Rigorous deduction similar to model elimination (Loveland 1972) can be performed by starting with laws and a context that correspond to the negation of what is to be proved and showing that Opponent has a winning strategy. By similar transformations, methods of plausible and exact inference can be related as variations on a general method of reasoning.

6. Appendix: Summary of the Formalism

This section summarizes axioms, definitions, and theorems about conceptual graphs that are used in this paper. For a more complete discussion and for other features of the theory that are not used here, see the earlier articles by Sowa (1976, 1978).

Definition 1: A conceptual graph is a finite, connected, bipartite graph with nodes of the first kind called concepts and nodes of the second kind called conceptual relations.

Definition 2: Every conceptual relation has one or more arcs, each of which must be attached to a concept. If the relation has n arcs, it is said to be n-adic, and its arcs are labeled 1, 2, ..., n.
The most common conceptual relations are dyadic (2-adic), but the definition mechanisms can create ones with any number of arcs. Although the formal definition says that the arcs are numbered, for dyadic relations, arc 1 is drawn as an arrow pointing towards the circle, and arc 2 as an arrow pointing away from the circle.

Axiom 1: There is a set T of type labels and a function type, which maps concepts and conceptual relations into T.

• If type(a)=type(b), then a and b are said to be of the same type.
• Type labels are partially ordered: if type(a)≤type(b), then a is said to be a subtype of b.
• Type labels of concepts and conceptual relations are disjoint, noncomparable subsets of T: if a is a concept and r is a conceptual relation, then a and r may never be of the same type, nor may one be a subtype of the other.

Axiom 2: There is a set I={i1, i2, i3, ...} whose elements are called individual markers. The function referent applies to concepts: If a is a concept, then referent(a) is either an individual marker in I or the symbol @, which may be read any.

• When referent(a) ∈ I, then a is said to be an individual concept.
• When referent(a)=@, then a is said to be a generic concept.

In diagrams, the referent is written after the type label, separated by a colon. A concept of a particular cat could be written as [CAT: i41331]. A generic concept, which would refer to any cat, could be written [CAT: @] or simply [CAT]. In data base systems, individual markers correspond to the surrogates (Codd 1979), which serve as unique internal identifiers for external entities. The symbol @ is Codd's notation for null or unknown values in a data base. Externally printable or speakable names are related to the internal surrogates by the next axiom.

Axiom 3: There is a dyadic conceptual relation with type label NAME. If a relation of type NAME occurs in a conceptual graph, then the concept attached to arc 1 must be a subtype of WORD, and the concept attached to arc 2 must be a subtype of ENTITY. If the second concept is individual, then the first concept is called a name of that individual.

The following graph states that the word "Mary" is the name of a particular person: ["Mary"]->(NAME)->[PERSON: i3074]. If there is only one person named Mary in the context, the graph could be abbreviated to just [PERSON: Mary].

Axiom 4: The conformity relation :: relates type labels in T to individual markers in I. If t∈T, i∈I, and t::i, then i is said to conform to t.

• If t≤s and t::i, then s::i.
• For any type t, t::@.
• For any concept c, type(c)::referent(c).

The conformity relation says that the individual for which the marker i is a surrogate is of type t. In previous papers, the terms permissible or applicable were used instead of conforms to, but the present term and the symbol :: have been adopted from ALGOL-68. Suppose the individual marker i273 is a surrogate for a beagle named Snoopy. Then BEAGLE::i273 is true. By extension, one may also write the name instead of the marker, as BEAGLE::Snoopy. By axiom 4, Snoopy also conforms to all supertypes of BEAGLE, such as DOG::Snoopy, ANIMAL::Snoopy, or ENTITY::Snoopy.

Definition 3: A star graph is a conceptual graph consisting of a single conceptual relation and the concepts attached to each of its arcs. (Two or more arcs of the conceptual relation may be attached to the same concept.)

Definition 4: Two concepts a and b are said to be joinable if both of the following properties are true:

• They are of the same type: type(a)=type(b).
• Either referent(a)=referent(b), referent(a)=@, or referent(b)=@.

Two star graphs with conceptual relations r and s are said to be joinable if r and s have the same number of arcs, type(r)=type(s), and for each i, the concept attached to arc i of r is joinable to the concept attached to arc i of s.

Not all combinations of concepts and conceptual relations are meaningful. Yet to say that some graphs are meaningful and others are not is begging the question, because the purpose of conceptual graphs is to form the basis of a theory of meaning. To avoid prejudging the issue, the term canonical is used for those graphs derivable from a designated set called the canon. For any given domain of discourse, a canon is defined that rules out anomalous combinations.

Definition 5: A canon has three components:

• A partially ordered set T of type labels.
• A set I of individual markers, with a conformity relation ::.
• A finite set of conceptual graphs with types of concepts and conceptual relations in T and with referents either @ or markers in I.

The number of possible canonical graphs may be infinite, but the canon contains a finite number from which all the others can be derived. With an appropriate canon, many undesirable graphs are ruled out as noncanonical, but the canonical graphs are not necessarily true. To ensure that only true graphs are derived from true graphs, the laws discussed in Section 4 eliminate inconsistent combinations.

Axiom 5: A conceptual graph is called canonical either if it is in the canon or if it is derivable from canonical graphs by one of the following canonical formation rules. Let u and v be canonical graphs (u and v may be the same graph).

• Copy: An exact copy of u is canonical.
• Restrict: Let a be a concept in u, and let t be a type label where t≤type(a) and t::referent(a). Then the graph obtained by changing the type label of a to t and leaving referent(a) unchanged is canonical.
• Join on a concept: Let a be a concept in u, and b a concept in v. If a and b are joinable, then the graph derived by the following steps is canonical: First delete b from v; then attach to a all arcs of conceptual relations that had been attached to b. If referent(a) ∈ I, then referent(a) is unchanged; otherwise, referent(a) is replaced by referent(b).
• Join on a star: Let r be a conceptual relation in u, and s a conceptual relation in v. If the star graphs of r and s are joinable, then the graph derived by the following steps is canonical: First delete s and its arcs from v; then for each i, join the concept attached to arc i of r to the concept that had been attached to arc i of s.

Restriction replaces a type label in a graph by the label of a subtype: this rule lets subtypes inherit the structures that apply to more general types. Join on a concept combines graphs that have concepts of the same type: one graph is overlaid on the other so that two concepts of the same type merge into a single concept; as a result, all the arcs that had been connected to either concept are connected to the single merged concept. Join on a star merges a conceptual relation and all of its attached concepts in a single operation.

Definition 6: Let v be a conceptual graph, let v' be a subgraph of v in which every conceptual relation has exactly the same arcs as in v, and let u be a copy of v' in which zero or more concepts may be restricted to subtypes. Then u is called a projection of v, and v' is called a projective origin of u in v.
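Before turning to projections, the formation rules above can be given an operational reading. The sketch below reuses the invented Concept/Relation classes, is_subtype, joinable, and GENERIC from the sketch in Section 3; the conformity test (::) is stubbed out. It is an illustration of Restrict and Join on a concept from Axiom 5, not code from the paper.

    def conforms(type_label: str, referent: str) -> bool:
        """Stub for the conformity relation t::i of Axiom 4; a real
        canon would consult its conformity table."""
        return True

    def restrict(a: Concept, t: str) -> None:
        """Restrict rule: replace the type label of a by a subtype t,
        keeping the referent, provided t <= type(a) and t::referent(a)."""
        assert is_subtype(t, a.type_label) and conforms(t, a.referent)
        a.type_label = t

    def join_on_concept(a: Concept, b: Concept, relations_of_v: list) -> None:
        """Join rule: delete b, reattach all of b's arcs to a, and keep
        the more specific referent (a marker beats the generic @)."""
        assert joinable(a, b)
        for r in relations_of_v:
            r.args = [a if c is b else c for c in r.args]
        if a.referent == GENERIC:
            a.referent = b.referent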
The main purpose of projections is to define the rule of join on a common projection, which is a generalization of the rules for joining on a concept or a star.

Definition 7: If a conceptual graph u is a projection of both v and w, it is called a common projection of v and w.

Theorem 1: If u is a common projection of canonical graphs v and w, then v and w may be joined on the common projection u to form a canonical graph by the following steps:

• Let v' be a projective origin of u in v, and let w' be a projective origin of u in w.
• Restrict each concept of v' and w' to the type label of the corresponding concept in u.
• Join each concept of v' to the corresponding concept of w'.
• Join each star graph of v' to the corresponding star of w'.

The concepts and conceptual relations in the resulting graph consist of those in v−v', w−w', and a copy of u.

Definition 8: If v and w are joined on a common projection u, then all concepts and conceptual relations in the projective origin of u in v and the projective origin of u in w are said to be covered by the join. In particular, if the projective origin of u in v includes all of v, then the entire graph v is covered by the join, and the join is called a covering join of v by w.

Definition 9: Let v and w be joined on a common projection u. The join is called extendible if there exist some concepts a in v and b in w with the following properties:

• The concepts a and b were joined to each other.
• a is attached to a conceptual relation r that was not covered by the join.
• b is attached to a conceptual relation s that was not covered by the join.
• The star graphs of r and s are joinable.

If a join is not extendible, it is called maximal.

The definition of maximal join given here is simpler than the one given in Sowa (1976), but it has the same result. Maximal joins have the effect of Wilks' preference rules (1975) in forcing a maximum connectivity of the graphs. Covering joins are used in Section 3 in the rules for applying sequents.

Theorem 2: Every covering join is maximal.

Sowa (1976) continued with further material on quantifiers and procedural attachments, and Sowa (1978) continued with mechanisms for defining new types of concepts, conceptual relations, and composite entities that have other entities as parts. Note that the terms sort, subsort, and well-formed in Sowa (1976) have now been replaced by the terms type, subtype, and canonical.

7. Acknowledgment

I would like to thank Charles Bontempo, Jon Handel, and George Heidorn for helpful comments on earlier versions of this paper.

8. References

Belnap, Nuel D., Jr. (1973) "Restricted Quantification and Conditional Assertion," in Leblanc (1973) pp. 48-75.
Beth, E. W. (1955) "Semantic Entailment and Formal Derivability," reprinted in J. Hintikka, ed., The Philosophy of Mathematics, Oxford University Press, 1969, pp. 9-41.
Bobrow, D. G., & T. Winograd (1977) "An Overview of KRL-0, a Knowledge Representation Language," Cognitive Science, vol. 1, pp. 3-46.
Charniak, Eugene (1972) Towards a Model of Children's Story Comprehension, AI Memo No. 266, MIT Project MAC, Cambridge, Mass.
Codd, E. F. (1979) "Extending the Data Base Relational Model to Capture More Meaning," to appear in Transactions on Database Systems.
Deliyanni, Amaryllis, & Robert A. Kowalski (1979) "Logic and Semantic Networks," Communications of the ACM, vol. 22, no. 3, pp. 184-192.
Dunn, J. Michael (1973) "A Truth Value Semantics for Modal Logic," in Leblanc (1973) pp. 87-100.
Findler, Nicholas V., ed.
(1979) Associative Networks, Academic Press, New York.
Gentzen, Gerhard (1934) "Investigations into Logical Deduction," reprinted in M. E. Szabo, ed., The Collected Papers of Gerhard Gentzen, North-Holland, Amsterdam, 1969, pp. 68-131.
Heidorn, George E. (1972) Natural Language Inputs to a Simulation Programming System, Technical Report NPS-55HD72101A, Naval Postgraduate School, Monterey.
Heidorn, George E. (1975) "Augmented Phrase Structure Grammar," in R. Schank & B. L. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, pp. 1-5.
Hendrix, Gary G. (1975) "Expanding the Utility of Semantic Networks through Partitioning," in Proc. of the Fourth IJCAI, Tbilisi, Georgia, USSR, pp. 115-121.
Hendrix, Gary G. (1979) "Encoding Knowledge in Partitioned Networks," in Findler (1979) pp. 51-92.
Hewitt, Carl (1972) Description and Theoretical Analysis (Using Schemata) of PLANNER, AI Memo No. 251, MIT Project MAC, Cambridge, Mass.
Hintikka, Jaakko (1973) "Surface Semantics: Definition and its Motivation," in Leblanc (1973) pp. 128-147.
Hintikka, Jaakko (1974) "Quantifiers vs. Quantification Theory," Linguistic Inquiry, vol. 5, no. 2, pp. 153-177.
Hintikka, Jaakko, & Esa Saarinen (1975) "Semantical Games and the Bach-Peters Paradox," Theoretical Linguistics, vol. 2, pp. 1-20.
Leblanc, Hughes, ed. (1973) Truth, Syntax, and Modality, North-Holland Publishing Co., Amsterdam.
Loveland, D. W. (1972) "A Unifying View of Some Linear Herbrand Procedures," Journal of the ACM, vol. 19, no. 2, pp. 366-384.
McDermott, Drew V. (1976) "Artificial Intelligence Meets Natural Stupidity," SIGART Newsletter, No. 57, pp. 4-9.
Minsky, Marvin (1975) "A Framework for Representing Knowledge," in Winston, P. H., ed., The Psychology of Computer Vision, McGraw-Hill, New York, pp. 211-280.
Schank, Roger, & Robert Abelson (1977) Scripts, Plans, Goals and Understanding, Lawrence Erlbaum Associates, Hillsdale, N.J.
Sowa, John F. (1976) "Conceptual Graphs for a Data Base Interface," IBM Journal of Research & Development, vol. 20, pp. 336-357.
Sowa, John F. (1978) "Definitional Mechanisms for Conceptual Graphs," presented at the International Workshop on Graph Grammars, Bad Honnef, Germany, Nov. 1978.
Wilks, Yorick (1975) "Preference Semantics," in E. L. Keenan, ed., Formal Semantics of Natural Language, Cambridge University Press, pp. 329-348.
Woods, William A. (1975) "What's in a Link: Foundations for Semantic Networks," in D. G. Bobrow & A. Collins, eds., Representation and Understanding, Academic Press, New York.
ON THE AUTOMATIC TRANSFORMATION OF CLASS MEMBERSHIP CRITERIA
Barbara C. Sangster
Rutgers University

This paper addresses a problem that may arise in classification tasks: the design of procedures for matching an instance with a set of criteria for class membership in such a way as to permit the intelligent handling of inexact, as well as exact matches. An inexact match is a comparison between an instance and a set of criteria (or a second instance) which has the result that some, but not all, of the criteria described (or exemplified) in the second are found to be satisfied in the first. An exact match is such a comparison for which all of the criteria of the second are found to be satisfied in the first. The approach presented in this paper is to transform the set of criteria for class membership into an exemplary instance of a member of the class, which exhibits a set of characteristics whose presence is necessary and sufficient for membership in that class. Use of this exemplary instance during the matching process appears to permit important functions associated with inexact matching to be easily performed, and also to have a beneficial effect on the overall efficiency of the matching process.

1. INTRODUCTION

An important common element of many projects in Artificial Intelligence is the determination of whether a particular instance satisfies the criteria for membership in a particular class. Frequently, this task is a component of a larger one involving a set of instances, or a set of classes, or both. This determination need not necessarily call for an exact match between an instance and a set of criteria, but only for the "best," or "closest," match, by some definition of goodness or closeness. One important specification for such tasks is the capability for efficient matching procedures; another is the ability to perform inexact, as well as exact matches.

One step towards achieving efficient matching procedures is to represent criteria for class membership in the same way as descriptions of instances. This may be done by transforming the set of criteria, through a process of symbolic instantiation, into a kind of prototypical instance, or exemplary member of the class. This permits the use of a simple matching algorithm, such as one that merely checks whether required components of the definition of the class are also present in the description of the instance. This also permits easy representation of modifications to the definition, whenever the capability of inexact matching is desired.

Other ways of representing definitions of classes might be needed for other purposes, however. For example, the knowledge-representation language AIMDS would normally be expected to represent definitions in a more complex manner, involving the use of pattern-directed inference rules. These rules may be used, e.g., to identify inconsistencies and fill in unknown values. A representation of a definition derived through symbolic instantiation does not have this wide a range of capabilities, but it does appear to offer advantages over the other representation for efficient matching and for easy handling of inexact matches. We might, therefore, like to be able to translate back and forth between the two forms of representation as our needs require.

(The research reported in this paper was partially supported by the National Science Foundation under Grant #SOC-7811408 and by the Research Foundation of the State University of New York under Grant #150-2197-A.)
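To make the intended use of a symbolically instantiated definition concrete, here is a small invented sketch in Python; it is not Sangster's algorithm, DMATCH, or the AIMDS representation. A definition is reduced to a set of required propositions (the role played below by the KERNEL), an instance to propositions known true or known false, and matching partitions the criteria into satisfied, violated, and unknown, with a crude distance score for ranking inexact matches.

    # Invented illustration of exact vs. inexact matching against a set
    # of criteria.  Propositions are (subject, relation, object) triples.

    from dataclasses import dataclass, field

    @dataclass
    class Instance:
        facts: set = field(default_factory=set)    # propositions known true
        denied: set = field(default_factory=set)   # propositions known false

    def match(definition: set, inst: Instance):
        """Partition the definition's criteria into satisfied, violated,
        and unknown with respect to the instance."""
        satisfied = definition & inst.facts
        violated = definition & inst.denied
        unknown = definition - satisfied - violated
        return satisfied, violated, unknown

    def is_exact(definition: set, inst: Instance) -> bool:
        _, violated, unknown = match(definition, inst)
        return not violated and not unknown

    def distance(definition: set, inst: Instance) -> int:
        """One crude closeness measure: the number of criteria not known
        to be satisfied."""
        return len(definition) - len(definition & inst.facts)

    # A toy definition with two required propositions:
    defn = {("X", "AGENTOF", "T1"), ("X", "OBJECTOF", "T2")}
    case = Instance(facts={("X", "AGENTOF", "T1")})
    print(is_exact(defn, case), distance(defn, case))   # False 1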
An algorithm has been devised for automatically translating a definition in one of the two directions -- from the form using the pattern-directed inference rules into a simpler, symbolically instantiated form [11]. This algorithm has been shown to work correctly for any well-formed definition in a clearly-defined syntactic class [10]. The use of the symbolically instantiated form for both exact and inexact matches is outlined here; using a hand-created symbolic instantiation, a run demonstrating an exact match is presented. The paper concludes with a discussion of some implications of this approach.

2. INEXACT MATCHING

The research project presented in this paper was motivated by the need for determining automatically whether a set of facts comprising the description of a legal case satisfies the conditions expressed in a legal definition, and, if not, in what respects it fails to satisfy those conditions [8], [9], [10], [11], [13]. The need to perform this task is central to a larger project whose purpose is the representation of the definitions of certain legal concepts, and of decisions based on those concepts.

Inexact matching arises in the legal/judicial domain when a legal class must be assigned to the facts of the case at hand, but when an exact match cannot be found between those facts and any of the definitions of possible legal classes. In that situation, a reasonable first-order approximation to the way real decisions are made may be to say that the class whose definition offers the "best" or "closest" match to the facts of the case at hand is the class that should be assigned to the facts in question. That is the approach taken in the current project.

In addition to the application discussed here (the assignment of an instance of a knowledge structure to one of a set of classes), inexact matching and close relatives thereof are also found in several other domains within computational linguistics. Inexact matching to a knowledge structure may also come into play in updating a knowledge base, or in responding to queries over a knowledge base [5], [6]. In the domain of syntax, an inexact matching capability makes possible the correct interpretation of utterances that are not fully grammatical with respect to the grammar being used [7]. In the domains of speech understanding and character recognition, the ability to perform inexact matching makes it possible to disregard errors caused by such factors as noise or carelessness of the speaker or writer.

When an inexact match of an instance has been identified, the first step is to attempt to deal with any criteria which were not found to be satisfied in the instance, but were not found not to be satisfied either -- i.e., the unknowns. At that point, if an exact match still has not been achieved, two modes of action are possible: the modification of the instance whose characterization is being sought, or the modification of the criteria by means of which a characterization is found. The choice between these two responses (or of the way in which they are combined) appears to be a function of the domain and sometimes also of the particular item in question. In general, in the legal/judicial domain, the facts of the case, once determined, are fixed (unless new evidence is introduced), but the criteria for assigning a legal characterization to those facts may be modified.

3. SYMBOLIC INSTANTIATION OF THE DEFINITION

Because of
the importance of inexact matching in the legal/judicial domain, it is desirable to utilize a matching procedure that permits useful functions related to inexact matching to be performed conveniently. Such functions include a way of easily determining all the respects in which attempted exact matches to a particular definition might fail, a way of easily determining what changes to a definition would be sufficient for an exact match with a particular case to be permitted, and a way of ensuring that a contemplated modification to a definition will not introduce inconsistencies.

Two features of a representational scheme that would appear to help in performing these functions conveniently are SPEC1) that the scheme permit a distinction to be made between those propositions that must be found to be true of any instance satisfying the definition and any other propositions that might also be true of the instance, and SPEC2) that the scheme permit the former set of propositions to be expressed in a simple, unified way, so as to reduce or even eliminate the need for inferencing and other processing activities when the functions outlined above are performed. By satisfying SPEC1, we permit the propositions which are central to the matching process to be distinguished from any others; by satisfying SPEC2, we permit those propositions to be accessed and manipulated (e.g., for the inexact matching functions listed above) in an efficient and straightforward manner. Thus, the fulfillment of SPEC1 and SPEC2 significantly strengthens our ability to perform functions central to the inexact matching process.

A representational scheme that meets these specifications has been designed, and an experimental implementation performed. The approach used is to precede the matching activity proper with a one-time preprocessing phase, during which the definition is automatically transformed from the form in which it is originally expressed into a representational scheme which appears to be more suitable to the matching task at hand. The transformation algorithm makes use of a distinction between those components of the definition which must be found to be true and those whose truth either may be inferred or else is irrelevant to the matching process. The transformation is performed by means of a process of symbolic instantiation of the definition -- the translation of the definition from a set of criteria for satisfying the definition into an exemplary instance of the concept itself. The transformed definition resulting from this process appears to meet the specifications given above.

The input to the transformation process is a definition expressed in two parts: COMPONENT1) a set of propositions consisting of relations between typed variables organized in frame form, and COMPONENT2) a set of pattern-directed inference rules expressing constraints on how the propositions in COMPONENT1 may be instantiated. The propositions in COMPONENT1 include propositions that must be found to be true of any instance satisfying the
The KERNEL propositions are precisely those whose truth must be observed in any instance satisfying the definition. Constraints on instantiation (COMPONENT2 above) are reflected in the choice of values for the instances in these propositions. Thus the KERNEL structure has the properties set forth in SPEC1 and SPEC2 above, and its use during the matching process may consequently be expected to help in working with inexact matches. For similar reasons, use of the KERNEL structure appears also to permit a significant improvement in efficiency of the overall matching process [10], [11].

The propositions input to the transformation process (i.e., COMPONENT1) are illustrated, for the definition of a kind of corporate reorganization called a BREORGANIZATION, in Figure 1; the arcs represent relations, and the nodes represent the types of the instances between which the relations may hold. Several of the pattern-directed inference rules input to the transformation process (COMPONENT2) for part of the same definition are illustrated in Figure 2. The KERNEL structure for that definition output by the transformation process is illustrated in Figure 3. The propositions shown there are the ones whose truth is necessary and sufficient for the definition to have been met. Bindings constraints between nodes are reflected in the labels of the nodes; the nodes in Figure 3 represent instances. Thus, the two components represented in Figures 1 and 2 are transformed, for the purposes of matching, into the structure represented in Figure 3. The transformation process is described in more detail in [10] and [11]; [10] also contains an informal proof that the transformation algorithm will work correctly for all definitions in a well-defined syntactic class.

((EXCHANGE X) IFF TRANS1
   (TRANS T1) (X (TRANSFEROR1 AGENTOF) T1) (X (TRANSPROP2 OBJECTOF) T1)
   (X (TRANSFEROR1 OLDOWNEROF) T1) (X (TRANSFEROR2 NEWOWNEROF) T1))
((EXCHANGE X) IFF TRANS2
   (TRANS T2) (X (TRANSFEROR2 AGENTOF) T2) (X (TRANSPROP1 OBJECTOF) T2)
   (X (TRANSFEROR2 OLDOWNEROF) T2) (X (TRANSFEROR1 NEWOWNEROF) T2))
((EXCHANGE X) IFF TRANSFEROR1
   (ACTOR A) (X (TRANS1 AGENT) A) (X (TRANS1 OLDOWNER) A)
   (X (TRANS2 NEWOWNER) A))
((EXCHANGE X) IFF TRANSFEROR2
   (ACTOR A) (X (TRANS2 AGENT) A) (X (TRANS2 OLDOWNER) A)
   (X (TRANS1 NEWOWNER) A))

Figure 2: A portion of COMPONENT2 for a sample definition.

4. EXECUTION OF THE MATCHING PROCESS

Once the transformation of a definition has been performed, it need never again be repeated (unless the definition itself should change), and the compiled KERNEL structure may be used directly whenever a set of facts comprising a description of a legal case is presented for comparison with the definition. In order to control possible combinatoric difficulties, the KERNEL structure is decomposed into a set of small networks, against each of which all substructures of the same type in the case description are tested for a structural match (STAGE1). DMATCH [15], a function written by D. Touretzky, performed structural matching in the experimental implementation. The hope is that "small networks" can be selected from the KERNEL in such a way that matching to any single small network will involve a minimal degree of combinatoric complexity. For an exact match, the substructures that survive STAGE1 (and no others) are then combined in all possible valid ways into larger networks of some degree of increase in complexity. A structural match of each of these structures with the corresponding substructure of the KERNEL is then attempted, and bindings constraints between formerly separate components of the new network are thereby tested. This process is repeated with surviving substructures until the structural match is conducted against the KERNEL structure itself. When the criterion for matching at each stage is an exact match, as described above, the survivors of the final stage of structural matching represent all and only the subcases in the case description that meet the conditions expressed in the definition.
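A schematic version of this staged matching loop appears below. The helper predicates (a DMATCH-style structural test and a consistency check against the corresponding KERNEL substructure) and the representation of networks as proposition sets are assumptions made for the sketch.

from itertools import combinations

# Sketch of staged structural matching. A "network" is a set of
# (relation, node, node) propositions; kernel_pieces are the small
# networks the KERNEL is decomposed into. matches() and consistent()
# stand in for the DMATCH-based tests and are assumed, not reproduced.

def staged_match(kernel, kernel_pieces, case_substructures,
                 matches, consistent):
    # STAGE1: keep case substructures matching some small kernel network.
    survivors = [s for s in case_substructures
                 if any(matches(p, s) for p in kernel_pieces)]
    # Later stages: combine survivors into larger networks, so that
    # bindings constraints between formerly separate components get
    # tested against the corresponding KERNEL substructure.
    for _ in range(len(case_substructures)):   # bound the number of stages
        if len(survivors) <= 1:
            break
        survivors = [a | b for a, b in combinations(survivors, 2)
                     if consistent(kernel, a | b)]
    # Final survivors matching the whole KERNEL are all and only the
    # subcases satisfying the definition.
    return [n for n in survivors if matches(kernel, n)]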
The execution of the matcher in the manner described above is illustrated in Figure 4. For this example, five instances of the type TRANS (T1, T2, T3, T4, T5), two instances of the type CONTROL (C1, C2), and two instances of PROPERTY (O6, O9) were used. The value of MAKEFULLLIST shows the survivors of STAGE1. The value of BGO shows the single valid instance of a BREORGANIZATION that can be created from these components.

Figure 4: Sample execution of the matching process.

An inexact matching capability, not currently implemented, would determine, when at any stage a match failed, 1) why it had failed, and 2) how close it had come to being an exact match. At the next stage, a combination of substructures would be submitted for consideration by the matcher only if it had met some criterion of proximity to an exact match -- either on an absolute scale, or relative to the other candidates for matching. When the final stage of the matching process had been completed, that candidate (or those candidates) that permitted the most nearly exact match could then be selected.

In order to perform the inexact matching function outlined in the preceding paragraph, an algorithm for computing distance from an exact match must be formulated. For the reasons given above, we anticipate that 1) the transformation of definitions into the corresponding KERNEL structures will make that task easier, and that 2) once a distance algorithm has been formulated, the use of the KERNEL structure will contribute to performing the inexact matching function with efficiency and conceptual clarity.

5. CONCLUSIONS

The capability for the intelligent handling of inexact matches has been shown to be an important requirement for the representation of certain classification tasks. A procedure has been outlined whereby a set of criteria for membership in a particular class may be transformed into an exemplary instance of a member of that class.

Figure 3: The KERNEL structure for a sample definition.

As we have seen, use of that exemplary instance during the matching process appears to permit important functions associated with inexact matching to be easily performed, and also to have a beneficial effect on the overall efficiency of the matching process.

ACKNOWLEDGEMENTS

The author is grateful to the following for comments and suggestions on the work reported on in this paper: S. Amarel, V. Ciesielski, L. T. McCarty, T. Mitchell, N. S. Sridharan, and D. Touretzky.

BIBLIOGRAPHY
[1] Freuder, E. C. 1978. "Synthesizing Constraint Expressions". CACM, vol. 21, pp. 958-966.

[2] Haralick, R. M. and L. G. Shapiro. 1979. "The Consistent Labelling Problem: Part I". IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 1, pp. 173-184.

[3] Hayes-Roth, F. 1978. "The Role of Partial and Best Matches in Knowledge Systems". Pattern-Directed Inference Systems, ed. by D. Waterman and F. Hayes-Roth. Academic Press.

[4] Hayes-Roth, F. and D. J. Mostow. 1975. "An Automatically Compilable Recognition Network for Structured Patterns". Proceedings of IJCAI-75, vol. 1, pp. 246-251.

[5] Joshi, A. K. 1978a. "Some Extensions of a System for Inference on Partial Information". Pattern-Directed Inference Systems, ed. by D. Waterman and F. Hayes-Roth. Academic Press.

[6] Joshi, A. K. 1978b. "A Note on Partial Match of Descriptions: Can One Simultaneously Question (Retrieve) and Inform (Update)?". TINLAP-2: Theoretical Issues in Natural Language Processing.

[7] Kwasny, S. and N. K. Sondheimer. 1979. "Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems". This volume.

[8] McCarty, L. T. 1977. "Reflections on TAXMAN: An Experiment in Artificial Intelligence and Legal Reasoning". Harvard Law Review, vol. 90, pp. 837-893.

[9] McCarty, L. T., N. S. Sridharan, and B. C. Sangster. 1979. "The Implementation of TAXMAN II: An Experiment in Artificial Intelligence and Legal Reasoning". Rutgers University Report #LCSR-TR-3.

[10] Sangster, B. C. 1979a. "An Automatically Compilable Hierarchical Definition Matcher". Rutgers University Report #LRP-TR-3.

[11] Sangster, B. C. 1979b. "An Overview of an Automatically Compilable Hierarchical Definition Matcher". Proceedings of the IJCAI-79.

[12] Sridharan, N. S. 1978a. (Ed.) "AIMDS User Manual, Version 2". Rutgers University Report #CBM-TR-89.

[13] Sridharan, N. S. 1978b. "Some Relationships between BELIEVER and TAXMAN". Rutgers University Report #LCSR-TR-2.

[14] Srinivasan, C. V. 1976. "The Architecture of Coherent Information System: A General Problem Solving System". IEEE Transactions on Computers, vol. 25, pp. 390-402.

[15] Touretzky, D. 1978. "Learning from Examples in a Frame-Based System". Rutgers University Report #CBM-TR-87.

[16] Woods, W. A. 1975. "What's in a Link: Foundations for Semantic Networks". In Representation and Understanding, ed. by D. G. Bobrow and A. Collins. Academic Press.
A SNAPSHOT OF KDS, A KNOWLEDGE DELIVERY SYSTEM

James A. Moore and William C. Mann
USC/Information Sciences Institute
Marina del Rey, CA
June, 1979

SUMMARY

KDS is a computer program which creates multi-paragraph, natural language text from a computer representation of knowledge to be delivered. We have addressed a number of issues not previously encountered in the generation of natural language at the multi-sentence level, viz.: ordering among sentences and the scope of each, quality comparisons between alternative aggregations of sub-sentential units, the coordination of communication with non-linguistic activities by a goal-pursuing planner, and the use of dynamic models of speaker and hearer to shape the text to the task at hand.

STATEMENT OF THE PROBLEM

The task of KDS is to generate English text under the following constraints:

1. The source of information is a semantic net, having no a priori structuring to facilitate the outputting task.

2. The text is produced to satisfy an explicit goal held by the text generating system, which describes a desired cognitive state of the reader.

3. To achieve the desired state of the reader requires more than a single sentence.

RESULTS

This is not the forum for an extensive analysis of our results; for details, see Mann and Moore [1979]. However, to communicate the flavor of what we have accomplished -- from the motivating goal:

(WANTS SPEAKER (KNOWS HEARER FIRE-ALARM-SCENE))

and about two pages of formal propositions describing the "fire-alarm scene", KDS generated the following:

"Whenever there is a fire, the alarm system is started, which sounds the alarm bell and starts the timer. When it is ninety seconds after the timer is started, unless the alarm system is cancelled it calls Wells Fargo. When Wells Fargo is called, it calls the Fire Dept.

When you hear the alarm bell or you smell smoke, stop everything, determine whether there is a fire and decide to permit the alarm system or to cancel it. When you determine whether there is a fire, if there is, permit the alarm system; otherwise cancel it. When you permit the alarm system, call the Fire Dept. if possible and follow the evacuation procedure. When you cancel the alarm system, if it is more than ninety seconds since the timer is started, the alarm system calls Wells Fargo; otherwise continue everything."

This represents the most elaborate performance of KDS to date.

SYSTEM DESIGN

The KDS organization reflects our novel paradigm: FRAGMENT-AND-COMPOSE. KDS decomposes the original network into fragments, then orders and aggregates these according to the dictates of the text-producing task, not according to the needs for which the internal representation was originally conceived. KDS has shown the feasibility of this approach. The KDS organization is a simple pipeline: FRAGMENT, PLAN, FILTER, HILL-CLIMB, and OUTPUT.

FRAGMENT transforms the selected portion of the semantic net into an unordered set of propositions which correspond, roughly, to minimal sentences.

PLAN uses goal-sensitive rules to impose an ordering on this set of fragments. A typical planning rule is: "When conveying a scene in which the hearer is to identify himself with one of the actors, express all propositions involving that actor AFTER those which do not, and separate these two partitions by a paragraph break".

FILTER deletes from the set all propositions currently represented as known by the hearer.

HILL-CLIMB coordinates two sub-activities. AGGREGATOR applies rules to combine two or three fragments into a single one. A typical aggregation rule is: "The two fragments 'x does A' and 'x does B' can be combined into a single fragment: 'x does A and B'". PREFERENCER evaluates each proposed new fragment, producing a numerical measure of its "goodness". A typical preference rule is: "When instructing the hearer, increase the accumulating measure by 10 for each occurrence of the symbol 'YOU'". HILL-CLIMB uses AGGREGATOR to generate new candidate sets of fragments, and PREFERENCER to determine which new set presents the best one-step improvement over the current set. The objective function of HILL-CLIMB has been enlarged to also take into account the COST OF FOREGONE OPPORTUNITIES; this has drastically improved the initial performance, since the topology abounds with local maxima. KDS has used, at one time or another, on the order of 10 planning rules, 30 aggregation rules and 7 preference rules.
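To make the FRAGMENT-AND-COMPOSE loop concrete, the following sketch restates HILL-CLIMB's inner cycle in modern notation, with fragments reduced to (actor, predicate) pairs. The rule bodies paraphrase the two example rules quoted above; everything else -- the function names, the data layout, the scoring -- is an assumption of the sketch, not the KDS implementation (which was a LISP program).

# Sketch of AGGREGATOR / PREFERENCER / HILL-CLIMB over (actor, predicate)
# fragments. All names are illustrative.

def aggregate(f1, f2):
    # "x does A" + "x does B" -> "x does A and B" (aggregation rule above).
    if f1[0] == f2[0]:
        return (f1[0], f1[1] + ' and ' + f2[1])
    return None

def preference(fragment):
    # Stand-in for PREFERENCER: +10 per occurrence of 'YOU' when
    # instructing the hearer (the preference rule quoted above).
    return 10 * ' '.join(fragment).count('YOU')

def hill_climb(fragments):
    # Repeatedly take the best one-step aggregation improvement.
    improved = True
    while improved:
        improved = False
        best, best_gain = None, 0
        for i, a in enumerate(fragments):
            for b in fragments[i + 1:]:
                merged = aggregate(a, b)
                if merged is None:
                    continue
                gain = preference(merged) - preference(a) - preference(b)
                if gain > best_gain:
                    best, best_gain = (a, b, merged), gain
        if best:
            a, b, merged = best
            fragments = [f for f in fragments if f not in (a, b)] + [merged]
            improved = True
    return fragments

The cost-of-foregone-opportunities term in the real objective function is omitted here; adding it would change only the gain computation.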
The aggregation and preference rules are directly analogous to the capabilities of linguistic competence and performance, respectively.

OUTPUT is a simple (two pages of LISP) text generator driven by a context free grammar.

ACKNOWLEDGMENTS

The work reported here was supported by NSF Grant MCS-76-07332.

REFERENCES

Levin, J. A., and Goldman, N. M., Process models of reference in context, ISI/RR-78-72, Information Sciences Institute, Marina del Rey, CA, 1978.

Levin, J. A., and Moore, J. A., Dialogue Games: meta-communication structures for natural language interaction, Cognitive Science, 1, 4, 1978.

Mann, W. C., Moore, J. A., and Levin, J. A., A comprehension model for human dialogue, in Proc. IJCAI-V, Cambridge, MA, 1977.

Mann, W. C., and Moore, J. A., Computer generation of multi-paragraph English text, in preparation.

Moore, J. A., Levin, J. A., and Mann, W. C., A goal-oriented model of human dialogue, AJCL microfiche 67, 1977.

Moore, J. A., Communication as a problem-solving activity, in preparation.
The Use of Object-Specific Knowledge in Natural Language Processing

Mark H. Burstein
Department of Computer Science, Yale University

1. INTRODUCTION

It is widely recognized that the process of understanding natural language texts cannot be accomplished without accessing mundane knowledge about the world [2, 4, 6, 7]. That is, in order to resolve ambiguities, form expectations, and make causal connections between events, we must make use of all sorts of episodic, stereotypic and factual knowledge. In this paper, we are concerned with the way functional knowledge of objects, and associations between objects, can be exploited in an understanding system.

Consider the sentence

(1) John opened the bottle so he could pour the wine.

Anyone reading this sentence makes assumptions about what happened which go far beyond what is stated. For example, we assume without hesitation that the wine being poured came from inside the bottle. Although this seems quite obvious, there are many other interpretations which are equally valid. John could be filling the bottle rather than emptying the wine out of it. In fact, it need not be true that the wine ever contacted the bottle. There may have been some other reason John had to open the bottle first. Yet, in the absence of a larger context, some causal inference mechanism forces us (as human understanders) to find the common interpretation in the process of connecting these two events causally.

In interpreting this sentence, we also rely on an understanding of what it means for a bottle to be "open". Only by using knowledge of what is possible when a bottle is open are we able to understand why John had to open the bottle to pour the wine out of it. Strong associations are at work here helping us to make these connections. A sentence such as

(2) John closed the bottle and poured the wine.

appears to be self-contradictory only because we assume that the wine was in the bottle before applying our knowledge of open and closed bottles to the situation. Only then do we realize that closing the bottle makes it impossible to pour the wine.

Now consider the sentence

(3) John turned on the faucet and filled his glass.

When reading this, we immediately assume that John filled his glass with water from the faucet. Yet, not only is water never mentioned in the sentence, there is nothing there to explicitly relate turning on the faucet and filling the glass. The glass could conceivably be filled with milk from a carton. However, in the absence of some greater context which forces a different interpretation on us, we immediately assume that the glass is being filled with water from the faucet.

Understanding each of these sentences requires that we make use of associations we have in memory between objects and actions commonly involving those objects, as well as relations between several different objects. This paper describes a computer program, OPUS (Object Primitive Understanding System), which constructs a representation of the meanings of sentences such as those above, including assumptions that a human understander would normally make, by accessing these types of associative memory structures. This stereotypic knowledge of physical objects is captured in OPUS using Object Primitives [5].

* This work was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract N00014-75-C-1111.
Object Primitives (OP) were designed to act in conjunction with Schank's conceptual dependency representational system [11]. The processes developed to perform conceptual analysis in OPUS involved the integration of a conceptual analyzer similar to Riesbeck's ELI [9] with demon-like procedures for memory interaction and the introduction of object-related inferences.

2. OBJECT PRIMITIVES

The primary focus in this research has been on the development of processes which utilize information provided by Object Primitives to facilitate the "comprehension" of natural language texts by computer. That is, we were primarily concerned with the introduction of stereotypic knowledge of objects into the conceptual analysis of text. By encoding information in OP descriptions, we were able to increase the interpretive power of the analyzer in order to handle sentences of the sort discussed earlier. What follows is a brief description of the seven Object Primitives; a more thorough discussion can be found in [5]. For those unfamiliar with the primitive acts of Schank's conceptual dependency theory, discussions can be found in [10, 11].

The Object Primitive CONNECTOR is used to indicate classes of actions (described in terms of Schank's primitive acts) which are normally enabled by the object being described. In particular, a CONNECTOR enables actions between two spatial regions. For example, a window and a door are both CONNECTORs which enable motion (PTRANS) of objects through them when they are open. In addition, a window is a CONNECTOR which enables the action ATTEND eyes (see) or MTRANS (acquisition of information) by the instrumental action ATTEND eyes. These actions are enabled regardless of whether the window is open or closed. That is, one can see through a window, and therefore read or observe things on the other side, even when the window is closed. In the examples discussed above, the open bottle is given a CONNECTOR description; this will be discussed further later.

A SEPARATOR disenables a transfer between two spatial regions. A closed door and a closed window are both SEPARATORs which disenable the motion between the spatial regions they adjoin. In addition, a closed door is a SEPARATOR which disenables the acts MTRANS by ATTEND eyes (unless the door is transparent) or ears. That is, one is normally prevented from seeing or hearing through a closed door. Similarly, a closed window is a SEPARATOR which disenables MTRANS with instrument ATTEND ears, although, as mentioned above, one can still see through a closed window to the other side. A closed bottle is another example of an object with a SEPARATOR description.

It should be clear by now that objects described using Object Primitives are not generally described by a single primitive. In fact, not one but several sets of primitive descriptions may be required. This is illustrated above by the combination of CONNECTOR and SEPARATOR descriptions required for a closed window, while a somewhat different set is required for an open window. These sets of descriptions form a small set of "states" which the object may be in, each state corresponding to a set of inferences and associations appropriate to the object in that condition.

A SOURCE description indicates that a major function of the object described is to provide the user of that object with some other object. Thus a faucet is a SOURCE of water, a wine bottle is a SOURCE of wine, and a lamp is a SOURCE of the phenomenon called light. SOURCEs often require some sort of activation: faucets must be turned on, wine bottles must be opened, and lamps are either turned on or lit depending on whether or not they are electric.
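To make the notion of object "states" concrete, here is one possible encoding of a bottle's two states as data. The layout, the names, and the idea of keying inference templates off a state are assumptions made for this sketch; [5] gives the actual formulation.

# Hypothetical encoding of Object Primitive state descriptions for a
# bottle. Each state bundles the primitive descriptions (and hence the
# expectations) appropriate to the object in that condition.

BOTTLE = {
    'open': {
        'CONNECTOR': {
            # Enabled classes of actions, in conceptual dependency terms:
            # things can be PTRANSed into or out of the inside, and the
            # contents can be examined by some sense modality.
            'enables': [('PTRANS', 'from', ('INSIDE', 'SELF')),
                        ('PTRANS', 'to', ('INSIDE', 'SELF')),
                        ('ATTEND', 'sense', ('INSIDE', 'SELF'))],
        },
        'SOURCE': {'output': None},   # filled in (e.g. wine) per instance
    },
    'closed': {
        'SEPARATOR': {
            # Disenabled transfers between the inside and the outside.
            'disenables': [('PTRANS', 'from', ('INSIDE', 'SELF')),
                           ('PTRANS', 'to', ('INSIDE', 'SELF'))],
        },
    },
}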
The Object Primitive CONSUMER is used to describe objects whose primary function is to consume other objects. A trash can is a CONSUMER of waste paper, a drain is a CONSUMER of liquids, and a mailbox is a CONSUMER of mail. Some objects are both SOURCEs and CONSUMERs. A pipe is a CONSUMER of tobacco and a SOURCE of smoke. An ice cube tray is a CONSUMER of water and a SOURCE of ice cubes.

Many objects can be described in part by relationships that they assume with some other objects. These relations are described using the Object Primitive RELATIONAL. Containers, such as bottles, rooms, cars, etc., have as part of their descriptions a containment relation, which may specify defaults for the type of object contained. Objects, such as tables and chairs, which are commonly used to support other objects, will be described with a support relation.

Objects such as buildings, cars, airplanes, stores, etc., are all things which can contain people. As such, they are often distinguished by the activities which people in those places engage in. One important way of encoding those activities is by referring to the scripts which describe them. The Object Primitive SETTING is used to capture the associations between a place and any script-like activities that normally occur there. It can also be used to indicate other, related SETTINGs which the object may be a part of. For example, a dining car has a SETTING description with a link both to the restaurant script and to the SETTING for passenger train. This information is important for the establishment of relevant contexts, giving access to many domain-specific expectations which will subsequently be available to guide processing, both during conceptual analysis of lexical input and when making inferences at higher levels of cognitive processing.

The final Object Primitive, GESTALT, is used to characterize objects which have recognizable, and separable, subparts. Trains, hi-fi systems, and kitchens all evoke images of objects characterizable by describing their subparts, and the way that those subparts relate to form the whole. The Object Primitive GESTALT is used to capture this type of description.

Using this set of primitives as the foundation for a memory representation, we can construct a more general bi-directional associative memory by introducing some associative links external to object primitive decompositions. For example, the conceptual description of a wine bottle will include a SOURCE description for a bottle where the SOURCE output is specified as wine. This amounts to an associative link from the concept of a wine bottle to the concept of wine. But how can we construct an associative link from wine back to wine bottles? Wine does not have an object primitive decomposition which involves wine bottles, so we must resort to some construction which is external to object primitive decompositions. Four associative links have been proposed [5], each of which points to a particular object primitive description. For the problem of wine and wine bottles, an associative OUTPUTFROM link is directed from wine to the SOURCE description of a wine bottle. This external link provides us with an associative link from wine to wine bottles.
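A minimal way to picture the two directions of association, assuming dictionary-style memory entries (the link inventory of [5] is richer than the single OUTPUTFROM link shown here):

# Hypothetical memory entries giving bi-directional association between
# wine and wine bottles. The forward direction lives inside the object
# primitive decomposition; the backward direction is an external link.

MEMORY = {
    'WINE-BOTTLE': {
        'SOURCE': {'output': 'WINE'},        # wine bottle -> wine
        'RELATIONAL': {'inside': ['WINE']},  # default containment
    },
    'WINE': {
        'OUTPUTFROM': ['WINE-BOTTLE'],       # wine -> wine bottle
    },
}

def common_sources(substance):
    # Follow the external OUTPUTFROM link back to known SOURCEs.
    return MEMORY.get(substance, {}).get('OUTPUTFROM', [])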
3. THE PROGRAM

I will now describe the processing of two sentences very similar to those discussed earlier. The computer program (OPUS) which performs the following analyses was developed using a conceptual analyzer written by Larry Birnbaum [1]. OPUS was then extended to include a capacity for setting up and firing "demons" or "triggers", as they are called in KRL [3]. The functioning of these demons will be illustrated below.

3.1 THE INITIAL ANALYSIS

In the processing of the sentence "John opened the bottle so he could pour the wine," the phrase "John opened the bottle" is analyzed to produce the following representation:

    *John* <=> *DO*
        |
      result
        |
    *bottle* <=> CONNECTOR
      ENABLES:  ?HUMO <=> PTRANS <- ?OBJ from (INSIDE SELF)
          (or)  ?HUMO <=> PTRANS <- ?OBJ to (INSIDE SELF)
          (or)  ?HUMO <=> ATTEND <- ?SENSE to ?OBJ
                (where ?OBJ is inside SELF)

Here SELF refers to the object being described (the bottle) and ?--- indicates an unfilled slot. *John* here stands for the internal memory representation for a person with the name John. Memory tokens for John and the bottle are constructed by a general demon which is triggered during conceptual analysis whenever a PP (the internal representation for an object) is introduced. OP descriptions are attached to each object token. This diagram represents the assertion that John did something which caused the bottle to assume a state where its CONNECTOR description applied. The CONNECTOR description indicates that something can be removed from the bottle, put into the bottle, or its contents can be smelled, looked at, or generally examined by some sense modality.

This CONNECTOR description is not part of the definition of the word 'open'. It is specific knowledge that people have about what it means to say that a bottle is open. In arriving at the above representation, the program must retrieve from memory this OP description of what it means for a bottle to be open. This information is stored beneath its prototype for bottles. Presumably, there is also script-like information about the different methods for opening bottles, the different types of caps (corks, twist-offs, ...), and which method is appropriate for which cap. However, for the purpose of understanding a text which does not refer to a specific type of bottle, cap, or opening procedure, what is important is the information about how the bottle can then be used once it is opened. This is the kind of knowledge that Object Primitives were designed to capture.

When the analyzer builds the state description of the bottle, a general demon associated with new state descriptions is triggered. This demon is responsible for updating memory by adding the new state information to the token in the ACTOR slot of the state description. Thus the bottle token is updated to include the given CONNECTOR description. For the purposes of this program, the bottle is then considered to be an "open" bottle. A second function of this demon is to set up explicit expectations for future actions based on the new information. In this case, templates for three actions the program might expect to see described can be constructed from the three partially specified conceptualizations shown above in the CONNECTOR description of the open bottle. These templates are attached to the state description as possible consequences of that state, for use when attempting to infer the causal connections between events.
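The demon machinery just described amounts to test-action pairs watched as structures are built. The sketch below shows one way such a state-description demon could look; the dispatch mechanism and all names are assumptions, standing in for the KRL-style triggers actually used in OPUS.

# Sketch of a demon triggered when the analyzer builds a new state
# description. The representation of states and tokens is assumed.

DEMONS = []

def demon(test):
    # Register a test-action pair; the analyzer calls run_demons() as it
    # builds each new structure.
    def register(action):
        DEMONS.append((test, action))
        return action
    return register

def run_demons(structure, memory):
    for test, action in DEMONS:
        if test(structure):
            action(structure, memory)

@demon(lambda s: s.get('type') == 'STATE')
def update_token_and_predict(state, memory):
    token = state['actor']                        # e.g. the bottle token
    token.setdefault('states', []).append(state['description'])
    # Attach expected-action templates (e.g. PTRANS from inside the
    # bottle) for later causal-chain inference.
    token['expected'] = state['description'].get('enables', [])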
3.2 CONCEPT DRIVEN INFERENCES

The phrase "so he could pour the wine" is analyzed as

    *John* <=> *DO*
        |
      enable
        |
    ?HUMO <=> PTRANS <- *wine* from (INSIDE ?CONTAINER)

When this representation is built by the analyzer, we do not know that the wine being poured came from the previously mentioned bottle. This inference is made in the program by a slot-filling demon called the CONTAINER-FINDER, attached to the primitive act PTRANS. The demon, triggered when a PTRANS from inside an unspecified container is built, looks on the list of active tokens (a part of short term memory) for any containers that might be expected to contain the substance moved, in this case wine. This is done by applying two tests to the objects in short term memory. The first, the DEFAULT-CONTAINMENT test, looks for objects described by the RELATIONAL primitive, indicating that they are containers (link = INSIDE) with the default object contained being wine. The second, the COMMON-SOURCE test, looks for known SOURCEs of wine by following the associative OUTPUTFROM link from wine. If either of these tests succeeds, then the object found is inferred to be the container poured from.

At different times, either the DEFAULT-CONTAINMENT test or the COMMON-SOURCE test may be necessary in order to establish probable containment. For example, it is reasonable to expect a vase to contain water, since the RELATIONAL description of a vase has default containment slots for water and flowers. But we do not always expect water to come from vases, since there is no OUTPUTFROM link from water to a SOURCE description of a vase. If we heard "Water spilled when John bumped the vase," containment would be established by the DEFAULT-CONTAINMENT test. Associative links are not always bi-directional (vase ---> water, but water -/-> vase) and we need separate mechanisms to trace links with different orientations. In our wine example, the COMMON-SOURCE test is responsible for establishing containment, since wine is known to be OUTPUTFROM bottles but bottles are not always assumed to hold wine.

Another inference made during the initial analysis finds the contents of the bottle mentioned in the first clause of the sentence. This expectation was set up by a demon called the CONTENTS-FINDER when the description of the open bottle, a SOURCE with unspecified output, was built. The demon causes a search of STM for an object which could be OUTPUTFROM a bottle, and the token for this particular bottle is then marked as being a SOURCE of that object. The description of this particular bottle as a SOURCE of wine is equivalent, in Object Primitive terms, to saying that the bottle is a wine bottle.
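Under the same assumed memory layout as the earlier sketches, the two containment tests reduce to a few lines; the short-term-memory interface is, again, hypothetical:

# Sketch of the CONTAINER-FINDER's two tests over short term memory
# (STM), a list of active object tokens with attached OP descriptions.

def default_containment(token, substance):
    # RELATIONAL description says: a container whose default contents
    # include the substance (e.g. vase -> water, flowers).
    rel = token['op'].get('RELATIONAL', {})
    return substance in rel.get('inside', [])

def common_source(token, substance, memory):
    # The substance's external OUTPUTFROM link names this kind of object
    # as a known SOURCE (e.g. wine -> wine bottle).
    return token['kind'] in memory.get(substance, {}).get('OUTPUTFROM', [])

def find_container(stm, substance, memory):
    for token in stm:
        if default_containment(token, substance) or \
           common_source(token, substance, memory):
            return token   # inferred container poured from
    return None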
3.3 CAUSAL VERIFICATION

Once the requests trying to fill slots not filled during the initial analysis have been considered, the process which attempts to find causal connections between conceptualizations is activated. In this particular case, the analyzer has already indicated that the appropriate causal link is enablement. In general, however, the lexical information which caused the analyzer to build this causal link is only an indication that some enabling relation exists between the two actions (opening the bottle and pouring the wine). In fact, a long causal chain may be required to connect the two acts, with an enablement link being only one link in that chain. Furthermore, one cannot always rely on the text to indicate where causal relationships exist. The sentence "John opened the bottle and poured the wine." must ultimately be interpreted as virtually synonymous with (1) above.

The causal verification process first looks for a match between the conceptual representation of the enabled action (pouring the wine) and one of the potentially enabled acts derived earlier from the OP description of the opened bottle. In this example, a match is immediately found between the action of pouring from the bottle and the expected action generated from the CONNECTOR description of the open bottle (PTRANS FROM (INSIDE PART SELF)). Other Object Primitives may also lead to expectations for actions, as we shall see later.

When a match is found, further conceptual checks are made on the enabled act to ensure that the action described "makes sense" with the particular objects currently filling the slots in that act's description. When the match is based on expectations derived from the CONNECTOR description of a container, the check is a "container/contents check," which attempts to ensure that the object found in the container may reasonably be expected to be found there. The sentence "John opened the bottle so he could pull out the elephant" is peculiar because no associations exist which would lead us to expect that elephants are ever found in bottles. The strangeness of this sentence can only be explained by the application of stereotypic knowledge about what we expect and don't expect to find inside a bottle. The container/contents check is similar to the test described above in connection with the CONTAINER-FINDER demon. That is, the bottle is checked by both the DEFAULT-CONTAINMENT test and the COMMON-SOURCE test for known links relating wine and bottles.

When this check succeeds, the enable link has been verified by matching an expected action and by checking restrictions on related objects appearing in the slots of that action. The two CD acts that matched are then merged. The merging process accomplishes several things. First, it completes the linking of the causal chain between the events described in the sentence. Second, it causes the filling of empty slots appearing in either the enabled act or in the enabling act, wherever one left a slot unspecified and the other had that slot filled. These newly filled slots can propagate back along the causal chain, as we shall see in the example of the next section.

3.4 CAUSAL CHAIN CONSTRUCTION

In processing the sentence

(4) John turned on the faucet so he could drink.

the causal chain cannot be built by a direct match with an expected event. Additional inferences must be made to complete the chain between the actions described in the sentence. The representation produced by the conceptual analyzer for "John turned on the faucet" is

    *John* <=> *DO*
        |
      result
        |
    *faucet* <=> (SOURCE with OUTPUT = *water*)

As with the bottle in the previous example, the description of the faucet as an active SOURCE of water is based on information found beneath the prototype for faucet, describing the "on" state for that object. The principal expectation for SOURCE objects is that the person who "turned on" the SOURCE object wants to take control of (and ultimately make use of) whatever it is that is output from that SOURCE. In CD, this is expressed by a template for an ATRANS (abstract transfer) of the output object, in this case water. An important side effect of the construction of this expectation is that a token for some water is created, which can be used by a slot-filling inference later. The representation for "he could drink" is partially described by an INGEST with an unspecified liquid in the OBJECT slot.
A special request to look for the missing liquid is set up by a demon on the act INGEST, similar to the one on the PTRANS in the previous example. This request finds the token for water placed in short term memory when the expectation that someone would ATRANS control of some water was generated.

    *faucet* <=> (SOURCE with OUTPUT = *water*)
        |
      (possible enabled action)
        |
    ?HUMO <=> ATRANS <- *water* TO ?HUMO

The causal chain completion that occurs for this sentence is somewhat more complicated than it was for the previous case. As we have seen, the only expectation set up by the SOURCE description of the faucet was for an ATRANS of water from the faucet. However, the action that is described here is an INGEST with instrumental PTRANS. When the chain connector fails to find a match between the ATRANS and either the INGEST or its instrumental PTRANS, inference procedures are called to generate any obvious intermediate states that might connect these two acts. The first inference rule that is applied is the resultative inference [8] that an ATRANS of an object TO someone results in a state where the object is possessed by (POSS-BY) that person. Once this state has been generated, it is matched against the INGEST in the same way the ATRANS was. When this match fails, no further forward inferences are generated, since possession of water can lead to a wide range of new actions, no one of which is strongly expected. The backward chaining inferencer is then called to generate any known preconditions for the act INGEST. The primary precondition (causative inference) for drinking is that the person doing the drinking has the liquid which he or she is about to drink. This inferred enabling state is then found to match the state (someone possesses water) inferred from the expected ATRANS. The match completes the causal chain, causing the merging of the matched concepts. In this case, the merging process causes the program to infer that it was probably John who took (ATRANSed) the water from the faucet, in addition to turning it on. Had the sentence read "John turned on the faucet so Mary could drink.", the program would infer that Mary took the water from the faucet.

    *faucet* <=> (SOURCE with OUTPUT = *water*)
        |
      enable
        |
    ?HUMO <=> ATRANS <- *water* TO ?HUMO
        |
      result
        |
    *water* (POSS-BY ?HUMO)
        |
      match? yes ... infer ?HUMO = *John*
        |
    *water* (POSS-BY *John*)
        |
      enable (backward inference)
        |
    *John* <=> INGEST <- ?LIQUID
        |
      inst
        |
    *John* <=> PTRANS <- ?LIQUID

One should note here that the additional inferences used to complete the causal chain were very basic. The primary connections came directly from object-specific expectations derived from the Object Primitive descriptions of the objects involved.
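The forward/backward chaining just traced can be summarized in a few lines. The one-step rule set below (one resultative and one causative rule) covers only this example, and the unification routine is assumed:

# Sketch of causal chain completion between an expected act and an
# observed act, using one step of forward (resultative) and one step of
# backward (causative) inference. Rule and matcher details are assumed.

def resultative(act):
    # ATRANS of an object TO someone results in possession.
    if act[0] == 'ATRANS':
        _, obj, recipient = act
        return ('POSS-BY', obj, recipient)
    return None

def causative_precondition(act):
    # Drinking requires possessing the liquid drunk.
    if act[0] == 'INGEST':
        _, actor, liquid = act
        return ('POSS-BY', liquid, actor)
    return None

def connect(expected_act, observed_act, unify):
    # Try a direct match first, then via inferred intermediate states.
    if unify(expected_act, observed_act):
        return [expected_act, observed_act]
    forward = resultative(expected_act)
    backward = causative_precondition(observed_act)
    if forward and backward and unify(forward, backward):
        # Merging fills unspecified slots (e.g. ?HUMO = John), which can
        # propagate back along the chain.
        return [expected_act, forward, observed_act]
    return None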
4. CONCLUSION

It is important to understand how OPUS differs from previous inference strategies in natural language processing. To emphasize the original contributions of OPUS, we will compare it to Rieger's early work on inference and causal chain construction. Since Rieger's research is closely related to OPUS, a comparison of this system to Rieger's program will illustrate which aspects of OPUS are novel, and which aspects have been inherited.

There is a great deal of similarity between the types of inferences used in OPUS and those used by Rieger in his description of MEMORY [8]. The causative and resultative inferences used to complete the causal chain in our last example came directly from that work. In addition, the demons used by OPUS are similar in flavor to the forward inferences and specification (slot-filling) inferences described by Rieger. Expectations are explicitly represented here as they were there, allowing them to be used in more than one way, as in the case where water is inferred to be the INGESTed liquid solely from its presence in a previous expectation.

There are, however, two ways in which OPUS departs from the inference strategies of MEMORY in significant ways: (1) on the level of computer implementation there is a reorganization of process control in OPUS, and (2) on a theoretical level OPUS exploits an additional representational system which allows inference generation to be more strongly directed and controlled.

In terms of implementation, OPUS integrates the processes of conceptual analysis and memory-based inference processing. By using demons, inferences can be made during conceptual analysis, as the conceptual memory representations are generated. This eliminates much of the need for an inference discrimination procedure acting on completely pre-analyzed conceptualizations produced by a separate program module. In MEMORY, the processes of conceptual analysis and inference generation were sharply modularized for reasons which were more pragmatic than theoretical. Enough is known about the interactions of analysis and inference at this time for us to approach the two as concurrent processes which share control and contribute to each other in a very dynamic manner. Ideas from KRL [3] were instrumental in designing an integration of previously separate processing modules.

On a more theoretical level, the inference processes used for causal chain completion in OPUS are more highly constrained than was possible in Rieger's system. In MEMORY, all possible inferences were made for each new conceptualization which was input to the program. Initially, input consisted of concepts coming from the parser. MEMORY then attempted to make inferences from the conceptualizations which it itself had produced, repeating this cycle until no new inferences could be generated. Causal chains were connected when matches were found between inferred concepts and concepts already stored in its memory. However, the inference mechanisms used were in no way directed specifically to the task of making connections between concepts found in its input text. This led to a combinatorial explosion in the number of inferences made from each new input.

In OPUS, forward expectations are based on specific associations from the objects mentioned, and only when the objects in the text are described in a manner that indicates they are being used functionally. In addition, no more than one or two levels of forward or backward inferences are made before the procedure is exhausted; the system stops once a match is made or it runs out of highly probable inferences to make. Thus, there is no chance for the kinds of combinatorial explosion Rieger experienced. By strengthening the representation and exploiting an integrated processing strategy, the combinatorial explosion problem can be eliminated.

OPUS makes use of a well structured set of memory associations for objects, the Object Primitives, to encode information which can be used in a variety of Rieger's general inference classes.
Because this information is directly associated with memory representations for the objects, rather than being embodied in disconnected inference rules elsewhere, appropriate inferences for the objects mentioned can be found directly. By using this extended representational system, we can begin to examine the kinds of associative memory required to produce what appeared from Rieger's model to be the "tremendous amount of 'hidden' computation" necessary for the processing of any natural language text.

REFERENCES

[1] Birnbaum, L., and Selfridge, M. (1978). On Conceptual Analysis. (unpublished) Yale University, New Haven, CT.

[2] Bobrow, D. G., Kaplan, R. M., Kay, M., Norman, D. A., Thompson, H., and Winograd, T. (1977). GUS, a frame driven dialog system. Artificial Intelligence, Vol. 8, No. 1.

[3] Bobrow, D. G., and Winograd, T. (1977). An overview of KRL, a knowledge representation language. Cognitive Science 1, No. 1.

[4] Charniak, E. (1972). Toward a model of children's story comprehension. AITR-266, Artificial Intelligence Laboratory, MIT, Cambridge, MA.

[5] Lehnert, W. G. (1978). Representing physical objects in memory. Technical Report #111. Dept. of Computer Science, Yale University, New Haven, CT.

[6] Minsky, M. (1975). A framework for representing knowledge. In Winston, P. H., ed., The Psychology of Computer Vision, McGraw-Hill, New York, NY.

[7] Norman, D. A., Rumelhart, D. E., and the LNR Research Group (1975). Explorations in Cognition. W. H. Freeman and Co., San Francisco.

[8] Rieger, C. (1975). Conceptual memory. In R. C. Schank, ed., Conceptual Information Processing. North Holland, Amsterdam.

[9] Riesbeck, C. and Schank, R. C. (1976). Comprehension by computer: expectation-based analysis of sentences in context. Technical Report #78. Dept. of Computer Science, Yale University, New Haven, CT.

[10] Schank, R. C. (1975). Conceptual Dependency Theory. In Schank, R. C., ed., Conceptual Information Processing. North Holland, Amsterdam.

[11] Schank, R. C. and Abelson, R. P. (1977). Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Press, Hillsdale, NJ.
READING WITH A PURPOSE

Michael Lebowitz
Department of Computer Science, Yale University

1. INTRODUCTION

A newspaper story about terrorism, war, politics or football is not likely to be read in the same way as a gothic novel, college catalog or physics textbook. Similarly, the process used to understand a casual conversation is unlikely to be the same as the process of understanding a biology lecture or TV situation comedy. One of the primary differences amongst these various types of comprehension is that the reader or listener will have different goals in each case. The reasons a person has for reading, or the goals he has when engaging in conversation, will have a strong effect on what he pays attention to, how deeply the input is processed, and what information is incorporated into memory.

The computer model of understanding described here addresses the problem of using a reader's purpose to assist in natural language understanding. This program, the Integrated Partial Parser (IPP), is designed to model the way people read newspaper stories in a robust, comprehensive manner. IPP has a set of interests, much as a human reader does. At the moment it concentrates on stories about international violence and terrorism.

IPP contrasts sharply with many other techniques which have been used in parsing. Most models of language processing have had no purpose in reading. They pursue all inputs with the same diligence and create the same type of representation for all stories. The key difference in IPP is that it maps lexical input into as high a level representation as possible, thereby performing the complete understanding process. Other approaches have invariably first tried to create a preliminary representation, often a strictly syntactic parse tree, in preparation for real understanding. Since high-level, semantic representations are ultimately necessary for understanding, there is no obvious need for creating a preliminary syntactic representation, which can be a very difficult task. The isolation of the lexical level processing from more complete understanding processes makes it very difficult for high-level predictions to influence low-level processing, which is crucial in IPP.

One very popular technique for creating a low-level representation of sentences has been the Augmented Transition Network (ATN). Parsers of this sort have been discussed by Woods [11] and Kaplan [3]. An ATN-like parser was developed by Winograd [10]. Most ATN parsers have dealt primarily with syntax, occasionally checking a few simple semantic properties of words. A more recent parser which does an isolated syntactic parse was created by Marcus [4]. The important thing to note about all of these parsers is that they view syntactic parsing as a process to be done prior to real understanding. Even though systems of this sort at times make use of semantic information, they are driven by syntax. Their goal of developing a syntactic parse tree is not an explicit part of the purpose of human understanding.

* This work was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract N00014-75-C-1111.

The type of understanding done by IPP is in some sense a compromise between the very detailed understanding of
EL1 was a semantically driven parser which maps English language sentences into the Conceptual Dependency [6] representations of their meanings, it made extensive use of the semantic properties of the words being processed, but interacted only slightly with the rest of the understanding processes it was a part of. it would pass o f f a completed Conceptual Dependency representation of each sentence to SAM or PAM which would try to incorporate it into an overall story representation. BOth these programs attempted to understand each sentence fully, SAM in terms of scripts, PAM in terms of plans and goals, before going onto the next sentence. (In [~] Scnank and Abelson describe scripts, plans and goals.) SAM and PAM model the way people might read a story i f they were expecting a detalied test on it, or the way a textbook might be read. £acn program's purpose was to get out of a story every piece of informatlon possible, fney treated each piece of every story as being equally important, ~nd requiring total understanding. Both of these programs are relatively fragile, requiring compiex dictionary entries for every word they might en0ounter, as well as extensive Knowledge of the appropriate scripts and plans. FRÙMP, in contrast to SAM and rAM, is a robust system whlcn attempts to extract the amount of information from a newspaper story which a person gets when ne skims rapidly. It does this by selecting a script to represent the story and then trying to fill in the various slots which are important to understand the story. Its purpose is simply to obtain enough information from a story to produce a meaningful summary. FRUMP is strongly top-down, and worries about incoming information from the story only insofar ~s it helps fill In the details of the script which it selected. 50 wnile FRUMP is robust, simply skipping over words it doesn't Know, it does miss interesting sections of stories which are not explained by its initial selection of a script. 18P attempts to model the way people normally read a newspaper story. Unlike SAM and PAH, it does not care if it gets every last plece of information out of a story. Dull, mundane information is gladly ignored. But, In contrast with FRUMP, it does not want to miss interesting parts of stories simply because tney do not mesh with initial expectations. It tries to create a representation which captures the important aspects of each story, but also tries to minimize extensive, unnecessary processing which does not contrlbute to the understanding of the story. Thus IFP's purpose is to decide wnat parts of a story, if any, are interesting (in IPP's case, that means related to terrorism), and incorporate the appropriate information into its memory. The concepts used to determine what is interesting are an extension of ideas presented by SctmnK [7]. 2. How l~ EOA~s The ultimate purpose of reading a newspaper story is to incorporate new information into memory. In order to do this, a number of different Kinds of Knowledge are needed. The understander must Know the meanings of words, llngulatic rules about now words combine into sentences, the conventions used in writing newspaper 5g stories, and, crucially, have extensive knowledge about the "real world." It is impossible to properly understand a story without applying already existing knowledge about the functioning of the world. This means the use of long-term memory cannot be fruitfully separated from other aspects of the natural understandin~ problem. 
The management of all this information by an understander is a critical problem in comprehension, since the application of all potentially relevant knowledge all the time would seriously degrade the understanding process, possibly to the point of halting it altogether. In our model of understanding, the role played by the interests of the understander is to allow detailed processing to occur only on the parts of the story which are important to overall understanding, thereby conserving processing resources.

Central to any understanding system is the type of knowledge structure used to represent stories. At the present time, IPP represents stories in terms of scripts similar to, although simpler than, those used by SAM and FRUMP. Most of the common events in IPP's area of interest, terrorism, such as hijackings, kidnappings, and ambushes, are reasonably stereotyped, although not necessarily with all the temporal sequencing present in the scripts SAM uses. IPP also represents some events directly in Conceptual Dependency.

The representations in IPP consist of two types of structures. There are the event structures themselves, generally scripts such as $KIDNAP and $AMBUSH, which form the backbone of the story representations, and tokens which fill the roles in the event structures. These tokens are basically the Picture Producers of [6], and represent the concepts underlying words such as "airliner," "machine-gun" and "kidnapper." The final story representation can also include links between event structures indicating causal, temporal and script-scene relationships.

Due to IPP's limited repertoire of structures with which to represent events, it is currently unable to fully understand some stories which make sense only in terms of goals and plans, or other higher-level representations. However, the understanding techniques used in IPP should be applicable to stories which require the use of such knowledge structures. This is a topic of current research.

It is worth noting that the form of a story's representation may depend on the purpose behind its being read. If the reader is only mildly interested in the subject of the story, scriptal representation may well be adequate. On the other hand, for a story of great interest to the reader, additional effort may be expended to allow the goals and plans of the actors in the story to be worked out. This is generally more complex than simply representing a story in terms of stereotypical knowledge, and will only be attempted in cases of great interest.

In order to achieve its purpose, IPP does extensive "top-down" processing. That is, it makes predictions about what it is likely to see. These predictions range from low-level, syntactic predictions ("the next noun phrase will be the person kidnapped," for instance) to quite high-level, global predictions ("expect to see demands made by the terrorists"). Significantly, the program only makes predictions about things it would like to know. It doesn't mind skipping over unimportant parts of the text.

The top-down predictions made by IPP are implemented in terms of requests, similar to those used by Riesbeck [5], which are basically just test-action pairs. While such an implementation in theory allows arbitrary computations to be performed, the actions used in IPP are in fact quite limited. IPP requests can build an event structure, link event structures together, use a token to fill a role in an event structure, activate new requests, or de-activate other active requests.
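One way to picture such a request in code, under assumed data structures (the dispatch loop, the class, and the story layout below are inventions of this sketch, not IPP's actual machinery):

# Sketch of a request: a test-action pair with the limited action
# repertoire described above. Names and structure layout are illustrative.

class Request:
    def __init__(self, test, action):
        self.test = test        # e.g. "next item is a person token"
        self.action = action    # one of the limited IPP-style actions

active_requests = []

def consider(item, story):
    # Fire every active request whose test the new item satisfies.
    for req in list(active_requests):
        if req.test(item):
            req.action(item, story)
            active_requests.remove(req)

# Example: after a $SHOOT event is built, look for its victim.
def expect_victim(event):
    active_requests.append(Request(
        test=lambda item: item.get('class') == 'PERSON',
        action=lambda item, story: event['roles'].update(VICTIM=item)))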
The tests in IPP requests are also limited in nature. They can look for certain types of events or tokens, check for words with a specified property in their dictionary entry, or even check for specific lexical items. The tests for lexical items are quite important in keeping IPP's processing efficient. One advantage is that very specific top-down predictions will often allow an otherwise very complex word disambiguation process to be bypassed. For example, in a story about a hijacking, IPP expects the word "carrying" to indicate that the passengers of the hijacked vehicle are to follow. So it never has to consider in any detail the meaning of "carrying." Many function words really have no meaning by themselves, and the type of predictive processing used by IPP is crucial in handling them efficiently.

Despite its top-down orientation, IPP does not ignore unexpected input. Rather, if the new information is interesting in itself, the program will concentrate on it, making new predictions in addition to, or instead of, the original ones. The proper integration of top-down and bottom-up processing allows the program to be efficient, and yet not miss interesting, unexpected information.

The bottom-up processing of IPP is based around a classification of words that is done strictly on the basis of processing considerations. IPP is interested in the traditional syntactic classifications only when they help determine how words should be processed. IPP's criteria for classification involve the type of data structures words build, and when they should be processed. Words can build either of the main data structures used in IPP, events and tokens. The words building events are usually verbs, but many syntactic nouns, such as "kidnapping," "riot," and "demonstration", also indicate events, and are handled in just the same way as traditional verbs. Some words, such as most adjectives and adverbs, do not build structures but rather modify structures built by other words. These words are handled according to the type of structure they modify.

The second criterion for classifying words -- when they should be processed -- is crucial to IPP's operation. In order to model a rapid, normally paced reader, IPP attempts to avoid doing any processing which will not add to its overall understanding of a story. To do this, it classifies words into three groups -- words which must be fully processed immediately, words which should be saved in short-term memory and then processed later, if necessary, and words which should be skipped entirely.

Words which must be processed immediately include interesting words building either event structures or tokens. "Gunmen," "kidnapped" and "exploded" are typical examples. These words give us the overall framework of a story, indicate how much effort should be devoted to further analysis, and, most importantly, generate the predictions which allow later processing to proceed efficiently.

The save-and-process-later words are those which may become significant later, but are not obviously important when they are read. This class is quite substantial, including many dull nouns and nearly all adjectives and adverbs. In a noun phrase such as "numerous Italian gunmen," there is no point in processing to any depth "numerous" or "Italian" until we know the word they modify is important enough to be included in the final representation. In the cases where further processing is necessary, IPP has the proper information to easily incorporate the saved words into the story representation, and in the many cases where the word is not important, no effort above saving the word is required. The processing strategy for these words is a key to modeling normal reading.
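The three-way split might be realized as a simple dispatch on dictionary entries; the dictionary format and class names here are invented for the sketch:

# Sketch of IPP-style word triage. Dictionary entries carry a processing
# class; only 'immediate' words get full processing at read time.

DICTIONARY = {
    'gunmen':    {'class': 'immediate', 'builds': 'token'},
    'kidnapped': {'class': 'immediate', 'builds': 'event'},
    'numerous':  {'class': 'save'},
    'italian':   {'class': 'save'},
    'the':       {'class': 'skip'},
}

def build_structure(word, entry, saved_modifiers):
    # Stand-in for full processing: build an event or token, folding in
    # any saved modifiers only now that they are known to matter.
    return {'head': word, 'kind': entry['builds'], 'mods': saved_modifiers}

def process_story(words):
    saved, structures = [], []
    for w in words:
        entry = DICTIONARY.get(w.lower(), {'class': 'skip'})
        if entry['class'] == 'immediate':
            structures.append(build_structure(w, entry, saved))
            saved = []
        elif entry['class'] == 'save':
            saved.append(w)       # cheap: hold in short-term memory
        # 'skip' words take no processing effort at all
    return structures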
In the cases where further processing is necessary, IPP has the proper information to easily incorporate the saved words into the story representation, and in the many cases where the word is not important, no effort above saving the word is required. The processing strategy for these words is a key to modeling normal reading.

The final class of words are those IPP skips altogether. This class includes very uninteresting words which neither contribute processing clues nor add to the story representation. Many function words, adjectives and verbs irrelevant to the domain at hand, and most pronouns fall into this category. These words can still be significant in cases where they are predicted, but otherwise they are ignored by IPP and take no processing effort.

In addition to the processing techniques mentioned so far, IPP makes use of several very pragmatic heuristics. These are particularly important in processing noun groups properly. An example of the type of heuristic used is IPP's assumption that the first actor in a story tends to be important, and is worth extra processing effort. Other heuristics can be seen in the example in section 3. IPP's basic strategy is to make reasonable guesses about the appropriate representation as quickly as possible, facilitating later processing, and fix things later if its guesses prove to be wrong.

3. A DETAILED EXAMPLE

In order to illustrate how IPP operates, and how its purpose affects its processing, an annotated run of IPP on a typical story, one taken from the Boston Globe, is shown below. The text between the rows of stars has been added to explain the operation of IPP. Items beginning with a dollar sign, such as $TERRORISM, indicate scripts used by IPP to represent events.

[PHOTO: Initiated Sun 24-Jun-79 3:36PM]
@RUN IPP
*(PARSE S1)

Input: S1 (3 14 79) IRELAND
(GUNMEN FIRING FROM AMBUSH SERIOUSLY WOUNDED AN 8-YEAR-OLD GIRL AS SHE WAS BEING TAKEN TO SCHOOL YESTERDAY AT STEWARTSTOWN COUNTY TYRONNE)

Processing:

GUNMEN : Interesting token - GUNMEN
Predictions - SHOOTING-WILL-OCCUR ROBBERY-SCRIPT TERRORISM-SCRIPT HIJACKING-SCRIPT

********************************************************
GUNMEN is marked in the dictionary as inherently interesting. In humans this presumably occurs after a reader has noted that stories involving gunmen tend to be interesting. Since it is interesting, IPP fully processes GUNMEN, knowing that it is important to its purpose of extracting the significant content of the story. It builds a token to represent the GUNMEN and makes several predictions to facilitate later processing. There is a strong possibility that some verb conceptually equivalent to "shoot" will appear. There is also a set of scripts, including $ROBBERY, $TERRORISM, and $HIJACK, which are likely to appear, so IPP creates predictions looking for clues indicating that one of these scripts should be activated and used to represent the story.
********************************************************

FIRING : Word satisfies prediction
Prediction confirmed - SHOOTING-WILL-OCCUR
Instantiated $SHOOT script
Predictions - $SHOOT-ROLE-FINDER REASON-FOR-SHOOTING $SHOOT-SCENES

********************************************************
FIRING satisfies the prediction for a "shoot" verb. Notice that the prediction immediately disambiguates FIRING; other senses of the word, such as "terminate employment," are never considered. Once IPP has confirmed an event, it builds a structure to represent it, in this case the $SHOOT script, and the token for GUNMEN is filled in as the actor.
Predictions are made trying to find the unknown roles of the script, the VICTIM in particular, the reason for the shooting, and any scenes of $SHOOT which might be found.
********************************************************

Instantiated $ATTACK-PERSON script
Predictions - $ATTACK-PERSON-ROLE-FINDER $ATTACK-PERSON-SCENES

********************************************************
IPP does not consider the $SHOOT script to be a total explanation of a shooting event. It requires a representation which indicates the purpose of the various actors. In the absence of any other information, IPP assumes people who shoot are deliberately attacking someone. So the $ATTACK-PERSON script is inferred, and $SHOOT attached to it as a scene. The $ATTACK-PERSON representation allows IPP to make inferences which are relevant to any case of a person being attacked, not just shootings. IPP is still not able to instantiate any of the high-level scripts predicted by GUNMEN, since the $ATTACK-PERSON script is associated with several of them.
********************************************************

FROM : Function word
Predictions - FILL-FROM-SLOT

********************************************************
FROM in a context such as this normally indicates that the location from which the attack was made is to follow, so IPP makes a prediction to that effect. However, since a word building a token does not follow, the prediction is deactivated. The fact that AMBUSH is syntactically a noun is not relevant, since IPP's prediction looks for a word which identifies a place.
********************************************************

AMBUSH : Scene word
Predictions - $AMBUSH-ROLE-FINDER $AMBUSH-SCENES
Prediction confirmed - TERRORISM-SCRIPT
Instantiated $TERRORISM script
Predictions - TERRORIST-DEMANDS $TERRORISM-ROLE-FINDER $TERRORISM-SCENES COUNTER-MEASURES

********************************************************
IPP knows the word AMBUSH to indicate an instance of the $AMBUSH script, and that $AMBUSH can be a scene of $TERRORISM (i.e., it is an activity which can be construed as a terrorist act). This causes the prediction made by GUNMEN that $TERRORISM was a possible script to be triggered. Even if AMBUSH had other meanings, or could be associated with other higher-level scripts, the prediction would enable quick, accurate identification and incorporation of the word's meaning into the story representation. IPP's purpose of associating the shooting with a high-level knowledge structure which helps to explain it has been achieved. At this point in the processing an instance of $TERRORISM is constructed to serve as the top-level representation of the story. The $AMBUSH and $ATTACK-PERSON scripts are attached as scenes of $TERRORISM.
********************************************************

SERIOUSLY : Skip and save
WOUNDED : Word satisfies prediction
Prediction confirmed - $WOUND-SCENE
Predictions - $WOUND-ROLE-FINDER $WOUND-SCENES

********************************************************
$WOUND is a known scene of $ATTACK-PERSON, representing a common outcome of an attack. It is instantiated and attached to $ATTACK-PERSON. IPP infers that the actor of $WOUND is probably the same as for $ATTACK-PERSON, i.e., the GUNMEN.
********************************************************

AN : Skip and save
8-YEAR-OLD : Skip and save
GIRL : Normal token - GIRL
Prediction confirmed - $WOUND-ROLE-FINDER-VICTIM

********************************************************
GIRL builds a token which fills the VICTIM role of the $WOUND script.
Since IPP has inferred that the VICTIMs of the $ATTACK-PERSON and $SHOOT scripts are the same as the VICTIM of $WOUND, it also fills in those roles. Identifying these roles is integral to IPP's purpose of understanding the story, since an attack on a person can only be properly understood if the victim is known. As this person is important to the understanding of the story, IPP wants to acquire as much information as possible about her. Therefore, it looks back at the modifiers temporarily saved in short-term memory, 8-YEAR-OLD in this case, and uses them to modify the token built for GIRL. The age of the girl is noted as eight years. This information could easily be crucial to appreciating the interesting nature of the story.
********************************************************

AS : Skip
SHE : Skip
WAS : Skip and save
BEING : Dull verb - skipped
TAKEN : Skip
TO : Function word
SCHOOL : Normal token - SCHOOL
YESTERDAY : Normal token - YESTERDAY

********************************************************
Nothing in this phrase is either inherently interesting or fulfills expectations made earlier in the processing of the story, so it is all processed very superficially, adding nothing to the final representation. It is important that IPP makes no attempt to disambiguate words such as TAKEN, an extremely complex process, since it knows none of the possible meanings will add significantly to its understanding.
********************************************************

AT : Function word
STEWARTSTOWN : Skip and save
COUNTY : Skip and save
TYRONNE : Normal token - TYRONNE
Prediction confirmed - $TERRORISM-ROLE-FINDER-PLACE

********************************************************
STEWARTSTOWN COUNTY TYRONNE satisfies the prediction for the place where the terrorism took place. IPP has inferred that all the scenes of the event took place at the same location. IPP expends effort in identifying this role, as location is crucial to the understanding of most stories. It is also important in the organization of memories about stories: an incidence of terrorism in Northern Ireland is understood differently from one in New York or Geneva.
********************************************************

Story Representation:

** MAIN EVENT **
SCRIPT $TERRORISM
    ACTOR  GUNMEN
    PLACE  STEWARTSTOWN COUNTY TYRONNE
    TIME   YESTERDAY
    SCENES
        SCRIPT $AMBUSH
            ACTOR  GUNMEN
        SCRIPT $ATTACK-PERSON
            ACTOR  GUNMEN
            VICTIM 8 YEAR OLD GIRL
            SCENES
                SCRIPT $SHOOT
                    ACTOR  GUNMEN
                    VICTIM 8 YEAR OLD GIRL
                SCRIPT $WOUND
                    ACTOR  GUNMEN
                    VICTIM 8 YEAR OLD GIRL
                    EXTENT GREATERTHAN-*NORM*

********************************************************
IPP's final representation indicates that it has fulfilled its purpose in reading the story. It has extracted roughly the same information as a person reading the story quickly. IPP has recognized an instance of terrorism consisting of an ambush in which an eight-year-old girl was wounded. That seems to be about all a person would normally remember from such a story.
********************************************************

[PHOTO: Terminated Sun 24-Jun-79 3:38PM]

As it processes a story such as this one, IPP keeps track of how interesting it feels the story is. Novelty and relevance tend to increase interestingness, while redundancy and irrelevance decrease it. For example, in the story shown above, the fact that the victim of the shooting was an 8-year-old increases the interest of the story, and the incident taking place in Northern Ireland, as opposed to a more unusual site for terrorism, decreases the interest.
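The paper does not give the bookkeeping behind this interest measure. A rough, hypothetical sketch, with weights and threshold invented purely for illustration, might look like this:

    # Hypothetical interest bookkeeping: novelty and relevance raise the
    # score, redundancy and irrelevance lower it. All numbers are invented.
    class Interest:
        def __init__(self, floor=-3.0):
            self.score = 0.0
            self.floor = floor                 # below this, abandon the story
            self.seen = set()

        def note(self, feature, relevant=True, weight=1.0):
            novel = feature not in self.seen
            self.seen.add(feature)
            self.score += weight if (novel and relevant) else -weight

        def keep_reading(self):
            return self.score > self.floor

    interest = Interest()
    interest.note("victim-is-8-year-old")                           # raises it
    interest.note("terrorism-in-northern-ireland", relevant=False)  # lowers it

A score that sinks below the floor would let the program abandon the story, which is exactly the behavior described next.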
The story's interest is used to determine how much effort should be expended in trying to fill in more details of the story. If the level of interestingness decreases far enough, the program can stop processing the story and look for a more interesting one, in the same way a person does when reading through a newspaper.

4. ANOTHER EXAMPLE

The following example further illustrates the capabilities of IPP. In this example only IPP's final story representation is shown. This story was also taken from the Boston Globe.

[PHOTO: Initiated Wed 27-Jun-79 1:00PM]
@RUN IPP
*(PARSE S2)

Input: S2 (6 3 79) GUATEMALA
(THE SON OF FORMER PRESIDENT EUGENIO KJELL LAUGERUD WAS SHOT DEAD BY UNIDENTIFIED ASSAILANTS LAST WEEK AND A BOMB EXPLODED AT THE HOME OF A GOVERNMENT OFFICIAL POLICE SAID)

Story Representation:

** MAIN EVENT **
SCRIPT $TERRORISM
    ACTOR  UNKNOWN ASSAILANTS
    SCENES
        SCRIPT $ATTACK-PERSON
            ACTOR  UNKNOWN ASSAILANTS
            VICTIM SON OF PREVIOUS PRESIDENT EUGENIO KJELL LAUGERUD
            SCENES
                SCRIPT $SHOOT
                    ACTOR  UNKNOWN ASSAILANTS
                    VICTIM SON OF PREVIOUS PRESIDENT EUGENIO KJELL LAUGERUD
                SCRIPT $KILL
                    ACTOR  UNKNOWN ASSAILANTS
                    VICTIM SON OF PREVIOUS PRESIDENT EUGENIO KJELL LAUGERUD
        SCRIPT $ATTACK-PLACE
            ACTOR  UNKNOWN ASSAILANTS
            PLACE  HOME OF GOVERNMENT OFFICIAL
            SCENES
                SCRIPT $BOMB
                    ACTOR  UNKNOWN ASSAILANTS
                    PLACE  HOME OF GOVERNMENT OFFICIAL

[PHOTO: Terminated Wed 27-Jun-79 1:09PM]

This example makes several interesting points about the way IPP operates. Notice that IPP has jumped to a conclusion about the story which, while plausible, could easily be wrong: it assumes that the actor of the $BOMB and $ATTACK-PLACE scripts is the same as the actor of the $TERRORISM script, which was in turn inferred from the actor of the shooting incident. This is plausible, as news stories are normally about a coherent set of events with logical relations amongst them. So it is reasonable for a story to be about a series of related acts of terrorism committed by the same person or group, and that is what IPP assumes here, even though that may not be correct. But this kind of inference is exactly the kind which IPP must make in order to do efficient top-down processing, despite the possibility of errors.

The other interesting point about this example is the way some of IPP's quite pragmatic heuristics for processing give positive results. For instance, as mentioned earlier, the first actor mentioned has a strong tendency to be important to the understanding of a story. In this story that means that the modifying prepositional phrase "of former President Eugenio Kjell Laugerud" is analyzed and attached to the token built for "son," usually not an interesting word. Heuristics of this sort give IPP its power and robustness, rather than any single rule about language understanding.

5. CONCLUSION

IPP has been implemented on a DECsystem 20/50 at Yale. It currently has a vocabulary of more than 1400 words which is being continually increased in an attempt to make the program an expert understander of newspaper stories about terrorism. It is also planned to add information about higher-level knowledge structures such as goals and plans, and to expand IPP's domain of interest. To date, IPP has successfully processed over 50 stories taken directly from various newspapers, many sight unseen.

The difference between the powers of IPP and the syntactically driven parsers mentioned earlier can best be seen by the kinds of sentences they handle. Syntax-based parsers generally deal with relatively simple, syntactically well-formed sentences.
IPP handles such sentences, but also accurately processes stories taken directly from newspapers, which often involve extremely convoluted syntax and in many cases are not grammatical at all. Sentences of this type are difficult, if not impossible, for parsers relying on syntax. IPP is able to process news stories quickly, on the order of 2 CPU seconds, and when done it has achieved a complete understanding of the story, not just a syntactic parse.

As shown in the examples above, interest can provide a purpose for reading newspaper stories. In other situations, other factors might provide the purpose. But the purpose is never simply to create a representation - especially a representation with no semantic content, such as a syntax tree. This is not to say syntax is not important; obviously in many circumstances it provides crucial information, but it should not drive the understanding process. Preliminary representations are needed only if they assist in the reader's ultimate purpose - building an appropriate, high-level representation which can be incorporated with already existing knowledge. The results achieved by IPP indicate that parsing directly into high-level knowledge structures is possible, and in many situations may well be more practical than first doing a low-level parse. Its integrated approach allows IPP to make use of all the various kinds of knowledge which people use when understanding a story.

References

[1] Cullingford, R. (1978) Script application: Computer understanding of newspaper stories. Research Report 116, Department of Computer Science, Yale University.
[2] DeJong, G.F. (1979) Skimming stories in real time: An experiment in integrated understanding. Research Report 158, Department of Computer Science, Yale University.
[3] Kaplan, R.M. (1975) On process models for sentence analysis. In D.A. Norman and D.E. Rumelhart, eds., Explorations in Cognition. W.H. Freeman and Company, San Francisco.
[4] Marcus, M.P. (1979) A theory of syntactic recognition for natural language. In P.H. Winston and R.H. Brown, eds., Artificial Intelligence: An MIT Perspective. MIT Press, Cambridge, Massachusetts.
[5] Riesbeck, C.K. (1975) Conceptual analysis. In R.C. Schank, ed., Conceptual Information Processing. North Holland, Amsterdam.
[6] Schank, R.C. (1975) Conceptual Information Processing. North Holland, Amsterdam.
[7] Schank, R.C. (1978) Interestingness: Controlling inferences. Research Report 145, Department of Computer Science, Yale University.
[8] Schank, R.C. and Abelson, R.P. (1977) Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, New Jersey.
[9] Wilensky, R. (1978) Understanding goal-based stories. Research Report 140, Department of Computer Science, Yale University.
[10] Winograd, T. (1972) Understanding Natural Language. Academic Press, New York.
[11] Woods, W.A. (1970) Transition network grammars for natural language analysis. Communications of the ACM, Vol. 13, p. 591.
DISCOURSE: CODES AND CLUES IN CONTEXTS
Jane J. Robinson
Artificial Intelligence Center
SRI International, Menlo Park, California

Some of the meaning of a discourse is encoded in its linguistic forms. This is the truth-conditional meaning of the propositions those forms express and entail. Some of the meaning is suggested (or 'implicated', as Grice would say) by the fact that the encoder expresses just those propositions in just those linguistic forms in just the given contexts [2]. The first kind of meaning is usually labeled 'semantics'; it is decoded. The second is usually labeled 'pragmatics'; it is inferred from clues provided by code and context. Both kinds of meaning are related to syntax in ways that we are coming to understand better as work continues in analyzing language and constructing processing models for communication. We are also coming to a better understanding of the relationship between the perceptual and conceptual structures that organize human experience and make it encodable in words (cf. [1], [4]).

I see this progress in understanding not as the result of a revolution in the paradigm of computational linguistics in which one approach to natural language processing is abandoned for another, but rather as an expansion of our ideas of what both language and computers can do. We have been able to incorporate what we learned earlier in the game in a broader approach to more significant tasks. Certainly within the last twenty years, the discipline of computational linguistics has expanded its view of its object of concern. Twenty years ago, that view was focussed on a central aspect of language, language as code [3]. The paradigmatic task of our discipline then was to transform a message encoded in one language into the same message encoded in another, using dictionaries and syntactic rules. (Originally, the task was not to translate but to transform the input as an aid to human translators.) Coincidentally, those were the days of batch processing, and the typical inputs were scientific texts -- written monologues that existed as completed, static discourses before processing began.

Then came interactive processing, bringing with it the opportunity for what is now called 'dialogue' between user and machine. At the same time, and perhaps not wholly coincidentally, another aspect of language became salient for computational linguistics -- the aspect of language as behavior, with two or more people using the code to engage in purposeful communication. The inputs now include discourse in which the amount of code to be interpreted continues to grow as participants in dialogue interact, and their interactions become part of the contexts for on-going, dynamic interpretation. The paradigmatic task now is to simulate in non-trivial ways the procedures by which people reach conclusions about what is in each other's minds. Performing this task still requires processing language as code, but it also requires analyzing the code in a context, to identify clues to the pragmatic meaning of its use.
One way of representing this enlarged task is to conceive of it as requiring three concentric kinds of knowledge:

• intralinguistic knowledge, or knowledge of the code
• interlinguistic knowledge, or knowledge of linguistic behavior
• extralinguistic knowledge, or knowledge of the perceptual and conceptual structures that language users have, the things they attend to and the goals they pursue

The papers we will hear today range over techniques for identifying, representing, and applying the various kinds of knowledge for the processing of discourse. McKeown exploits intralinguistic knowledge for extralinguistic purposes. When the goal of a request for new information is not uniquely identifiable, she proposes to use syntactic transformations of the code of the request to clarify its ambiguities and ensure that its goal is subsequently understood. Shanon is also concerned with appropriateness of answers, and reports an investigation of the extralinguistic conceptual structuring of space that affects the pragmatic rules people follow in furnishing appropriate information in response to questions about where things are. Sidner identifies various kinds of intralinguistic clues a discourse provides that indicate what entities occupy the focus of attention of discourse participants as discourse proceeds, and the use of focusing (an extralinguistic process) to control the inferences made in identifying the referents of pronominal anaphora. Levin and Hutchinson analyze the clues in reports of spatial reasoning that lead to identification of the point of view of the speaker towards the entities talked about. Like Sidner, they use syntactic clues, and like Shanon, they seek to identify the conceptual structures that underlie behavior.

Code and behavior interact with intentions in ways that are still mysterious but clearly important. The last two papers stress the fact that using language is intentional behavior and that understanding the purposes a discourse serves is a necessary part of understanding the discourse itself. Mann claims that dialogues are comprehensible only because participants provide clues to each other that make available knowledge of the goals being pursued. Allen and Perrault note that intention pervades all three layers of discourse, pointing out that, in order to be successful, a speaker must intend that the hearer recognize his intentions and infer his goals, but that these intentions are not signaled in any simple way in the code.

In all of these papers, language is viewed as providing both codes for and clues to meaning, so that when it is used in discourse, its forms can be decoded and their import can be grasped. As language users, we know that we can know, to a surprising extent, what someone else means for us to know. We also sometimes know that we don't know what someone else means for us to know. As computational linguists, we are trying to figure out precisely how we know such things.

REFERENCES

[1] Chafe, W.L. 1977. Creativity in verbalization and its implications for the nature of stored knowledge. In: Freedle, R.O. (ed.), Discourse Production and Comprehension, Vol. 1, pp. 41-55. Ablex: Norwood, New Jersey.
[2] Grice, P.H. 1975. Logic and conversation. In: Davidson, D. and Harman, G. (eds.), The Logic of Grammar. Dickenson: Encino, California.
[3] Halliday, M.A.K. 1977. Language as code and language as behaviour. In: Lamb, S. and Makkai, A. (eds.), Semiotics of Culture and Language.
[4] Miller, G.A. and Johnson-Laird, P.N. 1976. Language and Perception. Harvard University Press: Cambridge, Massachusetts.
"Paraphrasing Using Given and New Information in a Question-Answer System Kathleen R. McKeown (...TRUNCATED)
WHERE QUESTIONS
Benny Shanon
The Hebrew University of Jerusalem

Consider question (i), and(...TRUNCATED)
The Role of Focussing in Interpretation of Pronouns
Candace L. Sidner
Artificial Intelligence(...TRUNCATED)
