Titles | Abstracts | Years | Categories |
---|---|---|---|
Utilizing the World Wide Web as an Encyclopedia: Extracting Term
Descriptions from Semi-Structured Texts | In this paper, we propose a method to extract descriptions of technical terms
from Web pages in order to utilize the World Wide Web as an encyclopedia. We
use linguistic patterns and HTML text structures to extract text fragments
containing term descriptions. We also use a language model to discard
extraneous descriptions, and a clustering method to summarize resultant
descriptions. We show the effectiveness of our method by way of experiments.
| 2000 | Computation and Language |
A Novelty-based Evaluation Method for Information Retrieval | In information retrieval research, precision and recall have long been used
to evaluate IR systems. However, given that a number of retrieval systems
resembling one another are already available to the public, it is valuable to
retrieve novel relevant documents, i.e., documents that cannot be retrieved by
those existing systems. In view of this problem, we propose an evaluation
method that favors systems retrieving as many novel documents as possible. We
also used our method to evaluate systems that participated in the IREX
workshop.
| 2000 | Computation and Language |
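The evaluation idea above reduces to simple set arithmetic. A minimal Python sketch, with invented relevance judgments and a made-up pool of documents retrievable by existing systems (the paper's actual measure may differ in detail):

```python
# All judgments and system outputs below are invented toy data.
relevant = {"d1", "d2", "d3", "d4", "d5"}
reference_pool = {"d1", "d2"}   # docs the existing systems can already find

def novelty_precision(retrieved, relevant, pool):
    """Fraction of retrieved documents that are relevant AND novel,
    i.e. not retrievable by the pool of existing systems."""
    if not retrieved:
        return 0.0
    novel_hits = [d for d in retrieved if d in relevant and d not in pool]
    return len(novel_hits) / len(retrieved)

system_a = ["d1", "d3", "d9"]   # retrieves one novel relevant document (d3)
system_b = ["d1", "d2"]         # only rediscovers what the pool already has
print(novelty_precision(system_a, relevant, reference_pool))  # 0.33
print(novelty_precision(system_b, relevant, reference_pool))  # 0.0
```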
Applying Machine Translation to Two-Stage Cross-Language Information
Retrieval | Cross-language information retrieval (CLIR), where queries and documents are
in different languages, needs a translation of queries and/or documents, so as
to standardize both of them into a common representation. For this purpose, the
use of machine translation is an effective approach. However, computational
cost is prohibitive in translating large-scale document collections. To resolve
this problem, we propose a two-stage CLIR method. First, we translate a given
query into the document language, and retrieve a limited number of foreign
documents. Second, we machine translate only those documents into the user
language, and re-rank them based on the translation result. We also show the
effectiveness of our method by way of experiments using Japanese queries and
English technical documents.
| 2000 | Computation and Language |
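The two-stage pipeline above is easy to sketch. In this toy Python version, dictionary lookups stand in for full machine translation and bag-of-words overlap stands in for a real retrieval model; all documents and lexicon entries are invented:

```python
# Toy stand-ins for the MT systems and the retrieval model.
MT_JA_EN = {"jouhou": "information", "kensaku": "retrieval"}
MT_EN_JA = {"information": "jouhou", "retrieval": "kensaku",
            "fast": "kousoku", "index": "sakuin"}

docs = {
    "e1": "fast information retrieval with an inverted index",
    "e2": "information theory and coding",
    "e3": "gardening tips for spring",
}

def translate(tokens, lexicon):
    return [lexicon.get(t, t) for t in tokens]

def score(query_tokens, text):
    return len(set(query_tokens) & set(text.split()))

query_ja = ["jouhou", "kensaku"]

# Stage 1: translate only the query, retrieve a small candidate set cheaply.
query_en = translate(query_ja, MT_JA_EN)
candidates = sorted(docs, key=lambda d: score(query_en, docs[d]),
                    reverse=True)[:2]

# Stage 2: machine-translate only the candidates, re-rank in the user language.
reranked = sorted(
    candidates,
    key=lambda d: score(query_ja,
                        " ".join(translate(docs[d].split(), MT_EN_JA))),
    reverse=True,
)
print(reranked)   # e1 first: its translation shares both query terms
```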
Tree-gram Parsing: Lexical Dependencies and Structural Relations | This paper explores the kinds of probabilistic relations that are important
in syntactic disambiguation. It proposes that two widely used kinds of
relations, lexical dependencies and structural relations, have complementary
disambiguation capabilities. It presents a new model based on structural
relations, the Tree-gram model, and reports experiments showing that structural
relations should benefit from enrichment by lexical dependencies.
| 2007 | Computation and Language |
The Use of Instrumentation in Grammar Engineering | This paper explores the usefulness of a technique from software engineering,
code instrumentation, for the development of large-scale natural language
grammars. Information about the usage of grammar rules in test and corpus
sentences is used to improve the grammar and the test suite, as well as to adapt a grammar to a specific genre. Results show that less than half of a large-coverage grammar for German is actually exercised by two large test suites, and that 10--30% of testing time is redundant. The methodology can also be seen as re-using grammar-writing knowledge for test-suite compilation.
| 2000 | Computation and Language |
Retrieval from Captioned Image Databases Using Natural Language
Processing | It might appear that natural language processing should improve the accuracy
of information retrieval systems, by making available a more detailed analysis
of queries and documents. Although past results appear to show that this is not
so, if the focus is shifted to short phrases rather than full documents, the
situation becomes somewhat different. The ANVIL system uses a natural language
technique to obtain high accuracy retrieval of images which have been annotated
with a descriptive textual caption. The natural language techniques also allow
additional contextual information to be derived from the relation between the
query and the caption, which can help users to understand the overall
collection of retrieval results. The techniques have been successfully used in
an information retrieval system which forms both a testbed for research and
basis of a commercial system.
| 2007 | Computation and Language |
Semantic interpretation of temporal information by abductive inference | Besides temporal information explicitly available in verbs and adjuncts, the
temporal interpretation of a text also depends on general world knowledge and
default assumptions. We will present a theory for describing the relation
between, on the one hand, verbs, their tenses and adjuncts and, on the other,
the eventualities and periods of time they represent and their relative
temporal locations.
The theory is formulated in logic and is a practical implementation of the
concepts described in Schelkens et al. We will show how an abductive
resolution procedure can be used on this representation to extract temporal
information from texts.
| 2009 | Computation and Language |
Abductive reasoning with temporal information | Texts in natural language contain a lot of temporal information, both
explicit and implicit. Verbs and temporal adjuncts carry most of the explicit
information, but for a full understanding general world knowledge and default
assumptions have to be taken into account. We will present a theory for
describing the relation between, on the one hand, verbs, their tenses and
adjuncts and, on the other, the eventualities and periods of time they
represent and their relative temporal locations, while allowing interaction
with general world knowledge.
The theory is formulated in an extension of first order logic and is a
practical implementation of the concepts described in Van Eynde 2001 and
Schelkens et al. 2000. We will show how an abductive resolution procedure can
be used on this representation to extract temporal information from texts. The
theory presented here is an extension of that in Verdoolaege et al. 2000,
adapted to Van Eynde 2001, with a simplified and extended analysis of adjuncts
and with more emphasis on how a model can be constructed.
| 2009 | Computation and Language |
Easy and Hard Constraint Ranking in OT: Algorithms and Complexity | We consider the problem of ranking a set of OT constraints in a manner
consistent with data.
We speed up Tesar and Smolensky's RCD algorithm to be linear in the number of
constraints. This finds a ranking so each attested form x_i beats or ties a
particular competitor y_i. We also generalize RCD so each x_i beats or ties all
possible competitors.
Alas, this more realistic version of learning has no polynomial algorithm
unless P=NP! Indeed, not even generation does. So one cannot improve
qualitatively upon brute force:
Merely checking that a single (given) ranking is consistent with given forms
is coNP-complete if the surface forms are fully observed and Delta_2^p-complete
if not. Indeed, OT generation is OptP-complete. As for ranking, determining
whether any consistent ranking exists is coNP-hard (but in Delta_2^p) if the
forms are fully observed, and Sigma_2^p-complete if not.
Finally, we show that generation and ranking are easier in derivational
theories: in P, and NP-complete.
| 2000 | Computation and Language |
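For reference, the starting point of the abstract above is Tesar and Smolensky's Recursive Constraint Demotion (RCD). A compact sketch of plain RCD (not the paper's sped-up variant), run on invented mark-data pairs:

```python
# Invented mark-data pairs: for each attested winner and one rival loser,
# the number of violations each incurs on each constraint.
pairs = [
    ({"Onset": 0, "NoCoda": 1, "Max": 0}, {"Onset": 1, "NoCoda": 0, "Max": 0}),
    ({"Onset": 0, "NoCoda": 0, "Max": 1}, {"Onset": 0, "NoCoda": 1, "Max": 0}),
]
constraints = {"Onset", "NoCoda", "Max"}

def rcd(constraints, pairs):
    """Return a list of strata (sets of constraints), highest-ranked first,
    or raise if no consistent ranking exists."""
    constraints, pairs = set(constraints), list(pairs)
    strata = []
    while constraints:
        # A constraint may be placed now only if it prefers no loser, i.e.
        # never assigns the winner strictly more violations than the loser.
        stratum = {c for c in constraints
                   if all(w.get(c, 0) <= l.get(c, 0) for w, l in pairs)}
        if not stratum:
            raise ValueError("no consistent ranking")
        strata.append(stratum)
        constraints -= stratum
        # Pairs now decided (some installed constraint prefers the winner)
        # are accounted for and dropped.
        pairs = [(w, l) for w, l in pairs
                 if not any(w.get(c, 0) < l.get(c, 0) for c in stratum)]
    return strata

print(rcd(constraints, pairs))   # [{'Onset'}, {'NoCoda'}, {'Max'}]
```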
Multi-Syllable Phonotactic Modelling | This paper describes a novel approach to constructing phonotactic models. The
underlying theoretical approach to phonological description is the
multisyllable approach in which multiple syllable classes are defined that
reflect phonotactically idiosyncratic syllable subcategories. A new
finite-state formalism, OFS Modelling, is used as a tool for encoding,
automatically constructing and generalising phonotactic descriptions.
Language-independent prototype models are constructed which are instantiated on
the basis of data sets of phonological strings, and generalised with a
clustering algorithm. The resulting approach enables the automatic construction
of phonotactic models that encode arbitrarily close approximations of a
language's set of attested phonological forms. The approach is applied to the
construction of multi-syllable word-level phonotactic models for German,
English and Dutch.
| 2000 | Computation and Language |
Taking Primitive Optimality Theory Beyond the Finite State | Primitive Optimality Theory (OTP) (Eisner, 1997a; Albro, 1998), a
computational model of Optimality Theory (Prince and Smolensky, 1993), employs
a finite state machine to represent the set of active candidates at each stage
of an Optimality Theoretic derivation, as well as weighted finite state
machines to represent the constraints themselves. For some purposes, however,
it would be convenient if the set of candidates were limited by some set of
criteria capable of being described only in a higher-level grammar formalism,
such as a Context Free Grammar, a Context Sensitive Grammar, or a Multiple
Context Free Grammar (Seki et al., 1991). Examples include reduplication and
phrasal stress models. Here we introduce a mechanism for OTP-like Optimality
Theory in which the constraints remain weighted finite state machines, but sets
of candidates are represented by higher-level grammars. In particular, we use
multiple context-free grammars to model reduplication in the manner of
Correspondence Theory (McCarthy and Prince, 1995), and develop an extended
version of the Earley Algorithm (Earley, 1970) to apply the constraints to a
reduplicating candidate set.
| 2000 | Computation and Language |
Finite-State Phonology: Proceedings of the 5th Workshop of the ACL
Special Interest Group in Computational Phonology (SIGPHON) | Home page of the workshop proceedings, with pointers to the individually
archived papers. Includes front matter from the printed version of the
proceedings.
| 2000 | Computation and Language |
Mathematical Model of Word Length on the Basis of the Cebanov-Fucks
Distribution with Uniform Parameter Distribution | The data on 13 typologically different languages have been processed using a
two-parameter word-length model based on a 1-displaced Poisson distribution with a uniformly distributed parameter. Statistical dependencies of the second parameter on the first are revealed for German texts and the genre of letters.
| 2001 | Computation and Language |
Quantitative Neural Network Model of the Tip-of-the-Tongue Phenomenon
Based on Synthesized Memory-Psycholinguistic-Metacognitive Approach | A new three-stage computer artificial neural network model of the
tip-of-the-tongue phenomenon is proposed. Each word's node is built from several interconnected, learned auto-associative two-layer neural networks, each of which represents the word's semantic, lexical, or phonological component. The
model synthesizes memory, psycholinguistic, and metamemory approaches, bridges
speech errors and naming chronometry research traditions, and can explain
quantitatively many tip-of-the-tongue effects.
| 2007 | Computation and Language |
Two-parameter Model of Word Length "Language - Genre" | A two-parameter model of word length measured by the number of syllables
comprising it is proposed. The first parameter depends on the language type; the second depends on the text genre and reflects the degree of completion of synergetic processes of language optimization.
| 2007 | Computation and Language |
Magical Number Seven Plus or Minus Two: Syntactic Structure Recognition
in Japanese and English Sentences | George A. Miller said that human beings have only seven chunks in short-term
memory, plus or minus two. We counted the number of bunsetsus (phrases) whose
modifiees are undetermined in each step of an analysis of the dependency
structure of Japanese sentences, and which therefore must be stored in
short-term memory. The number was roughly less than nine, the upper bound of
seven plus or minus two. We also obtained similar results with English
sentences under the assumption that human beings recognize a series of words,
such as a noun phrase (NP), as a unit. This indicates that if we assume that
the human cognitive units in Japanese and English are bunsetsu and NP
respectively, analysis will support Miller's $7 \pm 2$ theory.
| 2001 | Computation and Language |
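The counting procedure is straightforward to restate in code: scan the sentence left to right and, at each step, count the words whose head has not yet been reached. A sketch on an invented head-annotated sentence (heads lie to the right, as in Japanese dependency structures):

```python
def pending_counts(heads):
    """heads[i] is the (0-based) index of word i's head; -1 marks the root.
    Returns, for each prefix length, how many already-seen words still
    await their head -- the items that must sit in short-term memory."""
    counts = []
    for step in range(1, len(heads) + 1):
        pending = sum(1 for i in range(step)
                      if heads[i] >= step)   # head lies further right
        counts.append(pending)
    return counts

# Invented example: "the old man saw the dog" with heads
# the->man, old->man, man->saw, saw=root, the->dog, dog->saw
heads = [2, 2, 3, -1, 5, 3]
print(pending_counts(heads))       # [1, 2, 1, 0, 1, 0]
print(max(pending_counts(heads)))  # stays well under 7 +/- 2
```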
A Machine-Learning Approach to Estimating the Referential Properties of
Japanese Noun Phrases | The referential properties of noun phrases in the Japanese language, which
has no articles, are useful for article generation in Japanese-English machine
translation and for anaphora resolution in Japanese noun phrases. They are
generally classified as generic noun phrases, definite noun phrases, and
indefinite noun phrases. In previous work, referential properties were
estimated by developing rules that used clue words. If two or more rules were
in conflict with each other, the category having the maximum total score given
by the rules was selected as the desired category. The score given by each rule
was established by hand, so the manpower cost was high. In this work, we
automatically adjusted these scores by using a machine-learning method and
succeeded in reducing the amount of manpower needed to adjust these scores.
| 2001 | Computation and Language |
Meaning Sort - Three examples: dictionary construction, tagged corpus
construction, and information presentation system | It is often useful to sort words into an order that reflects relations among
their meanings as obtained by using a thesaurus. In this paper, we introduce a
method of arranging words semantically by using several types of `{\sf is-a}'
thesauri and a multi-dimensional thesaurus. We also describe three major
applications where a meaning sort is useful and show the effectiveness of a
meaning sort. Since a word list in meaning-order is clearly easier to use than one in random order, a meaning sort, which can easily produce such a list, is both useful and effective.
| 2001 | Computation and Language |
CRL at Ntcir2 | We have developed two types of systems for NTCIR2. One is an enhanced version of the system we developed for NTCIR1 and IREX. It submitted retrieval results
for JJ and CC tasks. A variety of parameters were tried with the system. It
used such characteristics of newspapers as locational information in the CC
tasks. The system got good results for both of the tasks. The other system is a
portable system which avoids free parameters as much as possible. The system
submitted retrieval results for JJ, JE, EE, EJ, and CC tasks. The system
automatically determined the number of top documents and the weight of the
original query used in automatic-feedback retrieval. It also determined
relevant terms quite robustly. For EJ and JE tasks, it used document expansion
to augment the initial queries. It achieved good results, except on the CC
tasks.
| 2007 | Computation and Language |
A Decision Tree of Bigrams is an Accurate Predictor of Word Sense | This paper presents a corpus-based approach to word sense disambiguation
where a decision tree assigns a sense to an ambiguous word based on the bigrams
that occur nearby. This approach is evaluated using the sense-tagged corpora
from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate
than the average results reported for 30 of 36 words, and is more accurate than
the best results for 19 of 36 words.
| 2007 | Computation and Language |
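A hedged sketch of the setup with scikit-learn: a decision tree over nearby-bigram features. The six "sense-tagged" contexts for "bass" are invented stand-ins for the SENSEVAL data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Invented toy contexts standing in for sense-tagged SENSEVAL corpora.
contexts = [
    "played the bass guitar on stage",
    "the bass line of the song",
    "a four string bass amplifier",
    "caught a largemouth bass today",
    "bass fishing on the lake",
    "grilled bass for dinner tonight",
]
senses = ["music", "music", "music", "fish", "fish", "fish"]

# ngram_range=(2, 2) turns each context into its set of word bigrams;
# the decision tree then picks the most informative bigrams as tests.
clf = make_pipeline(
    CountVectorizer(ngram_range=(2, 2), binary=True),
    DecisionTreeClassifier(random_state=0),
)
clf.fit(contexts, senses)
print(clf.predict(["slapping the bass guitar", "bass fishing at dawn"]))
# expected on this toy data: ['music' 'fish']
```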
Type Arithmetics: Computation based on the theory of types | The present paper shows how meta-programming can itself become programming, rich enough to express arbitrary arithmetic computations. We demonstrate a type system that implements Peano arithmetic, slightly generalized to negative numbers. Certain types in this system denote numerals. Arithmetic operations on such type-numerals (addition, subtraction, and even division) are expressed as type reduction rules executed by a compiler. A remarkable trait is that division by zero becomes a type error, and is reported as such by the compiler.
| 2007 | Computation and Language |
Coaxing Confidences from an Old Friend: Probabilistic Classifications
from Transformation Rule Lists | Transformation-based learning has been successfully employed to solve many
natural language processing problems. It has many positive features, but one
drawback is that it does not provide estimates of class membership
probabilities.
In this paper, we present a novel method for obtaining class membership
probabilities from a transformation-based rule list classifier. Three
experiments are presented which measure the modeling accuracy and cross-entropy
of the probabilistic classifier on unseen data and the degree to which the
output probabilities from the classifier can be used to estimate confidences in
its classification decisions.
The results of these experiments show that, for the task of text chunking,
the estimates produced by this technique are more informative than those
generated by a state-of-the-art decision tree.
| 2000 | Computation and Language |
Microplanning with Communicative Intentions: The SPUD System | The process of microplanning encompasses a range of problems in Natural
Language Generation (NLG), such as referring expression generation, lexical
choice, and aggregation, problems in which a generator must bridge underlying
domain-specific representations and general linguistic representations. In this
paper, we describe a uniform approach to microplanning based on declarative
representations of a generator's communicative intent. These representations
describe the results of NLG: communicative intent associates the concrete
linguistic structure planned by the generator with inferences that show how the
meaning of that structure communicates needed information about some
application domain in the current discourse context. Our approach, implemented
in the SPUD (sentence planning using description) microplanner, uses the
lexicalized tree-adjoining grammar formalism (LTAG) to connect structure to
meaning and uses modal logic programming to connect meaning to context. At the
same time, communicative intent representations provide a resource for the
process of NLG. Using representations of communicative intent, a generator can
augment the syntax, semantics and pragmatics of an incomplete sentence
simultaneously, and can assess its progress on the various problems of
microplanning incrementally. The declarative formulation of communicative
intent translates into a well-defined methodology for designing grammatical and
conceptual resources which the generator can use to achieve desired
microplanning behavior in a specified domain.
| 2007 | Computation and Language |
Correction of Errors in a Modality Corpus Used for Machine Translation
by Using Machine-learning Method | We performed corpus correction on a modality corpus for machine translation by using machine-learning methods such as the maximum-entropy method, and thus constructed a high-quality modality corpus. In our experiments, we compared several methods of corpus correction and developed an effective one.
| 2007 | Computation and Language |
Man [and Woman] vs. Machine: A Case Study in Base Noun Phrase Learning | A great deal of work has been done demonstrating the ability of machine
learning algorithms to automatically extract linguistic knowledge from
annotated corpora. Very little work has gone into quantifying the difference in
ability at this task between a person and a machine. This paper is a first step
in that direction.
| 1999 | Computation and Language |
Rule Writing or Annotation: Cost-efficient Resource Usage for Base Noun
Phrase Chunking | This paper presents a comprehensive empirical comparison between two
approaches for developing a base noun phrase chunker: human rule writing and
active learning using interactive real-time human annotation. Several novel
variations on active learning are investigated, and underlying cost models for
cross-modal machine learning comparison are presented and explored. Results
show that it is more efficient and more successful by several measures to train
a system using active learning annotation rather than hand-crafted rule writing
at a comparable level of human labor investment.
| 2000 | Computation and Language |
A Complete WordNet1.5 to WordNet1.6 Mapping | We describe a robust approach for linking already existing lexical/semantic
hierarchies. We use a constraint satisfaction algorithm (relaxation labelling)
to select --among a set of candidates-- the node in a target taxonomy that
best matches each node in a source taxonomy. In this paper we present the
complete mapping of the nominal, verbal, adjectival and adverbial parts of
WordNet 1.5 onto WordNet 1.6.
| 2007 | Computation and Language |
Joint and conditional estimation of tagging and parsing models | This paper compares two different ways of estimating statistical language
models. Many statistical NLP tagging and parsing models are estimated by
maximizing the (joint) likelihood of the fully-observed training data. However,
since these applications only require the conditional probability
distributions, these distributions can in principle be learnt by maximizing the
conditional likelihood of the training data. Perhaps somewhat surprisingly,
models estimated by maximizing the joint were superior to models estimated by
maximizing the conditional, even though some of the latter models intuitively
had access to ``more information''.
| 2007 | Computation and Language |
Probabilistic top-down parsing and language modeling | This paper describes the functioning of a broad-coverage probabilistic
top-down parser, and its application to the problem of language modeling for
speech recognition. The paper first introduces key notions in language modeling
and probabilistic parsing, and briefly reviews some previous approaches to
using syntactic structure for language modeling. A lexicalized probabilistic
top-down parser is then presented, which performs very well, in terms of both
the accuracy of returned parses and the efficiency with which they are found,
relative to the best broad-coverage statistical parsers. A new language model
which utilizes probabilistic top-down parsing is then outlined, and empirical
results show that it improves upon previous work in test corpus perplexity.
Interpolation with a trigram model yields an exceptional improvement relative
to the improvement observed by other models, demonstrating the degree to which
the information captured by our parsing model is orthogonal to that captured by
a trigram model. A small recognition experiment also demonstrates the utility
of the model.
| 2007 | Computation and Language |
Robust Probabilistic Predictive Syntactic Processing | This thesis presents a broad-coverage probabilistic top-down parser, and its
application to the problem of language modeling for speech recognition. The
parser builds fully connected derivations incrementally, in a single pass from
left-to-right across the string. We argue that the parsing approach that we
have adopted is well-motivated from a psycholinguistic perspective, as a model
that captures probabilistic dependencies between lexical items, as part of the
process of building connected syntactic structures. The basic parser and
conditional probability models are presented, and empirical results are
provided for its parsing accuracy on both newspaper text and spontaneous
telephone conversations. Modifications to the probability model are presented
that lead to improved performance. A new language model which uses the output
of the parser is then defined. Perplexity and word error rate reduction are
demonstrated over trigram models, even when the trigram is trained on
significantly more data. Interpolation on a word-by-word basis with a trigram
model yields additional improvements.
| 2007 | Computation and Language |
Generating a 3D Simulation of a Car Accident from a Written Description
in Natural Language: the CarSim System | This paper describes a prototype system to visualize and animate 3D scenes
from car accident reports, written in French. The problem of generating such a
3D simulation can be divided into two subtasks: the linguistic analysis and the
virtual scene generation. As a means of communication between these two
modules, we first designed a template formalism to represent a written accident
report. The CarSim system first processes written reports, gathers relevant
information, and converts it into a formal description. Then, it creates the
corresponding 3D scene and animates the vehicles.
| 2007 | Computation and Language |
The OLAC Metadata Set and Controlled Vocabularies | As language data and associated technologies proliferate and as the language
resources community rapidly expands, it has become difficult to locate and
reuse existing resources. Are there any lexical resources for such-and-such a
language? What tool can work with transcripts in this particular format? What
is a good format to use for linguistic data of this type? Questions like these
dominate many mailing lists, since web search engines are an unreliable way to
find language resources. This paper describes a new digital infrastructure for
language resource discovery, based on the Open Archives Initiative, and called
OLAC -- the Open Language Archives Community. The OLAC Metadata Set and the
associated controlled vocabularies facilitate consistent description and
focussed searching. We report progress on the metadata set and controlled
vocabularies, describing current issues and soliciting input from the language
resources community.
| 2001 | Computation and Language |
Historical Dynamics of Lexical System as Random Walk Process | We propose to model diachronic changes in word meaning as a semicontinuous random walk with reflecting and absorbing barriers. The basic characteristics of the word life cycle are defined. The model is verified on data on the distribution of Russian words across various age periods.
| 2007 | Computation and Language |
Integrating Prosodic and Lexical Cues for Automatic Topic Segmentation | We present a probabilistic model that uses both prosodic and lexical cues for
the automatic segmentation of speech into topically coherent units. We propose
two methods for combining lexical and prosodic information using hidden Markov
models and decision trees. Lexical information is obtained from a speech
recognizer, and prosodic features are extracted automatically from speech
waveforms. We evaluate our approach on the Broadcast News corpus, using the
DARPA-TDT evaluation metrics. Results show that the prosodic model alone is
competitive with word-based segmentation methods. Furthermore, we achieve a
significant reduction in error by combining the prosodic and word-based
knowledge sources.
| 2001 | Computation and Language |
Computational properties of environment-based disambiguation | The standard pipeline approach to semantic processing, in which sentences are
morphologically and syntactically resolved to a single tree before they are
interpreted, is a poor fit for applications such as natural language
interfaces. This is because the environment information, in the form of the
objects and events in the application's run-time environment, cannot be used to
inform parsing decisions unless the input sentence is semantically analyzed,
but this does not occur until after parsing in the single-tree semantic
architecture. This paper describes the computational properties of an
alternative architecture, in which semantic analysis is performed on all
possible interpretations during parsing, in polynomial time.
| 2007 | Computation and Language |
Organizing Encyclopedic Knowledge based on the Web and its Application
to Question Answering | We propose a method to generate large-scale encyclopedic knowledge, which is
valuable for much NLP research, based on the Web. We first search the Web for
pages containing a term in question. Then we use linguistic patterns and HTML
structures to extract text fragments describing the term. Finally, we organize
extracted term descriptions based on word senses and domains. In addition, we
apply an automatically generated encyclopedia to a question answering system
targeting the Japanese Information-Technology Engineers Examination.
| 2001 | Computation and Language |
File mapping Rule-based DBMS and Natural Language Processing | This paper describes a system for storing, retrieving, and processing information structured similarly to natural language. For recursive inference, the system uses rules that have the same representation as the data. Information storage is backed by the file-mapping (SHM) mechanism of the operating system. The paper states the main principles behind the dynamic data structure and the language for recording inference rules, discusses the features of the available implementation, and describes an application that performs semantic information retrieval in natural language.
| 2007 | Computation and Language |
Iterative Residual Rescaling: An Analysis and Generalization of LSI | We consider the problem of creating document representations in which
inter-document similarity measurements correspond to semantic similarity. We
first present a novel subspace-based framework for formalizing this task. Using
this framework, we derive a new analysis of Latent Semantic Indexing (LSI),
showing a precise relationship between its performance and the uniformity of
the underlying distribution of documents over topics. This analysis helps
explain the improvements gained by Ando's (2000) Iterative Residual Rescaling
(IRR) algorithm: IRR can compensate for distributional non-uniformity. A
further benefit of our framework is that it provides a well-motivated,
effective method for automatically determining the rescaling factor IRR depends
on, leading to further improvements. A series of experiments over various
settings and with several evaluation metrics validates our claims.
| 2001 | Computation and Language |
Stacking classifiers for anti-spam filtering of e-mail | We evaluate empirically a scheme for combining classifiers, known as stacked
generalization, in the context of anti-spam filtering, a novel cost-sensitive
application of text categorization. Unsolicited commercial e-mail, or "spam",
floods mailboxes, causing frustration, wasting bandwidth, and exposing minors
to unsuitable content. Using a public corpus, we show that stacking can improve
the efficiency of automatically induced anti-spam filters, and that such
filters can be used in real-life applications.
| 2001 | Computation and Language |
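Stacked generalization itself is easy to demonstrate with scikit-learn's StackingClassifier: level-0 learners produce cross-validated predictions, and a level-1 learner is trained on those. The messages below are invented stand-ins for the public corpus used in the paper:

```python
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy messages, repeated so cross-validated stacking has
# enough samples per fold.
mails = [
    "win cash now claim your free prize",
    "cheap pills limited offer buy now",
    "meeting moved to friday at ten",
    "draft of the paper attached please review",
] * 5
labels = ["spam", "spam", "ham", "ham"] * 5

# Level-0 learners feed their predictions to a level-1 combiner.
stack = make_pipeline(
    TfidfVectorizer(),
    StackingClassifier(
        estimators=[("nb", MultinomialNB()),
                    ("lr", LogisticRegression(max_iter=1000))],
        final_estimator=LogisticRegression(),
    ),
)
stack.fit(mails, labels)
print(stack.predict(["free prize inside", "see you at the meeting"]))
# likely ['spam' 'ham'] on this toy data
```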
Using the Distribution of Performance for Studying Statistical NLP
Systems and Corpora | Statistical NLP systems are frequently evaluated and compared on the basis of
their performance on a single split of training and test data. Results
obtained using a single split are, however, subject to sampling noise. In this
paper we argue in favour of reporting a distribution of performance figures,
obtained by resampling the training data, rather than a single number. The
additional information from distributions can be used to make statistically
quantified statements about differences across parameter settings, systems, and
corpora.
| 2007 | Computation and Language |
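A minimal sketch of the reporting style argued for above: retrain on bootstrap resamples of the training data and report the resulting distribution of scores rather than a single figure. The "system" here is a deliberately trivial lexicon tagger on invented data:

```python
import random
random.seed(0)

# 40 invented word types with gold tags; the "system" is a lexicon tagger.
types = [(f"w{i}", "VERB" if i % 4 == 0 else "NOUN") for i in range(40)]
train = [random.choice(types) for _ in range(150)]
test = types                                   # one test token per type

def train_and_score(sample, test):
    lexicon = dict(sample)                     # word -> tag seen in training
    return sum(lexicon.get(w, "NOUN") == t for w, t in test) / len(test)

# Retrain on 1000 bootstrap resamples and keep the whole distribution.
scores = sorted(train_and_score(random.choices(train, k=len(train)), test)
                for _ in range(1000))
mean = sum(scores) / len(scores)
lo, hi = scores[25], scores[974]               # empirical 95% interval
print(f"accuracy = {mean:.3f}, 95% interval = [{lo:.3f}, {hi:.3f}]")
```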
Modeling informational novelty in a conversational system with a hybrid
statistical and grammar-based approach to natural language generation | We present a hybrid statistical and grammar-based system for surface natural
language generation (NLG) that uses grammar rules, conditions on using those
grammar rules, and corpus statistics to determine the word order. We also
describe how this surface NLG module is implemented in a prototype
conversational system, and how it attempts to model informational novelty by
varying the word order. Using a combination of rules and statistical
information, the conversational system expresses the novel information
differently than the given information, based on the run-time dialog state. We
also discuss our plans for evaluating the generation strategy.
| 2001 | Computation and Language |
The Role of Conceptual Relations in Word Sense Disambiguation | We explore many ways of using conceptual distance measures in Word Sense
Disambiguation, starting with the Agirre-Rigau conceptual density measure. We
use a generalized form of this measure, introducing many (parameterized)
refinements and performing an exhaustive evaluation of all meaningful
combinations. We finally obtain a 42% improvement over the original algorithm,
and show that measures of conceptual distance are not worse indicators for
sense disambiguation than measures based on word co-occurrence (exemplified by
the Lesk algorithm). Our results, however, reinforce the idea that only a
combination of different sources of knowledge might eventually lead to accurate
word sense disambiguation.
| 2007 | Computation and Language |
Looking Under the Hood: Tools for Diagnosing your Question Answering
Engine | In this paper we analyze two question answering tasks: the TREC-8 question
answering task and a set of reading comprehension exams. First, we show that
Q/A systems perform better when there are multiple answer opportunities per
question. Next, we analyze common approaches to two subproblems: term overlap
for answer sentence identification, and answer typing for short answer
extraction. We present general tools for analyzing the strengths and
limitations of techniques for these subproblems. Our results quantify the
limitations of both term overlap and answer typing to distinguish between
competing answer candidates.
| 2007 | Computation and Language |
Three-Stage Quantitative Neural Network Model of the Tip-of-the-Tongue
Phenomenon | A new three-stage computer artificial neural network model of the
tip-of-the-tongue phenomenon is briefly described, and its stochastic nature is demonstrated. A way to calculate the strength and appearance probability of tip-of-the-tongue states, and a neural network mechanism for the feeling-of-knowing phenomenon, are proposed. The model synthesizes memory, psycholinguistic, and
metamemory approaches, bridges speech errors and naming chronometry research
traditions. A model analysis of a tip-of-the-tongue case from Anton Chekhov's
short story 'A Horsey Name' is performed. A new 'throw-up-one's-arms effect' is
defined.
| 2007 | Computation and Language |
Introduction to the CoNLL-2001 Shared Task: Clause Identification | We describe the CoNLL-2001 shared task: dividing text into clauses. We give
background information on the data sets, present a general overview of the
systems that have taken part in the shared task and briefly discuss their
performance.
| 2001 | Computation and Language |
Learning Computational Grammars | This paper reports on the "Learning Computational Grammars" (LCG) project, a
postdoc network devoted to studying the application of machine learning
techniques to grammars suitable for computational use. We were interested in a
more systematic survey to understand the relevance of many factors to the success of learning, especially the availability of annotated data, the kind of dependencies in the data, and the availability of knowledge bases (grammars). We focused on syntax, especially noun phrase (NP) syntax.
| 2001 | Computation and Language |
Combining a self-organising map with memory-based learning | Memory-based learning (MBL) has enjoyed considerable success in corpus-based
natural language processing (NLP) tasks and is thus a reliable method of
getting a high level of performance when building corpus-based NLP systems. However, there is a bottleneck in MBL whereby any novel test item has to be compared against all the training items in the memory base. For this reason there
has been some interest in various forms of memory editing whereby some method
of selecting a subset of the memory base is employed to reduce the number of
comparisons. This paper investigates the use of a modified self-organising map
(SOM) to select a subset of the memory items for comparison. This method
involves reducing the number of comparisons to a value proportional to the
square root of the number of training items. The method is tested on the
identification of base noun-phrases in the Wall Street Journal corpus, using
sections 15 to 18 for training and section 20 for testing.
| 2001 | Computation and Language |
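A sketch of the memory-editing idea, with k-means standing in for the paper's modified self-organising map: route a test item to one cell and compare it only against that cell's members, on the order of sqrt(N) items rather than all N. Data and labels are random:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 8))         # invented feature vectors
y_train = (X_train[:, 0] > 0).astype(int)   # invented labels

n_cells = int(np.sqrt(len(X_train)))        # ~sqrt(N) cells of ~sqrt(N) items
som_like = KMeans(n_clusters=n_cells, n_init=10, random_state=0).fit(X_train)
cells = {c: np.flatnonzero(som_like.labels_ == c) for c in range(n_cells)}

def classify(x):
    cell = som_like.predict(x[None, :])[0]  # route the item to one cell
    members = cells[cell]                   # only these items are compared
    dists = np.linalg.norm(X_train[members] - x, axis=1)
    return y_train[members[np.argmin(dists)]], len(members)

label, compared = classify(rng.normal(size=8))
print(label, f"compared against {compared} of {len(X_train)} items")
```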
Applying Natural Language Generation to Indicative Summarization | Creating indicative summaries that help a searcher decide whether to read a particular document is a difficult task. This paper examines the
indicative summarization task from a generation perspective, by first analyzing
its required content via published guidelines and corpus analysis. We show how
these summaries can be factored into a set of document features, and how an
implemented content planner uses the topicality document feature to create
indicative multidocument query-based summaries.
| 2007 | Computation and Language |
Multidimensional Transformation-Based Learning | This paper presents a novel method that allows a machine learning algorithm
following the transformation-based learning paradigm \cite{brill95:tagging} to
be applied to multiple classification tasks by training jointly and
simultaneously on all fields. The motivation for constructing such a system
stems from the observation that many tasks in natural language processing are
naturally composed of multiple subtasks which need to be resolved
simultaneously; moreover, tasks usually learned in isolation may benefit
from being learned in a joint framework, as the signals for the extra tasks
usually constitute inductive bias.
The proposed algorithm is evaluated in two experiments: in one, the system is
used to jointly predict the part-of-speech and text chunks/baseNP chunks of an
English corpus; and in the second it is used to learn the joint prediction of
word segment boundaries and part-of-speech tagging for Chinese. The results
show that the simultaneous learning of multiple tasks does achieve an
improvement in each task upon training the same tasks sequentially. The
part-of-speech tagging result of 96.63% is state-of-the-art for individual
systems on the particular train/test split.
| 2001 | Computation and Language |
A Bit of Progress in Language Modeling | In the past several years, a number of different language modeling
improvements over simple trigram models have been found, including caching,
higher-order n-grams, skipping, interpolated Kneser-Ney smoothing, and
clustering. We present explorations of variations on, or of the limits of, each
of these techniques, including showing that sentence mixture models may have
more potential. While all of these techniques have been studied separately,
they have rarely been studied in combination. We find some significant
interactions, especially with smoothing and clustering techniques. We compare a
combination of all techniques together to a Katz smoothed trigram model with no
count cutoffs. We achieve perplexity reductions between 38% and 50% (1 bit of
entropy), depending on training data size, as well as a word error rate
reduction of 8.9%. Our perplexity reductions are perhaps the highest reported
compared to a fair baseline. This is the extended version of the paper; it
contains additional details and proofs, and is designed to be a good
introduction to the state of the art in language modeling.
| 2007 | Computation and Language |
Classes for Fast Maximum Entropy Training | Maximum entropy models are considered by many to be one of the most promising
avenues of language modeling research. Unfortunately, long training times make
maximum entropy research difficult. We present a novel speedup technique: we
change the form of the model to use classes. Our speedup works by creating two
maximum entropy models, the first of which predicts the class of each word, and
the second of which predicts the word itself. This factoring of the model leads
to fewer non-zero indicator functions, and faster normalization, achieving
speedups of up to a factor of 35 over one of the best previous techniques. It
also results in typically slightly lower perplexities. The same trick can be
used to speed training of other machine learning techniques, e.g. neural
networks, applied to any problem with a large number of outputs, such as
language modeling.
| 2001 | Computation and Language |
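The class-based factoring can be illustrated on a plain softmax layer, used here as a stand-in for the paper's maximum entropy models: normalise once over C classes and once over the roughly V/C words of one class, instead of over all V words. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
V, C = 10_000, 100                        # vocabulary size, class inventory
word2class = rng.integers(0, C, size=V)   # fixed word -> class assignment

h = rng.normal(size=16)                   # history features (invented)
W_class = rng.normal(size=(C, 16)) * 0.1  # "class model" parameters
W_word = rng.normal(size=(V, 16)) * 0.1   # "word model" parameters

def prob(word):
    c = word2class[word]
    # First model: P(class | history), normalised over C classes only.
    cs = np.exp(W_class @ h)
    p_class = cs[c] / cs.sum()
    # Second model: P(word | class, history), normalised over just the
    # ~V/C words belonging to class c.
    members = np.flatnonzero(word2class == c)
    ws = np.exp(W_word[members] @ h)
    p_word = ws[members == word][0] / ws.sum()
    return p_class * p_word

# ~C + V/C = 200 scores computed instead of V = 10,000.
print(prob(42))
```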
Portability of Syntactic Structure for Language Modeling | The paper presents a study on the portability of statistical syntactic
knowledge in the framework of the structured language model (SLM). We
investigate the impact of porting SLM statistics from the Wall Street Journal
(WSJ) to the Air Travel Information System (ATIS) domain. We compare this
approach to applying the Microsoft rule-based parser (NLPwin) for the ATIS data
and to using a small amount of data manually parsed at UPenn for gathering the
initial SLM statistics. Surprisingly, despite the fact that it performs modestly
in perplexity (PPL), the model initialized on WSJ parses outperforms the other
initialization methods based on in-domain annotated data, achieving a
significant 0.4% absolute and 7% relative reduction in word error rate (WER)
over a baseline system whose word error rate is 5.8%; the improvement measured
relative to the minimum WER achievable on the N-best lists we worked with is
12%.
| 2001 | Computation and Language |
Information Extraction Using the Structured Language Model | The paper presents a data-driven approach to information extraction (viewed
as template filling) using the structured language model (SLM) as a statistical
parser. The task of template filling is cast as constrained parsing using the
SLM. The model is automatically trained from a set of sentences annotated with
frame/slot labels and spans. Training proceeds in stages: first a constrained
syntactic parser is trained such that the parses on training data meet the
specified semantic spans, then the non-terminal labels are enriched to contain
semantic information and finally a constrained syntactic+semantic parser is
trained on the parse trees resulting from the previous stage. Despite the small
amount of training data used, the model is shown to outperform the slot level
accuracy of a simple semantic grammar authored manually for the MiPad ---
personal information management --- task.
| 2001 | Computation and Language |
Anaphora and Discourse Structure | We argue in this paper that many common adverbial phrases generally taken to
signal a discourse relation between syntactically connected units within
discourse structure, instead work anaphorically to contribute relational
meaning, with only indirect dependence on discourse structure. This allows a
simpler discourse structure to provide scaffolding for compositional semantics,
and reveals multiple ways in which the relational meaning conveyed by adverbial
connectives can interact with that associated with discourse structure. We
conclude by sketching out a lexicalised grammar for discourse that facilitates
discourse interpretation as a product of compositional rules, anaphor
resolution and inference.
| 2007 | Computation and Language |
Conceptual Analysis of Lexical Taxonomies: The Case of WordNet Top-Level | In this paper we propose an analysis and an upgrade of WordNet's top-level
synset taxonomy. We briefly review WordNet and identify its main semantic
limitations. Some principles from a forthcoming OntoClean methodology are
applied to the ontological analysis of WordNet. A revised top-level taxonomy is
proposed, which is meant to be more conceptually rigorous, cognitively
transparent, and efficiently exploitable in several applications.
| 2007 | Computation and Language |
Boosting Trees for Anti-Spam Email Filtering | This paper describes a set of comparative experiments for the problem of
automatically filtering unwanted electronic mail messages. Several variants of
the AdaBoost algorithm with confidence-rated predictions [Schapire & Singer,
99] have been applied, which differ in the complexity of the base learners
considered. Two main conclusions can be drawn from our experiments: a) The
boosting-based methods clearly outperform the baseline learning algorithms
(Naive Bayes and Induction of Decision Trees) on the PU1 corpus, achieving very
high levels of the F1 measure; b) Increasing the complexity of the base learners makes it possible to obtain better ``high-precision'' classifiers, which is a
very important issue when misclassification costs are considered.
| 2001 | Computation and Language |
Modelling Semantic Association and Conceptual Inheritance for Semantic
Analysis | Allowing users to interact across language borders is an interesting
challenge for information technology. For the purpose of a computer assisted
language learning system, we have chosen icons for representing meaning on the
input interface, since icons do not depend on a particular language. However, a
key limitation of this type of communication is the expression of articulated
ideas instead of isolated concepts. We propose a method to interpret sequences
of icons as complex messages by reconstructing the relations between concepts,
so as to build conceptual graphs able to represent meaning and to be used for
natural language sentence generation. This method is based on an electronic
dictionary containing semantic information.
| 2001 | Computation and Language |
Integrating Multiple Knowledge Sources for Robust Semantic Parsing | This work explores a new robust approach for Semantic Parsing of unrestricted
texts. Our approach considers Semantic Parsing as a Consistent Labelling
Problem (CLP), allowing the integration of several knowledge types (syntactic
and semantic) obtained from different sources (linguistic and statistic). The
current implementation obtains 95% accuracy in model identification and 72% in
case-role filling.
| 2001 | Computation and Language |
Learning class-to-class selectional preferences | Selectional preference learning methods have usually focused on word-to-class
relations, e.g., a verb selects as its subject a given nominal class. This
paper extends previous statistical models to class-to-class preferences, and
presents a model that learns selectional preferences for classes of verbs. The
motivation is twofold: different senses of a verb may have different
preferences, and some classes of verbs can share preferences. The model is
tested on a word sense disambiguation task which uses subject-verb and
object-verb relationships extracted from a small sense-disambiguated corpus.
| 2001 | Computation and Language |
Knowledge Sources for Word Sense Disambiguation | Two kinds of systems have been defined during the long history of WSD:
principled systems that define which knowledge types are useful for WSD, and
robust systems that use the information sources at hand, such as, dictionaries,
light-weight ontologies or hand-tagged corpora. This paper tries to systematize
the relation between desired knowledge types and actual information sources. We
also compare the results for a wide range of algorithms that have been
evaluated on a common test setting in our research group. We hope that this
analysis will help shift the focus from systems based on information sources
to systems based on knowledge sources. This study might also shed some light on
semi-automatic acquisition of desired knowledge types from existing resources.
| 2001 | Computation and Language |
Enriching WordNet concepts with topic signatures | This paper explores the possibility of enriching the content of existing
ontologies. The overall goal is to overcome the lack of topical links among
concepts in WordNet. Each concept is to be associated to a topic signature,
i.e., a set of related words with associated weights. The signatures can be
automatically constructed from the WWW or from sense-tagged corpora. Both
approaches are compared and evaluated on a word sense disambiguation task. The
results show that it is possible to construct clean signatures from the WWW
using some filtering techniques.
| 2001 | Computation and Language |
Testing for Mathematical Lineation in Jim Crace's "Quarantine" and T. S.
Eliot's "Four Quartets" | The mathematical distinction between prose and verse may be detected in
writings that are not apparently lineated, for example in T. S. Eliot's "Burnt
Norton", and Jim Crace's "Quarantine". In this paper we offer comments on
appropriate statistical methods for such work, and also on the nature of formal
innovation in these two texts. Additional remarks are made on the roots of
lineation as a metrical form, and on the prose-verse continuum.
| 2007 | Computation and Language |
The Open Language Archives Community and Asian Language Resources | The Open Language Archives Community (OLAC) is a new project to build a
worldwide system of federated language archives based on the Open Archives
Initiative and the Dublin Core Metadata Initiative. This paper aims to
disseminate the OLAC vision to the language resources community in Asia, and to
show language technologists and linguists how they can document their tools and
data in such a way that others can easily discover them. We describe OLAC and
the OLAC Metadata Set, then discuss two key issues in the Asian context:
language classification and multilingual resource classification.
| 2001 | Computation and Language |
Richer Syntactic Dependencies for Structured Language Modeling | The paper investigates the use of richer syntactic dependencies in the
structured language model (SLM). We present two simple methods of enriching the
dependencies in the syntactic parse trees used for initializing the SLM. We evaluate the impact of both methods on the perplexity (PPL) and word error rate (WER, N-best rescoring) performance of the SLM. We show that the
new model achieves an improvement in PPL and WER over the baseline results
reported using the SLM on the UPenn Treebank and Wall Street Journal (WSJ)
corpora, respectively.
| 2007 | Computation and Language |
Part-of-Speech Tagging with Two Sequential Transducers | We present a method of constructing and using a cascade consisting of a left-
and a right-sequential finite-state transducer (FST), T1 and T2, for
part-of-speech (POS) disambiguation. Compared to an HMM, this FST cascade has
the advantage of significantly higher processing speed, but at the cost of
slightly lower accuracy. Applications such as Information Retrieval, where the
speed can be more important than accuracy, could benefit from this approach.
In the process of tagging, we first assign every word a unique ambiguity
class c_i that can be looked up in a lexicon encoded by a sequential FST. Every
c_i is denoted by a single symbol, e.g. [ADJ_NOUN], although it represents a
set of alternative tags that a given word can occur with. The sequence of the
c_i of all words of one sentence is the input to our FST cascade. It is mapped
by T1, from left to right, to a sequence of reduced ambiguity classes r_i.
Every r_i is denoted by a single symbol, although it represents a set of
alternative tags. Intuitively, T1 eliminates the less likely tags from c_i,
thus creating r_i. Finally, T2 maps the sequence of r_i, from right to left, to
a sequence of single POS tags t_i. Intuitively, T2 selects the most likely t_i
from every r_i.
The probabilities of all t_i, r_i, and c_i are used only at compile time, not
at run time. They do not (directly) occur in the FSTs, but are "implicitly
contained" in their structure.
| 2000 | Computation and Language |
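A toy re-enactment of the cascade, with lookup tables playing T1 and T2 (the paper compiles genuine sequential transducers from probabilities; every mapping below is invented):

```python
# Lexicon: word -> ambiguity class c_i (one symbol per set of tags).
lexicon = {"the": "[DET]", "old": "[ADJ_NOUN_VERB]",
           "man": "[NOUN_VERB]", "ships": "[NOUN_VERB]"}

# T1: (previous reduced class, current ambiguity class) -> reduced class r_i
T1 = {("<s>", "[DET]"): "[DET]",
      ("[DET]", "[ADJ_NOUN_VERB]"): "[ADJ_NOUN]",
      ("[ADJ_NOUN]", "[NOUN_VERB]"): "[NOUN_VERB]",
      ("[NOUN_VERB]", "[NOUN_VERB]"): "[NOUN_VERB]"}

# T2: (following tag, current reduced class) -> single tag t_i
T2 = {("</s>", "[NOUN_VERB]"): "VERB",
      ("VERB", "[NOUN_VERB]"): "NOUN",
      ("NOUN", "[ADJ_NOUN]"): "ADJ",
      ("ADJ", "[DET]"): "DET"}

words = ["the", "old", "man", "ships"]
classes = [lexicon[w] for w in words]

reduced, state = [], "<s>"
for c in classes:                   # left-to-right pass (T1)
    state = T1[(state, c)]
    reduced.append(state)

tags, state = [], "</s>"
for r in reversed(reduced):         # right-to-left pass (T2)
    state = T2[(state, r)]
    tags.append(state)
tags.reverse()

print(list(zip(words, tags)))
# [('the', 'DET'), ('old', 'ADJ'), ('man', 'NOUN'), ('ships', 'VERB')]
```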
What is the minimal set of fragments that achieves maximal parse
accuracy? | We aim at finding the minimal set of fragments which achieves maximal parse
accuracy in Data Oriented Parsing. Experiments with the Penn Wall Street
Journal treebank show that counts of almost arbitrary fragments within parse
trees are important, leading to improved parse accuracy over previous models
tested on this treebank (a precision of 90.8% and a recall of 90.6%). We
isolate some dependency relations which previous models neglect but which
contribute to higher parse accuracy.
| 2001 | Computation and Language |
Combining semantic and syntactic structure for language modeling | Structured language models for speech recognition have been shown to remedy
the weaknesses of n-gram models. All current structured language models are,
however, limited in that they do not take into account dependencies between
non-headwords. We show that non-headword dependencies contribute to
significantly improved word error rate, and that a data-oriented parsing model
trained on semantically and syntactically annotated data can exploit these
dependencies. This paper also contains the first DOP model trained by means of
a maximum likelihood reestimation procedure, which solves some of the
theoretical shortcomings of previous DOP models.
| 2000 | Computation and Language |
Generating Multilingual Personalized Descriptions of Museum Exhibits -
The M-PIRO Project | This paper provides an overall presentation of the M-PIRO project. M-PIRO is
developing technology that will allow museums to generate automatically textual
or spoken descriptions of exhibits for collections available over the Web or in
virtual reality environments. The descriptions are generated in several
languages from information in a language-independent database and small
fragments of text, and they can be tailored according to the backgrounds of the
users, their ages, and their previous interaction with the system. An authoring
tool allows museum curators to update the system's database and to control the
language and content of the resulting descriptions. Although the project is
still in progress, a Web-based demonstrator that supports English, Greek and
Italian is already available, and it is used throughout the paper to highlight
the capabilities of the emerging technology.
| 2007 | Computation and Language |
A procedure for unsupervised lexicon learning | We describe an incremental unsupervised procedure to learn words from
transcribed continuous speech. The algorithm is based on a conservative and
traditional statistical model, and results of empirical tests show that it is
competitive with other algorithms that have been proposed recently for this
task.
| 2001 | Computation and Language |
A Statistical Model for Word Discovery in Transcribed Speech | A statistical model for segmentation and word discovery in continuous speech
is presented. An incremental unsupervised learning algorithm to infer word
boundaries based on this model is described. Results of empirical tests showing
that the algorithm is competitive with other models that have been used for
similar tasks are also presented.
| 2001 | Computation and Language |
Using a Support-Vector Machine for Japanese-to-English Translation of
Tense, Aspect, and Modality | This paper describes experiments carried out using a variety of
machine-learning methods, including the k-nearest neighbor method that was
used in a previous study, for the translation of tense, aspect, and modality.
It was found that the support-vector machine method was the most precise of all
the methods tested.
| 2001 | Computation and Language |
Part of Speech Tagging in Thai Language Using Support Vector Machine | The elastic-input neuro tagger and hybrid tagger, combined with a neural
network and Brill's error-driven learning, have already been proposed for the
purpose of constructing a practical tagger using as little training data as
possible. When a small Thai corpus is used for training, these taggers have
tagging accuracies of 94.4% and 95.5% (accounting only for the ambiguous words
in terms of the part of speech), respectively. In this study, in order to
construct more accurate taggers we developed new tagging methods using three
machine learning methods: the decision-list, maximum entropy, and support
vector machine methods. We then performed tagging experiments by using these
methods. Our results showed that the support vector machine method has the best
precision (96.1%), and that it is capable of improving the accuracy of tagging
in the Thai language. Finally, we theoretically examined all these methods and discussed how the improvements were achieved.
| 2001 | Computation and Language |
Universal Model for Paraphrasing -- Using Transformation Based on a
Defined Criteria -- | This paper describes a universal model for paraphrasing that transforms
according to defined criteria. We showed that by using different criteria we
could construct different kinds of paraphrasing systems, including one for
answering questions, one for compressing sentences, one for polishing text, and
one for transforming written language into spoken language.
| 2,001 | Computation and Language |
A Straightforward Approach to Morphological Analysis and Synthesis | In this paper we present a lexicon-based approach to the problem of
morphological processing. Full-form words, lemmas and grammatical tags are
interconnected in a DAWG. Thus, the process of analysis/synthesis is reduced to
a search in the graph, which is very fast and can be performed even if several
pieces of information are missing from the input. The contents of the DAWG are
updated using an on-line incremental process. The proposed approach is language
independent and it does not utilize any morphophonetic rules or any other
special linguistic information.
| 2,000 | Computation and Language |
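The key property — that analysis and synthesis are the same search, tolerant of missing input fields — can be shown with a plain-Python stand-in for the DAWG. This sketch keeps the query semantics but not the graph's compression or speed; the entries are invented examples.

```python
# Full forms, lemmas and tags stored as interconnected triples; a query may
# leave any field unspecified (None), just as the graph search tolerates
# missing pieces of input. A linear scan replaces the fast DAWG traversal.
ENTRIES = [
    ("ran",  "run", "VBD"),
    ("runs", "run", "VBZ"),
    ("run",  "run", "VB"),
    ("cats", "cat", "NNS"),
]

def query(form=None, lemma=None, tag=None):
    """Return all triples compatible with the partial specification;
    analysis and synthesis are the same operation."""
    return [t for t in ENTRIES
            if (form is None or t[0] == form)
            and (lemma is None or t[1] == lemma)
            and (tag is None or t[2] == tag)]

print(query(form="ran"))               # analysis:  ran -> (run, VBD)
print(query(lemma="run", tag="VBZ"))   # synthesis: run+VBZ -> runs
```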
Fast Context-Free Grammar Parsing Requires Fast Boolean Matrix
Multiplication | In 1975, Valiant showed that Boolean matrix multiplication can be used for
parsing context-free grammars (CFGs), yielding the asymptotically fastest
(although not practical) CFG parsing algorithm known. We prove a dual result:
any CFG parser with time complexity $O(g n^{3 - \epsilon})$, where $g$ is the
size of the grammar and $n$ is the length of the input string, can be
efficiently converted into an algorithm to multiply $m \times m$ Boolean
matrices in time $O(m^{3 - \epsilon/3})$.
Given that practical, substantially sub-cubic Boolean matrix multiplication
algorithms have been quite difficult to find, we thus explain why there has
been little progress in developing practical, substantially sub-cubic general
CFG parsers. In proving this result, we also develop a formalization of the
notion of parsing.
| 2,002 | Computation and Language |
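The two directions of the correspondence can be written compactly as follows, where $M(m)$ denotes the time to multiply two $m \times m$ Boolean matrices; the $M(\cdot)$ notation and the "up to grammar-dependent factors" reading of Valiant's bound are ours, not the abstract's.

```latex
\underbrace{T_{\text{parse}}(g, n) \,=\, O\bigl(g \cdot M(n)\bigr)}_{\text{Valiant, 1975}}
\qquad\qquad
\underbrace{T_{\text{parse}}(g, n) \,=\, O\bigl(g\, n^{3-\epsilon}\bigr)
  \;\Longrightarrow\; M(m) \,=\, O\bigl(m^{3-\epsilon/3}\bigr)}_{\text{the dual result}}
```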
Using Tree Automata and Regular Expressions to Manipulate Hierarchically
Structured Data | Information, stored or transmitted in digital form, is often structured.
Individual data records are usually represented as hierarchies of their
elements. Together, records form larger structures. Information processing
applications have to take account of this structuring, which assigns different
semantics to different data elements or records. The wide variety of structural
schemata in use today often requires considerable flexibility from
applications--for example, to process information coming from different
sources. To ensure
application interoperability, translators are needed that can convert one
structure into another. This paper puts forward a formal data model aimed at
supporting hierarchical data processing in a simple and flexible way. The model
is based on and extends results of two classical theories, studying finite
string and tree automata. The concept of finite automata and regular languages
is applied to the case of arbitrarily structured tree-like hierarchical data
records, represented as "structured strings." These automata are compared with
classical string and tree automata; the model is shown to be a superset of the
classical models. Regular grammars and expressions over structured strings are
introduced. Regular expression matching and substitution have been widely used
for efficient unstructured text processing; the model described here brings the
power of this proven technique to applications that deal with information
trees. A simple generic alternative is offered to replace today's specialised
ad-hoc approaches. The model unifies structural and content transformations,
providing applications with a single data type. An example scenario of how to
build applications based on this theory is discussed. Further research
directions are outlined.
| 2,007 | Computation and Language |
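A much-simplified toy can convey the flavor of regular expressions over hierarchical records: nested lists stand in for structured strings, and an ordinary regex constrains the sequence of element labels at one level. This flattens only a single level and omits the paper's full automaton machinery; the record and pattern are invented.

```python
import re

# A "structured string" here is a nested list: strings are leaves, lists are
# elements whose first item is the label.
def labels(node):
    return " ".join(x[0] if isinstance(x, list) else x for x in node)

record = [["name", "Ada"], ["email", "ada@example.org"], ["email", "a@b.c"]]
# Require: exactly one name followed by one or more emails.
print(bool(re.fullmatch(r"name( email)+", labels(record))))   # True
```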
Blind Normalization of Speech From Different Channels and Speakers | This paper describes representations of time-dependent signals that are
invariant under any invertible time-independent transformation of the signal
time series. Such a representation is created by rescaling the signal in a
non-linear dynamic manner that is determined by recently encountered signal
levels. This technique may make it possible to normalize signals that are
related by channel-dependent and speaker-dependent transformations, without
having to characterize the form of the signal transformations, which remain
unknown. The technique is illustrated by applying it to the time-dependent
spectra of speech that has been filtered to simulate the effects of different
channels. The experimental results show that the rescaled speech
representations are largely normalized (i.e., channel-independent), despite the
channel-dependence of the raw (unrescaled) speech.
| 2,007 | Computation and Language |
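One concrete way to obtain such history-dependent rescaling is rank (quantile) normalization over a sliding window of recent samples. Note the hedge: this toy is invariant only under *monotonic* invertible channel distortions, a narrower guarantee than the paper claims, and the window size is an arbitrary choice.

```python
from bisect import bisect_left, insort
from collections import deque

def rank_rescale(signal, window=200):
    """Rescale each sample to its empirical quantile among the `window`
    most recent samples; the output is unchanged by any monotonic
    invertible distortion of the channel."""
    hist, sorted_hist, out = deque(), [], []
    for x in signal:
        if sorted_hist:
            out.append(bisect_left(sorted_hist, x) / len(sorted_hist))
        else:
            out.append(0.5)                    # no history yet
        hist.append(x)
        insort(sorted_hist, x)
        if len(hist) > window:                 # forget the oldest sample
            old = hist.popleft()
            sorted_hist.pop(bisect_left(sorted_hist, old))
    return out

clean  = [0.1, 0.5, 0.9, 0.4, 0.8]
warped = [v ** 3 for v in clean]               # a monotonic "channel"
print(rank_rescale(clean))
print(rank_rescale(warped))                    # identical to the line above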
Models and Tools for Collaborative Annotation | The Annotation Graph Toolkit (AGTK) is a collection of software which
facilitates development of linguistic annotation tools. AGTK provides a
database interface which allows applications to use a database server for
persistent storage. This paper discusses various modes of collaborative
annotation and how they can be supported with tools built using AGTK and its
database interface. We describe the relational database schema and API, and
describe a version of the TableTrans tool which supports collaborative
annotation. The remainder of the paper discusses a high-level query language
for annotation graphs, along with optimizations, in support of expressive and
efficient access to the annotations held on a large central server. The paper
demonstrates that it is straightforward to support a variety of different
levels of collaborative annotation with existing AGTK-based tools, with a
minimum of additional programming effort.
| 2,002 | Computation and Language |
Creating Annotation Tools with the Annotation Graph Toolkit | The Annotation Graph Toolkit is a collection of software supporting the
development of annotation tools based on the annotation graph model. The
toolkit includes application programming interfaces for manipulating annotation
graph data and for importing data from other formats. There are interfaces for
the scripting languages Tcl and Python, a database interface, specialized
graphical user interfaces for a variety of annotation tasks, and several sample
applications. This paper describes all the toolkit components for the benefit
of would-be application developers.
| 2,002 | Computation and Language |
TableTrans, MultiTrans, InterTrans and TreeTrans: Diverse Tools Built on
the Annotation Graph Toolkit | Four diverse tools built on the Annotation Graph Toolkit are described. Each
tool associates linguistic codes and structures with time-series data. All are
based on the same software library and tool architecture. TableTrans is for
observational coding, using a spreadsheet whose rows are aligned to a signal.
MultiTrans is for transcribing multi-party communicative interactions recorded
using multi-channel signals. InterTrans is for creating interlinear text
aligned to audio. TreeTrans is for creating and manipulating syntactic trees.
This work demonstrates that the development of diverse tools and re-use of
software components is greatly facilitated by a common high-level application
programming interface for representing the data and managing input/output,
together with a common architecture for managing the interaction of multiple
components.
| 2,002 | Computation and Language |
An Integrated Framework for Treebanks and Multilayer Annotations | Treebank formats and associated software tools are proliferating rapidly,
with little consideration for interoperability. We survey a wide variety of
treebank structures and operations, and show how they can be mapped onto the
annotation graph model, leading to an integrated framework encompassing
tree and non-tree annotations alike. This development opens up new
possibilities for managing and exploiting multilayer annotations.
| 2,002 | Computation and Language |
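The core of the mapping is simple to sketch: each constituent of a bracketed tree becomes an arc, here rendered as a (start, end, label) triple over token positions. This is a minimal illustration of the idea; the real annotation graph model also carries time references and typed fields.

```python
def tree_to_arcs(tree):
    """Convert a nested-list tree [label, child, ...] into
    annotation-graph-style (start, end, label) arcs."""
    arcs = []
    def walk(node, start):
        label, children = node[0], node[1:]
        pos = start
        for child in children:
            if isinstance(child, list):
                pos = walk(child, pos)
            else:                              # a terminal token
                arcs.append((pos, pos + 1, child))
                pos += 1
        arcs.append((start, pos, label))       # the constituent itself
        return pos
    walk(tree, 0)
    return arcs

tree = ["S", ["NP", "dogs"], ["VP", "bark"]]
print(tree_to_arcs(tree))
# [(0, 1, 'dogs'), (0, 1, 'NP'), (1, 2, 'bark'), (1, 2, 'VP'), (0, 2, 'S')]
```

Non-tree annotations (overlapping spans, discontinuities) fit the same arc representation, which is what makes the integrated framework possible.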
The tip-of-the-tongue phenomenon: Irrelevant neural network localization
or disruption of its interneuron links? | On the basis of a recently proposed three-stage quantitative neural
network model of the tip-of-the-tongue (TOT) phenomenon, we study the
possibility that TOT states are caused by disruption of the network's
interneuron links. Using a numerical example, we find that TOTs caused by
interneuron link disruption are (1.5 ± 0.3)×10³ times less probable than
those caused by irrelevant (incomplete) neural network localization. We also
show that the etiology of delayed TOT states cannot be attributed to
disruption of interneuron links.
| 2,007 | Computation and Language |
Seven Dimensions of Portability for Language Documentation and
Description | The process of documenting and describing the world's languages is undergoing
radical transformation with the rapid uptake of new digital technologies for
capture, storage, annotation and dissemination. However, uncritical adoption of
new tools and technologies is leading to resources that are difficult to reuse
and which are less portable than the conventional printed resources they
replace. We begin by reviewing current uses of software tools and digital
technologies for language documentation and description. This sheds light on
how digital language documentation and description are created and managed,
leading to an analysis of seven portability problems under the following
headings: content, format, discovery, access, citation, preservation and
rights. After characterizing each problem we provide a series of value
statements, and this provides the framework for a broad range of best practice
recommendations.
| 2,002 | Computation and Language |
Annotation Graphs and Servers and Multi-Modal Resources: Infrastructure
for Interdisciplinary Education, Research and Development | Annotation graphs and annotation servers offer infrastructure to support the
analysis of human language resources in the form of time-series data such as
text, audio and video. This paper outlines areas of common need among empirical
linguists and computational linguists. After reviewing examples of data and
tools used or under development for each of several areas, it proposes a common
framework for future tool development, data annotation and resource sharing
based upon annotation graphs and servers.
| 2,001 | Computation and Language |
Computational Phonology | Phonology, as it is practiced, is deeply computational. Phonological analysis
is data-intensive and the resulting models are nothing other than specialized
data structures and algorithms. In the past, phonological computation -
managing data and developing analyses - was done manually with pencil and
paper. Increasingly, with the proliferation of affordable computers, IPA fonts
and drawing software, phonologists are seeking to move their computational work
online. Computational Phonology provides the theoretical and technological
framework for this migration, building on methodologies and tools from
computational linguistics. This piece consists of an apology for computational
phonology, a history, and an overview of current research.
| 2,002 | Computation and Language |
Phonology | Phonology is the systematic study of the sounds used in language, their
internal structure, and their composition into syllables, words and phrases.
Computational phonology is the application of formal and computational
techniques to the representation and processing of phonological information.
This chapter will present the fundamentals of descriptive phonology along with
a brief overview of computational phonology.
| 2,002 | Computation and Language |
Querying Databases of Annotated Speech | Annotated speech corpora are databases consisting of signal data along with
time-aligned symbolic `transcriptions'. Such databases are typically
multidimensional, heterogeneous and dynamic. These properties present a number
of tough challenges for representation and query. The temporal nature of the
data adds an additional layer of complexity. This paper presents and harmonises
two independent efforts to model annotated speech databases, one at Macquarie
University and one at the University of Pennsylvania. Various query languages
are described, along with illustrative applications to a variety of analytical
problems. The research reported here forms a part of several ongoing projects
to develop platform-independent open-source tools for creating, browsing,
searching, querying and transforming linguistic databases, and to disseminate
large linguistic databases over the internet.
| 2,000 | Computation and Language |
Integrating selectional preferences in WordNet | Selectional preference learning methods have usually focused on word-to-class
relations, e.g., a verb selects as its subject a given nominal class. This
paper extends previous statistical models to class-to-class preferences, and
presents a model that learns selectional preferences for classes of verbs,
together with an algorithm to integrate the learned preferences in WordNet. The
theoretical motivation is twofold: different senses of a verb may have
different preferences, and classes of verbs may share preferences. On the
practical side, class-to-class selectional preferences can be learned from
untagged corpora (the same as word-to-class), they provide selectional
preferences for less frequent word senses via inheritance, and more important,
they allow for easy integration in WordNet. The model is trained on
subject-verb and object-verb relationships extracted from a small corpus
disambiguated with WordNet senses. Examples are provided illustrating that the
theoretical motivations are well founded, and showing that the approach is
feasible. Experimental results on a word sense disambiguation task are also
provided.
| 2,002 | Computation and Language |
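One way to write the class-to-class preference, under a simple relative-frequency reading, is the following; the notation and the estimator are ours, not necessarily the paper's.

```latex
\mathrm{pref}(c_v, c_n, r) \;=\; P(c_n \mid c_v, r)
\;\approx\; \frac{\operatorname{count}(c_v, c_n, r)}{\operatorname{count}(c_v, r)},
\qquad r \in \{\text{subject}, \text{object}\},
```

where the counts are accumulated over all verb–noun pairs whose WordNet senses fall under the classes $c_v$ and $c_n$, so that preferences learned for one verb class transfer via inheritance to its less frequent members.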
Decision Lists for English and Basque | In this paper we describe the systems we developed for the English (lexical
and all-words) and Basque tasks. They were all supervised systems based on
Yarowsky's Decision Lists. We used Semcor for training in the English all-words
task. We defined different feature sets for each language. For Basque, in order
to extract all the information from the text, we defined features that have not
been used before in the literature, using a morphological analyzer. We also
implemented systems that automatically selected good features and were able to
reach a pre-set precision (85%) at the cost of coverage. The systems that
used all the features were identified as BCU-ehu-dlist-all and the systems that
selected some features as BCU-ehu-dlist-best.
| 2,001 | Computation and Language |
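For readers unfamiliar with Yarowsky's method, a minimal decision-list learner looks like the sketch below: (feature, sense) rules are ranked by a smoothed log-likelihood ratio, and the first matching rule decides. The smoothing constant and toy data are ours.

```python
import math
from collections import defaultdict

def train_decision_list(examples, alpha=0.1):
    """Rank (feature, sense) rules by smoothed log-odds of the sense
    given the feature (Yarowsky-style decision list)."""
    ff = defaultdict(lambda: defaultdict(float))   # feature -> sense -> count
    senses = set()
    for feats, sense in examples:
        senses.add(sense)
        for f in feats:
            ff[f][sense] += 1
    rules = []
    for f, by_sense in ff.items():
        total = sum(by_sense.values())
        for s in senses:
            p = (by_sense[s] + alpha) / (total + alpha * len(senses))
            rules.append((math.log(p / (1 - p)), f, s))
    rules.sort(reverse=True)
    return rules

def classify(rules, feats, default):
    for score, f, s in rules:          # first matching rule decides
        if f in feats:
            return s
    return default

train = [({"bank", "river"}, "shore"), ({"bank", "money"}, "finance"),
         ({"money", "loan"}, "finance")]
rules = train_decision_list(train)
print(classify(rules, {"river", "walk"}, default="finance"))   # shore
```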
The Basque task: did systems perform at the upper bound? | In this paper we describe the Senseval-2 Basque lexical-sample task. The task
comprised 40 words (15 nouns, 15 verbs and 10 adjectives) selected from Euskal
Hiztegia, the main Basque dictionary. Most examples were taken from the
Egunkaria newspaper. The method used to hand-tag the examples produced low
inter-tagger agreement (75%) before arbitration. The four competing systems
attained results well above the most frequent baseline and the best system
scored 75% precision at 100% coverage. The paper includes an analysis of the
tagging procedure used, as well as the performance of the competing systems. In
particular, we argue that inter-tagger agreement is not a real upper bound for
the Basque WSD task.
| 2,001 | Computation and Language |
Memory-Based Shallow Parsing | We present memory-based learning approaches to shallow parsing and apply
these to five tasks: base noun phrase identification, arbitrary base phrase
recognition, clause detection, noun phrase parsing and full parsing. We use
feature selection techniques and system combination methods for improving the
performance of the memory-based learner. Our approach is evaluated on standard
data sets and the results are compared with those of other systems. This reveals
that our approach works well for base phrase identification while its
application towards recognizing embedded structures leaves some room for
improvement.
| 2,002 | Computation and Language |
Unsupervised discovery of morphologically related words based on
orthographic and semantic similarity | We present an algorithm that takes an unannotated corpus as its input, and
returns a ranked list of probable morphologically related pairs as its output.
The algorithm tries to discover morphologically related pairs by looking for
pairs that are both orthographically and semantically similar, where
orthographic similarity is measured in terms of minimum edit distance, and
semantic similarity is measured in terms of mutual information. The procedure
does not rely on a morpheme concatenation model, nor on distributional
properties of word substrings (such as affix frequency). Experiments with
German and English input give encouraging results, both in terms of precision
(proportion of good pairs found at various cutoff points of the ranked list),
and in terms of a qualitative analysis of the types of morphological patterns
discovered by the algorithm.
| 2,007 | Computation and Language |
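The ranking logic is easy to make concrete. Below, orthographic similarity is a normalized Levenshtein distance, as in the paper; the semantic score, which the paper derives from mutual information over corpus co-occurrence, is replaced here by a hard-coded stub so the pairing machinery stays visible.

```python
def edit_distance(a, b):
    """Plain Levenshtein distance (the orthographic signal)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j-1] + 1, prev[j-1] + (ca != cb)))
        prev = cur
    return prev[-1]

def orth_sim(a, b):
    return 1 - edit_distance(a, b) / max(len(a), len(b))

# Stand-in for the corpus-derived mutual-information scores.
COOC = {("walk", "walked"): 0.9, ("walk", "talk"): 0.1}

def sem_sim(a, b):
    return COOC.get((a, b), COOC.get((b, a), 0.0))

words = ["walk", "walked", "talk"]
pairs = [(a, b) for i, a in enumerate(words) for b in words[i+1:]]
ranked = sorted(pairs, key=lambda p: orth_sim(*p) * sem_sim(*p), reverse=True)
print(ranked[0])   # ('walk', 'walked')
```

Note how the combination filters the false friend: "walk"/"talk" is orthographically close but scores low semantically, while "walk"/"walked" scores high on both signals.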
Mostly-Unsupervised Statistical Segmentation of Japanese Kanji Sequences | Given the lack of word delimiters in written Japanese, word segmentation is
generally considered a crucial first step in processing Japanese texts. Typical
Japanese segmentation algorithms rely either on a lexicon and syntactic
analysis or on pre-segmented data; but these are labor-intensive, and the
lexico-syntactic techniques are vulnerable to the unknown word problem. In
contrast, we introduce a novel, more robust statistical method utilizing
unsegmented training data. Despite its simplicity, the algorithm yields
performance on long kanji sequences comparable to and sometimes surpassing that
of state-of-the-art morphological analyzers over a variety of error metrics.
The algorithm also outperforms another mostly-unsupervised statistical
algorithm previously proposed for Chinese.
Additionally, we present a two-level annotation scheme for Japanese to
incorporate multiple segmentation granularities, and introduce two novel
evaluation metrics, both based on the notion of a compatible bracket, that can
account for multiple granularities simultaneously.
| 2,003 | Computation and Language |
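A loose simplification of the counting criterion can be sketched as follows: at each position of a new string, n-grams that sit entirely to the left or right of the position "vote" for a boundary whenever they are more frequent in unsegmented training text than the n-grams straddling the position. This is only a sketch of the intuition, not the paper's exact multi-order voting scheme.

```python
from collections import Counter

def boundary_votes(training_text, s, n=2):
    """Fraction of frequency comparisons won by the n-grams adjacent to
    each position k of s, versus the n-grams straddling k."""
    counts = Counter(training_text[i:i+n]
                     for i in range(len(training_text) - n + 1))
    votes = {}
    for k in range(n, len(s) - n + 1):
        left, right = counts[s[k-n:k]], counts[s[k:k+n]]
        straddling = [counts[s[j:j+n]] for j in range(k - n + 1, k)]
        wins = (sum(left > c for c in straddling)
                + sum(right > c for c in straddling))
        votes[k] = wins / (2 * len(straddling))
    return votes

train = "thecatthedogthecat"
print(boundary_votes(train, "thedog"))   # highest score at k=3 ('the|dog')
```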
Ellogon: A New Text Engineering Platform | This paper presents Ellogon, a multi-lingual, cross-platform, general-purpose
text engineering environment. Ellogon was designed in order to aid both
researchers in natural language processing, as well as companies that produce
language engineering systems for the end-user. Ellogon provides a powerful
TIPSTER-based infrastructure for managing, storing and exchanging textual data,
embedding and managing text processing components as well as visualising
textual data and their associated linguistic information. Among its key
features are full Unicode support, an extensive multi-lingual graphical user
interface, its modular architecture and the reduced hardware requirements.
| 2,007 | Computation and Language |
Monads for natural language semantics | Accounts of semantic phenomena often involve extending types of meanings and
revising composition rules at the same time. The concept of monads allows many
such accounts -- for intensionality, variable binding, quantification and focus
-- to be stated uniformly and compositionally.
| 2,001 | Computation and Language |
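The uniform unit/bind interface can be illustrated with the reader-style monad that underlies intensionality: a meaning of type a becomes a function from worlds to a. The Python rendering below is only an illustration of the pattern (the formal treatment is usually given in typed lambda calculus), and the example lexicon is invented.

```python
# unit injects a plain value; bind threads the world through composition.
def unit(x):
    return lambda world: x

def bind(m, f):
    return lambda world: f(m(world))(world)

# An intensional noun phrase: "the temperature" denotes different values in
# different worlds (worlds modeled as dicts here).
temperature = lambda world: world["temp"]

# A predicate lifted into the monad.
is_high = lambda t: unit(t > 30)

sentence = bind(temperature, is_high)
print(sentence({"temp": 35}), sentence({"temp": 10}))   # True False
```

Swapping in a different monad (state for variable binding, sets for quantification) reuses the same unit/bind plumbing, which is the paper's central point.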
A variable-free dynamic semantics | I propose a variable-free treatment of dynamic semantics. By "dynamic
semantics" I mean analyses of donkey sentences ("Every farmer who owns a donkey
beats it") and other binding and anaphora phenomena in natural language where
meanings of constituents are updates to information states, for instance as
proposed by Groenendijk and Stokhof. By "variable-free" I mean denotational
semantics in which functional combinators replace variable indices and
assignment functions, for instance as advocated by Jacobson.
The new theory presented here achieves a compositional treatment of dynamic
anaphora that does not involve assignment functions, and separates the
combinatorics of variable-free semantics from the particular linguistic
phenomena it treats. Integrating variable-free semantics and dynamic semantics
gives rise to interactions that make new empirical predictions, for example
"donkey weak crossover" effects.
| 2,001 | Computation and Language |
NLTK: The Natural Language Toolkit | NLTK, the Natural Language Toolkit, is a suite of open source program
modules, tutorials and problem sets, providing ready-to-use computational
linguistics courseware. NLTK covers symbolic and statistical natural language
processing, and is interfaced to annotated corpora. Students augment and
replace existing components, learn structured programming by example, and
manipulate sophisticated models from the outset.
| 2,007 | Computation and Language |
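A taste of the toolkit, using today's NLTK API — which has evolved well beyond the release described here, so exact resource names may vary across NLTK versions:

```python
# Requires `pip install nltk` plus the one-time resource downloads below.
import nltk
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

tokens = nltk.word_tokenize("NLTK makes computational linguistics courseware.")
print(nltk.pos_tag(tokens))                   # (token, POS) pairs
print(nltk.FreqDist(tokens).most_common(3))   # simple corpus statistics
```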
Unsupervised Discovery of Morphemes | We present two methods for unsupervised segmentation of words into
morpheme-like units. The model utilized is especially suited for languages with
a rich morphology, such as Finnish. The first method is based on the Minimum
Description Length (MDL) principle and works online. In the second method,
Maximum Likelihood (ML) optimization is used. The quality of the segmentations
is measured using an evaluation method that compares the segmentations produced
to an existing morphological analysis. Experiments on both Finnish and English
corpora show that the presented methods perform well compared to a current
state-of-the-art system.
| 2,007 | Computation and Language |
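The MDL idea is that a segmentation is good when it shortens the total code: bits for a morph lexicon plus bits for the corpus written as morph tokens. The toy cost function below uses a flat per-character lexicon code and a unigram corpus code; it conveys the trade-off but is not the authors' exact code-length formulas.

```python
import math
from collections import Counter

BITS_PER_CHAR = 5   # assumed flat code for lexicon entries

def description_length(segmented_corpus):
    """Two-part MDL cost: lexicon bits + corpus bits."""
    tokens = [m for word in segmented_corpus for m in word]
    counts = Counter(tokens)
    total = len(tokens)
    # Each lexicon entry costs its characters plus an end marker.
    lexicon_cost = sum(BITS_PER_CHAR * (len(m) + 1) for m in counts)
    # Corpus encoded with the unigram morph distribution.
    corpus_cost = -sum(c * math.log2(c / total) for c in counts.values())
    return lexicon_cost + corpus_cost

whole = [["talossa"], ["talon"], ["talossakin"]]
split = [["talo", "ssa"], ["talo", "n"], ["talo", "ssa", "kin"]]
print(description_length(whole), ">", description_length(split))
```

On this tiny Finnish-flavored example the segmented lexicon wins because the shared stem "talo" is stored once, which is exactly why rich morphology suits the approach.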
Bootstrapping Lexical Choice via Multiple-Sequence Alignment | An important component of any generation system is the mapping dictionary, a
lexicon of elementary semantic expressions and corresponding natural language
realizations. Typically, labor-intensive knowledge-based methods are used to
construct the dictionary. We instead propose to acquire it automatically via a
novel multiple-pass algorithm employing multiple-sequence alignment, a
technique commonly used in bioinformatics. Crucially, our method leverages
latent information contained in multi-parallel corpora -- datasets that supply
several verbalizations of the corresponding semantics rather than just one.
We used our techniques to generate natural language versions of
computer-generated mathematical proofs, with good results on both a
per-component and overall-output basis. For example, in evaluations involving a
dozen human judges, our system produced output whose readability and
faithfulness to the semantic input rivaled that of a traditional generation
system.
| 2,007 | Computation and Language |
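The building block of multiple-sequence alignment is pairwise alignment by dynamic programming; the sketch below aligns two verbalizations word by word, inserting gaps where they diverge. The scoring values are illustrative assumptions, and a full MSA would merge many such pairwise alignments.

```python
def align(a, b, gap=-1, match=2, mismatch=-1):
    """Needleman-Wunsch alignment of two word sequences."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i-1] == b[j-1] else mismatch
            score[i][j] = max(score[i-1][j-1] + s,
                              score[i-1][j] + gap,
                              score[i][j-1] + gap)
    out, i, j = [], n, m                       # backtrace, pairing words
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and score[i][j] == score[i-1][j-1]
                + (match if a[i-1] == b[j-1] else mismatch)):
            out.append((a[i-1], b[j-1])); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            out.append((a[i-1], "-")); i -= 1
        else:
            out.append(("-", b[j-1])); j -= 1
    return out[::-1]

x = "hence x equals two".split()
y = "therefore x is two".split()
print(align(x, y))   # pairs alternative wordings, e.g. equals <-> is
```

Columns where several verbalizations disagree are exactly where the latent lexical-choice alternatives surface, which is what the mapping dictionary is harvested from.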
Evaluating the Effectiveness of Ensembles of Decision Trees in
Disambiguating Senseval Lexical Samples | This paper presents an evaluation of an ensemble-based system that
participated in the English and Spanish lexical sample tasks of Senseval-2. The
system combines decision trees of unigrams, bigrams, and co-occurrences into a
single classifier. The analysis is extended to include the Senseval-1 data.
| 2,007 | Computation and Language |