Titles | Abstracts | Years | Categories
---|---|---|---|
Assessing System Agreement and Instance Difficulty in the Lexical Sample
Tasks of Senseval-2 | This paper presents a comparative evaluation among the systems that
participated in the Spanish and English lexical sample tasks of Senseval-2. The
focus is on pairwise comparisons among systems to assess the degree to which
they agree, and on measuring the difficulty of the test instances included in
these tasks.
| 2007 | Computation and Language |
Machine Learning with Lexical Features: The Duluth Approach to
Senseval-2 | This paper describes the sixteen Duluth entries in the Senseval-2 comparative
exercise among word sense disambiguation systems. There were eight pairs of
Duluth systems entered in the Spanish and English lexical sample tasks. These
are all based on standard machine learning algorithms that induce classifiers
from sense-tagged training text, where the context in which ambiguous words
occur is represented by simple lexical features. These are highly portable,
robust methods that can serve as a foundation for more tailored approaches.
| 2007 | Computation and Language |
Thumbs up? Sentiment Classification using Machine Learning Techniques | We consider the problem of classifying documents not by topic, but by overall
sentiment, e.g., determining whether a review is positive or negative. Using
movie reviews as data, we find that standard machine learning techniques
definitively outperform human-produced baselines. However, the three machine
learning methods we employed (Naive Bayes, maximum entropy classification, and
support vector machines) do not perform as well on sentiment classification as
on traditional topic-based categorization. We conclude by examining factors
that make the sentiment classification problem more challenging.
| 2007 | Computation and Language |
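
As a concrete illustration of the experimental setup described above, here is a bag-of-words Naive Bayes classifier (one of the paper's three learners) in scikit-learn. This is a minimal sketch: the inline toy reviews and the binary feature choice are assumptions, not the paper's movie-review data or exact configuration.

```python
# Illustrative only: a Naive Bayes bag-of-words sentiment classifier,
# standing in for the experiment described in the abstract above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical toy reviews; the paper used full movie reviews.
train_texts = ["a gripping, wonderful film", "tedious and poorly acted",
               "brilliant performances throughout", "a dull, lifeless mess"]
train_labels = ["pos", "neg", "pos", "neg"]

model = make_pipeline(CountVectorizer(binary=True), MultinomialNB())
model.fit(train_texts, train_labels)
print(model.predict(["a brilliant, gripping film"]))   # ['pos']
```
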
Unsupervised Learning of Morphology without Morphemes | The first morphological learner based upon the theory of Whole Word
Morphology (Ford et al., 1997) is outlined, and preliminary evaluation results
are presented. The program, Whole Word Morphologizer, takes a POS-tagged
lexicon as input, induces morphological relationships without attempting to
discover or identify morphemes, and is then able to generate new words beyond
the learning sample. The accuracy (precision) of the generated new words is as
high as 80% using the pure Whole Word theory, and 92% after a post-hoc
adjustment is added to the routine.
| 2009 | Computation and Language |
Using the Annotated Bibliography as a Resource for Indicative
Summarization | We report on a language resource consisting of 2000 annotated bibliography
entries, which is being analyzed as part of our research on indicative document
summarization. We show how annotated bibliographies cover certain aspects of
summarization that have not been well-covered by other summary corpora, and
motivate why they constitute an important form to study for information
retrieval. We detail our methodology for collecting the corpus and describe
the document feature markup we introduced to facilitate summary analysis.
We present the characteristics of the corpus, methods of collection, and show
its use in finding the distribution of types of information included in
indicative summaries and their relative ordering within the summaries.
| 2002 | Computation and Language |
A Method for Open-Vocabulary Speech-Driven Text Retrieval | While recent retrieval techniques do not limit the number of index terms,
out-of-vocabulary (OOV) words are crucial in speech recognition. Aiming at
retrieving information with spoken queries, we fill the gap between speech
recognition and text retrieval in terms of the vocabulary size. Given a spoken
query, we generate a transcription and detect OOV words through speech
recognition. We then match detected OOV words to terms indexed in a target
collection to complete the transcription, and search the collection for
documents relevant to the completed transcription. We show the effectiveness of
our method by way of experiments.
| 2002 | Computation and Language |
Japanese/English Cross-Language Information Retrieval: Exploration of
Query Translation and Transliteration | Cross-language information retrieval (CLIR), where queries and documents are
in different languages, has of late become one of the major topics within the
information retrieval community. This paper proposes a Japanese/English CLIR
system, where we combine query translation and retrieval modules. We
currently target the retrieval of technical documents, and therefore the
performance of our system is highly dependent on the quality of the translation
of technical terms. However, the technical term translation is still
problematic in that technical terms are often compound words, and thus new
terms are progressively created by combining existing base words. In addition,
Japanese often represents loanwords based on its special phonogram.
Consequently, existing dictionaries find it difficult to achieve sufficient
coverage. To counter the first problem, we produce a Japanese/English
dictionary for base words, and translate compound words on a word-by-word
basis. We also use a probabilistic method to resolve translation ambiguity. For
the second problem, we use a transliteration method, which maps words
unlisted in the base word dictionary to their phonetic equivalents in the
target language. We evaluate our system using a test collection for CLIR, and
show that both the compound word translation and transliteration methods
improve the system performance.
| 2001 | Computation and Language |
Interleaved semantic interpretation in environment-based parsing | This paper extends a polynomial-time parsing algorithm that resolves
structural ambiguity in input to a speech-based user interface by calculating
and comparing the denotations of rival constituents, given some model of the
interfaced application environment (Schuler 2001). The algorithm is extended to
incorporate a full set of logical operators, including quantifiers and
conjunctions, into this calculation without increasing the complexity of the
overall algorithm beyond polynomial time, both in terms of the length of the
input and the number of entities in the environment model.
| 2002 | Computation and Language |
A Probabilistic Method for Analyzing Japanese Anaphora Integrating Zero
Pronoun Detection and Resolution | This paper proposes a method to analyze Japanese anaphora, in which zero
pronouns (omitted obligatory cases) are used to refer to preceding entities
(antecedents). Unlike the case of general coreference resolution, zero pronouns
have to be detected prior to resolution because they are not expressed in
discourse. Our method integrates two probability parameters to perform zero
pronoun detection and resolution in a single framework. The first parameter
quantifies the degree to which a given case is a zero pronoun. The second
parameter quantifies the degree to which a given entity is the antecedent for a
detected zero pronoun. To compute these parameters efficiently, we use corpora
with/without annotations of anaphoric relations. We show the effectiveness of
our method by way of experiments.
| 2002 | Computation and Language |
Applying a Hybrid Query Translation Method to Japanese/English
Cross-Language Patent Retrieval | This paper applies an existing query translation method to cross-language
patent retrieval. In our method, multiple dictionaries are used to derive all
possible translations for an input query, and collocational statistics are used
to resolve translation ambiguity. We used Japanese/English parallel patent
abstracts to perform comparative experiments, where our method outperformed a
simple dictionary-based query translation method, and achieved 76% of
monolingual retrieval in terms of average precision.
| 2000 | Computation and Language |
PRIME: A System for Multi-lingual Patent Retrieval | Given the growing number of patents filed in multiple countries, users are
interested in retrieving patents across languages. We propose a multi-lingual
patent retrieval system, which translates a user query into the target
language, searches a multilingual database for patents relevant to the query,
and improves the browsing efficiency by way of machine translation and
clustering. Our system also extracts new translations from patent families
consisting of comparable patents, to enhance the translation dictionary.
| 2001 | Computation and Language |
Language Modeling for Multi-Domain Speech-Driven Text Retrieval | We report experimental results associated with speech-driven text retrieval,
which facilitates retrieving information in multiple domains with spoken
queries. Since users speak contents related to a target collection, we produce
language models used for speech recognition based on the target collection, so
as to improve both the recognition and retrieval accuracy. Experiments using
existing test collections combined with dictated queries showed the
effectiveness of our method.
| 2001 | Computation and Language |
Speech-Driven Text Retrieval: Using Target IR Collections for
Statistical Language Model Adaptation in Speech Recognition | Speech recognition has of late become a practical technology for real world
applications. Aiming at speech-driven text retrieval, which facilitates
retrieving information with spoken queries, we propose a method to integrate
speech recognition and retrieval methods. Since users speak contents related to
a target collection, we adapt statistical language models used for speech
recognition based on the target collection, so as to improve both the
recognition and retrieval accuracy. Experiments using existing test collections
combined with dictated queries showed the effectiveness of our method.
| 2002 | Computation and Language |
Using eigenvectors of the bigram graph to infer morpheme identity | This paper describes the results of some experiments exploring statistical
methods to infer syntactic behavior of words and morphemes from a raw corpus in
an unsupervised fashion. It shares certain points in common with Brown et al.
(1992) and work that has grown out of that: it employs statistical techniques
to analyze syntactic behavior based on what words occur adjacent to a given
word. However, we use an eigenvector decomposition of a nearest-neighbor graph
to produce a two-dimensional rendering of the words of a corpus in which words
of the same syntactic category tend to form neighborhoods. We exploit this
technique for extending the value of automatic learning of morphology. In
particular, we look at the suffixes derived from a corpus by unsupervised
learning of morphology, and we ask which of these suffixes have a consistent
syntactic function (e.g., in English, -tion is primarily a mark of nouns, but
-s marks both noun plurals and 3rd person present on verbs), and we determine
that this method works well for this task.
| 2007 | Computation and Language |
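
The pipeline sketched in the abstract above (bigram contexts, a nearest-neighbor graph, an eigenvector decomposition) can be made concrete in a few lines of numpy. A minimal sketch, assuming a toy corpus, right-neighbor contexts only, and two neighbors per word; none of these choices are claimed to match the authors' setup.

```python
# A minimal sketch (not the authors' code) of placing words in two
# dimensions via eigenvectors of a nearest-neighbor graph built from
# bigram statistics.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Count right-neighbor contexts for each word (one row per word).
C = np.zeros((V, V))
for a, b in zip(corpus, corpus[1:]):
    C[idx[a], idx[b]] += 1

# Cosine similarities between context vectors, then a symmetric 2-NN graph.
N = C / (np.linalg.norm(C, axis=1, keepdims=True) + 1e-12)
S = N @ N.T
A = np.zeros_like(S)
for i in range(V):
    for j in np.argsort(-S[i])[1:3]:     # two nearest neighbors, skipping self
        A[i, j] = A[j, i] = 1.0

# The second and third eigenvectors of the graph Laplacian give 2-D
# coordinates in which syntactically similar words tend to cluster.
L = np.diag(A.sum(axis=1)) - A
_, vecs = np.linalg.eigh(L)
for w in vocab:
    print(f"{w:>4}: ({vecs[idx[w], 1]:+.3f}, {vecs[idx[w], 2]:+.3f})")
```
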
Analysis of Titles and Readers For Title Generation Centered on the
Readers | The title of a document has two roles: to give a compact summary and to lead
the reader to read the document. Conventional title generation focuses on
finding key expressions from the author's wording in the document to give a
compact summary and pays little attention to the reader's interest. To make the
title play its second role properly, it is indispensable to clarify the content
(``what to say'') and wording (``how to say'') of titles that are effective to
attract the target reader's interest. In this article, we first identify
typical content and wording of titles aimed at general readers in a comparative
study between titles of technical papers and headlines rewritten for
newspapers. Next, we describe the results of a questionnaire survey on the
effects of the content and wording of titles on the reader's interest. The
survey of general and knowledgeable readers shows both common and different
tendencies in interest.
| 2002 | Computation and Language |
Efficient Deep Processing of Japanese | We present a broad coverage Japanese grammar written in the HPSG formalism
with MRS semantics. The grammar is created for use in real world applications,
such that robustness and performance issues play an important role. It is
connected to a POS tagging and word segmentation tool. This grammar is being
developed in a multilingual context, requiring MRS structures that are easily
comparable across languages.
| 2007 | Computation and Language |
Question Answering over Unstructured Data without Domain Restrictions | Information needs are naturally represented as questions. Automatic
Natural-Language Question Answering (NLQA) has only recently become a practical
task on a larger scale and without domain constraints.
This paper gives a brief introduction to the field, its history and the
impact of systematic evaluation competitions.
It is then demonstrated that an NLQA system for English can be built and
evaluated in a very short time using off-the-shelf parsers and thesauri. The
system is based on Robust Minimal Recursion Semantics (RMRS) and is portable
with respect to the parser used as a frontend. It applies atomic term
unification supported by question classification and WordNet lookup for
semantic similarity matching of parsed question representation and free text.
| 2007 | Computation and Language |
A continuation semantics of interrogatives that accounts for Baker's
ambiguity | Wh-phrases in English can appear both raised and in-situ. However, only
in-situ wh-phrases can take semantic scope beyond the immediately enclosing
clause. I present a denotational semantics of interrogatives that naturally
accounts for these two properties. It neither invokes movement or economy, nor
posits lexical ambiguity between raised and in-situ occurrences of the same
wh-phrase. My analysis is based on the concept of continuations. It uses a
novel type system for higher-order continuations to handle wide-scope
wh-phrases while remaining strictly compositional. This treatment sheds light
on the combinatorics of interrogatives as well as other kinds of so-called
A'-movement.
| 2002 | Computation and Language |
Using the DIFF Command for Natural Language Processing | Diff is a software program that detects differences between two data sets and
is useful in natural language processing. This paper shows several examples of
the application of diff. They include the detection of differences between two
different datasets, extraction of rewriting rules, merging of two different
datasets, and the optimal matching of two different data sets. Since diff comes
with any standard UNIX system, it is readily available and very easy to use.
Our studies showed that diff is a practical tool for research into natural
language processing.
| 2007 | Computation and Language |
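
The same idea can be reproduced without the UNIX tool: Python's difflib exposes diff-style edit opcodes over token sequences. A small sketch with made-up sentences; this stands in for, rather than reproduces, the paper's diff-based experiments.

```python
# Detect insertions/deletions/replacements between two sentence variants --
# the raw material for, e.g., extracting rewriting rules as described above.
from difflib import SequenceMatcher

before = "the cat sat on the mat".split()
after = "the black cat lay on the mat".split()

for op, i1, i2, j1, j2 in SequenceMatcher(None, before, after).get_opcodes():
    if op != "equal":
        print(op, before[i1:i2], "->", after[j1:j2])
# insert [] -> ['black']
# replace ['sat'] -> ['lay']
```
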
Evaluation of Coreference Rules on Complex Narrative Texts | This article studies the problem of assessing the relevance of each of the
rules of a reference resolution system. The reference solver described here
stems from a formal model of reference and is integrated in a reference processing
workbench. Evaluation of the reference resolution is essential, as it enables
differential evaluation of individual rules. Numerical values of these measures
are given, and discussed, for simple selection rules and other processing
rules; such measures are then studied for numerical parameters.
| 1998 | Computation and Language |
Three New Methods for Evaluating Reference Resolution | Reference resolution on extended texts (several thousand references) cannot
be evaluated manually. An evaluation algorithm has been proposed for the MUC
tests, using equivalence classes for the coreference relation. However, we show
here that this algorithm is too indulgent, yielding good scores even for poor
resolution strategies. We elaborate on the same formalism to propose two new
evaluation algorithms, comparing them first with the MUC algorithm and then
giving results on a variety of examples. A third algorithm using only
distributional comparison of equivalence classes is finally described; it
assesses the relative importance of the recall vs. precision errors.
| 1998 | Computation and Language |
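
For reference, the MUC equivalence-class algorithm that the abstract criticizes can be stated compactly. The sketch below implements the standard link-based MUC recall (precision is the same computation with the arguments swapped); it is not the authors' proposed replacement algorithms.

```python
def muc_recall(key, response):
    """key, response: lists of disjoint sets of mention ids (chains).
    MUC recall = sum(|S| - |p(S)|) / sum(|S| - 1) over key chains S."""
    num = den = 0
    for S in key:
        parts = [S & R for R in response if S & R]   # partition induced on S
        covered = set().union(*parts) if parts else set()
        p = len(parts) + len(S - covered)            # missing mentions: singletons
        num += len(S) - p
        den += len(S) - 1
    return num / den

key      = [{1, 2, 3, 4}]
response = [{1, 2}, {3, 4}]
print(muc_recall(key, response))    # 0.666...: one coreference link missed
print(muc_recall(response, key))    # MUC precision: swap the arguments
```
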
Cooperation between Pronoun and Reference Resolution for Unrestricted
Texts | Anaphora resolution is envisaged in this paper as part of the reference
resolution process. A general open architecture is proposed, which can be
particularized and configured in order to simulate some classic anaphora
resolution methods. With the aim of improving pronoun resolution, the system
takes advantage of elementary cues about characters of the text, which are
represented through a particular data structure. In its most robust
configuration, the system uses only a general lexicon, a local morpho-syntactic
parser and a dictionary of synonyms. A short comparative corpus analysis shows
that narrative texts are the most suitable for testing such a system.
| 1998 | Computation and Language |
Reference Resolution Beyond Coreference: a Conceptual Frame and its
Application | A model for reference use in communication is proposed, from a
representationist point of view. Both the sender and the receiver of a message
handle representations of their common environment, including mental
representations of objects. Reference resolution by a computer is viewed as the
construction of object representations using referring expressions from the
discourse, whereas often only coreference links between such expressions are
looked for. Differences between these two approaches are discussed. The model
has been implemented with elementary rules, and tested on complex narrative
texts (hundreds to thousands of referring expressions). The results support the
mental representations paradigm.
| 1998 | Computation and Language |
A Chart-Parsing Algorithm for Efficient Semantic Analysis | In some contexts, well-formed natural language cannot be expected as input to
information or communication systems. In these contexts, the use of
grammar-independent input (sequences of uninflected semantic units, e.g.,
language-independent icons) can be an answer to the users' needs. A semantic
analysis can be performed, based on lexical semantic knowledge: it is
equivalent to a dependency analysis with no syntactic or morphological clues.
However, this requires that an intelligent system should be able to interpret
this input with reasonable accuracy and in reasonable time. Here we propose a
method allowing a purely semantic-based analysis of sequences of semantic
units. It uses an algorithm inspired by the idea of ``chart parsing'' known in
Natural Language Processing, which stores intermediate parsing results in order
to bring the calculation time down. In comparison with declarative logic
programming, where the calculation time, left to a Prolog engine, is
hyperexponential, this method brings the calculation time down to
polynomial time, where the order depends on the valency of the predicates.
| 2002 | Computation and Language |
Rerendering Semantic Ontologies: Automatic Extensions to UMLS through
Corpus Analytics | In this paper, we discuss the utility and deficiencies of existing ontology
resources for a number of language processing applications. We describe a
technique for increasing the semantic type coverage of a specific ontology, the
National Library of Medicine's UMLS, with the use of robust finite state
methods used in conjunction with large-scale corpus analytics of the domain
corpus. We call this technique "semantic rerendering" of the ontology. This
research has been done in the context of Medstract, a joint Brandeis-Tufts
effort aimed at developing tools for analyzing biomedical language (i.e.,
Medline), as well as creating targeted databases of bio-entities, biological
relations, and pathway data for biological researchers. Motivating the current
research is the need to have robust and reliable semantic typing of syntactic
elements in the Medline corpus, in order to improve the overall performance of
the information extraction applications mentioned above.
| 2002 | Computation and Language |
The partition semantics of questions, syntactically | Groenendijk and Stokhof (1984, 1996; Groenendijk 1999) provide a logically
attractive theory of the semantics of natural language questions, commonly
referred to as the partition theory. Two central notions in this theory are
entailment between questions and answerhood. For example, the question "Who is
going to the party?" entails the question "Is John going to the party?", and
"John is going to the party" counts as an answer to both. Groenendijk and
Stokhof define these two notions in terms of partitions of a set of possible
worlds.
We provide a syntactic characterization of entailment between questions and
answerhood. We show that answers are, in some sense, exactly those formulas
that are built up from instances of the question. This result lets us compare
the partition theory with other approaches to interrogation -- both linguistic
analyses, such as Hamblin's and Karttunen's semantics, and computational
systems, such as Prolog. Our comparison separates a notion of answerhood into
three aspects: equivalence (when two questions or answers are interchangeable),
atomic answers (what instances of a question count as answers), and compound
answers (how answers compose).
| 2002 | Computation and Language |
Question answering: from partitions to Prolog | We implement Groenendijk and Stokhof's partition semantics of questions in a
simple question answering algorithm. The algorithm is sound, complete, and
based on tableau theorem proving. The algorithm relies on a syntactic
characterization of answerhood: Any answer to a question is equivalent to some
formula built up only from instances of the question. We prove this
characterization by translating the logic of interrogation to classical
predicate logic and applying Craig's interpolation theorem.
| 2002 | Computation and Language |
Introduction to the CoNLL-2002 Shared Task: Language-Independent Named
Entity Recognition | We describe the CoNLL-2002 shared task: language-independent named entity
recognition. We give background information on the data sets and the evaluation
method, present a general overview of the systems that have taken part in the
task and discuss their performance.
| 2002 | Computation and Language |
Probabilistic Parsing Strategies | We present new results on the relation between purely symbolic context-free
parsing strategies and their probabilistic counterparts. Such parsing
strategies are seen as constructions of push-down devices from grammars. We
show that preservation of probability distribution is possible under two
conditions, viz. the correct-prefix property and the property of strong
predictiveness. These results generalize existing results in the literature
that were obtained by considering parsing strategies in isolation. From our
general results we also derive negative results on so-called generalized LR
parsing.
| 2007 | Computation and Language |
Answering Subcognitive Turing Test Questions: A Reply to French | Robert French has argued that a disembodied computer is incapable of passing
a Turing Test that includes subcognitive questions. Subcognitive questions are
designed to probe the network of cultural and perceptual associations that
humans naturally develop as we live, embodied and embedded in the world. In
this paper, I show how it is possible for a disembodied computer to answer
subcognitive questions appropriately, contrary to French's claim. My approach
to answering subcognitive questions is to use statistical information extracted
from a very large collection of text. In particular, I show how it is possible
to answer a sample of subcognitive questions taken from French, by issuing
queries to a search engine that indexes about 350 million Web pages. This
simple algorithm may shed light on the nature of human (sub-) cognition, but
the scope of this paper is limited to demonstrating that French is mistaken: a
disembodied computer can answer subcognitive questions.
| 2001 | Computation and Language |
Unsupervised Language Acquisition: Theory and Practice | In this thesis I present various algorithms for the unsupervised machine
learning of aspects of natural languages using a variety of statistical models.
The scientific object of the work is to examine the validity of the so-called
Argument from the Poverty of the Stimulus advanced in favour of the proposition
that humans have language-specific innate knowledge. I start by examining an a
priori argument based on Gold's theorem, that purports to prove that natural
languages cannot be learned, and some formal issues related to the choice of
statistical grammars rather than symbolic grammars. I present three novel
algorithms for learning various parts of natural languages: first, an algorithm
for the induction of syntactic categories from unlabelled text using
distributional information, that can deal with ambiguous and rare words;
secondly, a set of algorithms for learning morphological processes in a variety
of languages, including languages such as Arabic with non-concatenative
morphology; thirdly an algorithm for the unsupervised induction of a
context-free grammar from tagged text. I carefully examine the interaction
between the various components, and show how these algorithms can form the
basis for an empiricist model of language acquisition. I therefore conclude that
the Argument from the Poverty of the Stimulus is unsupported by the evidence.
| 2001 | Computation and Language |
An Algorithm for Aligning Sentences in Bilingual Corpora Using Lexical
Information | In this paper we describe an algorithm for aligning sentences with their
translations in a bilingual corpus using lexical information of the languages.
Existing efficient algorithms ignore word identities and consider only the
sentence lengths (Brown, 1991; Gale and Church, 1993). For a sentence in the
source language text, the proposed algorithm picks the most likely translation
from the target language text using lexical information and certain heuristics.
It does not do statistical analysis using sentence lengths. The algorithm is
language independent. It also aids in detecting addition and deletion of text
in translations. The algorithm gives comparable results with the existing
algorithms in most of the cases while it does better in cases where statistical
algorithms do not give good results.
| 2007 | Computation and Language |
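
The core scoring idea reads naturally as a small function: for each source sentence, choose the target sentence sharing the most dictionary translations. A minimal sketch; the toy lexicon, the unweighted overlap score, and the greedy one-best matching are assumptions, not the paper's exact heuristics.

```python
def align(src_sents, tgt_sents, lexicon):
    """lexicon maps a source word to a set of possible target words."""
    alignment = []
    for s in src_sents:
        def score(t):
            tgt_words = set(t.split())
            return sum(1 for w in s.split() if lexicon.get(w, set()) & tgt_words)
        best = max(tgt_sents, key=score)
        alignment.append((s, best if score(best) > 0 else None))
    return alignment

lexicon = {"house": {"maison"}, "red": {"rouge"}, "cat": {"chat"}}
print(align(["the red house", "a cat"],
            ["une maison rouge", "un chat"], lexicon))
# [('the red house', 'une maison rouge'), ('a cat', 'un chat')]
```
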
Building an Open Language Archives Community on the OAI Foundation | The Open Language Archives Community (OLAC) is an international partnership
of institutions and individuals who are creating a worldwide virtual library of
language resources. The Dublin Core (DC) Element Set and the OAI Protocol have
provided a solid foundation for the OLAC framework. However, we need more
precision in community-specific aspects of resource description than is offered
by DC. Furthermore, many of the institutions and individuals who might
participate in OLAC do not have the technical resources to support the OAI
protocol. This paper presents our solutions to these two problems. To address
the first, we have developed an extensible application profile for language
resource metadata. To address the second, we have implemented Vida (the virtual
data provider) and Viser (the virtual service provider), which permit community
members to provide data and services without having to implement the OAI
protocol. These solutions are generic and could be adopted by other specialized
subcommunities.
| 2003 | Computation and Language |
Empirical Methods for Compound Splitting | Compounded words are a challenge for NLP applications such as machine
translation (MT). We introduce methods to learn splitting rules from
monolingual and parallel corpora. We evaluate them against a gold standard and
measure their impact on performance of statistical MT systems. Results show
accuracy of 99.1% and performance gains for MT of 0.039 BLEU on a
German-English noun phrase translation task.
| 2007 | Computation and Language |
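
One frequency-based splitting metric of the kind the paper evaluates can be sketched as follows: choose the split whose parts maximize the geometric mean of corpus counts, letting the unsplit word compete as a one-part split. The toy counts, the part-count limit, and the omission of German linking elements are simplifications.

```python
from itertools import combinations

def best_split(word, counts, min_len=3, max_parts=3):
    """Pick the split of `word` into known parts with the highest
    geometric mean of corpus counts (the unsplit word competes too)."""
    n = len(word)
    candidates = [(word,)]
    for k in range(1, max_parts):
        for cuts in combinations(range(min_len, n - min_len + 1), k):
            parts = tuple(word[i:j] for i, j in zip((0,) + cuts, cuts + (n,)))
            if all(p in counts for p in parts):
                candidates.append(parts)
    def geo_mean(parts):
        prod = 1.0
        for p in parts:
            prod *= counts.get(p, 0)
        return prod ** (1.0 / len(parts))
    return max(candidates, key=geo_mean)

counts = {"water": 1000, "proof": 800, "waterproof": 5}
print(best_split("waterproof", counts))   # ('water', 'proof')
```
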
About compression of vocabulary in computer oriented languages | The author uses the entropy of the ideal Bose-Einstein gas to minimize losses
in computer-oriented languages.
| 2007 | Computation and Language |
Glottochronology and problems of protolanguage reconstruction | A method for constructing genealogical trees of languages is proposed.
| 2007 | Computation and Language |
Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence
Alignment | We address the text-to-text generation problem of sentence-level paraphrasing
-- a phenomenon distinct from and more difficult than word- or phrase-level
paraphrasing. Our approach applies multiple-sequence alignment to sentences
gathered from unannotated comparable corpora: it learns a set of paraphrasing
patterns represented by word lattice pairs and automatically determines how to
apply these patterns to rewrite new sentences. The results of our evaluation
experiments show that the system derives accurate paraphrases, outperforming
baseline systems.
| 2007 | Computation and Language |
Blind Normalization of Speech From Different Channels | We show how to construct a channel-independent representation of speech that
has propagated through a noisy reverberant channel. This is done by blindly
rescaling the cepstral time series by a non-linear function, with the form of
this scale function being determined by previously encountered cepstra from
that channel. The rescaled form of the time series is an invariant property of
it in the following sense: it is unaffected if the time series is transformed
by any time-independent invertible distortion. Because a linear channel with
stationary noise and impulse response transforms cepstra in this way, the new
technique can be used to remove the channel dependence of a cepstral time
series. In experiments, the method achieved greater channel-independence than
cepstral mean normalization, and it was comparable to the combination of
cepstral mean normalization and spectral subtraction, despite the fact that no
measurements of channel noise or reverberations were required (unlike spectral
subtraction).
| 2009 | Computation and Language |
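
For concreteness, here is the cepstral mean normalization baseline that the abstract compares against, plus mean-and-variance normalization as the simplest related rescaling. The paper's blind non-linear rescaling itself learns a channel-specific scale function and is not reproduced here.

```python
import numpy as np

def cmn(cepstra):
    """Cepstral mean normalization; cepstra: (frames, coeffs) from one channel."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

def cmvn(cepstra):
    """Mean *and* variance normalization, one step closer to full rescaling."""
    mu = cepstra.mean(axis=0, keepdims=True)
    sd = cepstra.std(axis=0, keepdims=True) + 1e-8
    return (cepstra - mu) / sd

frames = np.random.default_rng(0).normal(3.0, 2.0, size=(100, 13))
print(cmvn(frames).mean(axis=0).round(6))   # ~0 for every coefficient
```
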
Glottochronologic Retrognostic of Language System | A glottochronologic retrognostic of a language system is proposed.
| 2007 | Computation and Language |
"I'm sorry Dave, I'm afraid I can't do that": Linguistics, Statistics,
and Natural Language Processing circa 2001 | A brief, general-audience overview of the history of natural language
processing, focusing on data-driven approaches. Topics include "Ambiguity and
language analysis", "Firth things first", "A 'C' change", and "The empiricists
strike back".
| 2004 | Computation and Language |
An XML based Document Suite | We report on the current state of development of a document suite and its
applications. This collection of tools for the flexible and robust processing
of documents in German is based on the use of XML as unifying formalism for
encoding input and output data as well as process information. It is organized
in modules with limited responsibilities that can easily be combined into
pipelines to solve complex tasks. Strong emphasis is laid on a number of
techniques to deal with lexical and conceptual gaps that are typical when
starting a new application.
| 2002 | Computation and Language |
Exploiting Sublanguage and Domain Characteristics in a Bootstrapping
Approach to Lexicon and Ontology Creation | It is very costly to build up lexical resources and domain ontologies.
Especially when confronted with a new application domain, lexical gaps and
poor coverage of domain concepts are a problem for the successful exploitation
of natural language document analysis systems that need and exploit such
knowledge sources. In this paper we report on ongoing experiments with
`bootstrapping techniques' for lexicon and ontology creation.
| 2002 | Computation and Language |
An Approach for Resource Sharing in Multilingual NLP | In this paper we describe an approach for the analysis of documents in German
and English with a shared pool of resources. For the analysis of German
documents we use a document suite, which supports the user in tasks like
information retrieval and information extraction. The core of the document
suite is based on our tool XDOC. Now we want to exploit these methods for the
analysis of English documents as well. For this aim we need a multilingual
presentation format of the resources. These resources must be transformed into
a unified format, in which we can add information about the linguistic
characteristics of the language depending on the analyzed documents. In this
paper we describe our approach for such an exchange model for multilingual
resources based on XML.
| 2002 | Computation and Language |
Approximate Grammar for Information Extraction | In this paper, we present the concept of approximate grammar and how it can
be used to extract information from a document. As the structure of
informational strings cannot be defined well in a document, we cannot use the
conventional grammar rules to represent the information. Hence, the need arises
to design an approximate grammar that can be used effectively to accomplish the
task of Information extraction. Approximate grammars are a novel step in this
direction. The rules of an approximate grammar can be given by a user or the
machine can learn the rules from an annotated document. We have performed our
experiments in both the above areas and the results have been impressive.
| 2002 | Computation and Language |
Factorization of Language Models through Backing-Off Lattices | Factorization of statistical language models is the task of resolving the
most discriminative model into factored models and combining them into a new
model so as to provide a better estimate. Most previous work focuses
on factorizing models of sequential events, each of which allows only one
factorization manner. To enable parallel factorization, which allows a model
event to be resolved in more than one way at the same time, we propose a
general framework, where we adopt a backing-off lattice to reflect parallel
factorizations and to define the paths along which a model is resolved into
factored models, we use a mixture model to combine parallel paths in the
lattice, and generalize Katz's backing-off method to integrate all the mixture
models obtained by traversing the entire lattice. Based on this framework, we
formulate two types of model factorizations that are used in natural language
modeling.
| 2007 | Computation and Language |
Techniques for effective vocabulary selection | The vocabulary of a continuous speech recognition (CSR) system is a
significant factor in determining its performance. In this paper, we present
three principled approaches to select the target vocabulary for a particular
domain by trading off between the target out-of-vocabulary (OOV) rate and
vocabulary size. We evaluate these approaches against an ad-hoc baseline
strategy. Results are presented in the form of OOV rate graphs plotted against
increasing vocabulary size for each technique.
| 2007 | Computation and Language |
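
The trade-off the abstract describes is easy to make concrete with a frequency-ranked baseline akin to the ad-hoc strategy mentioned above: rank words by training-corpus frequency, grow the vocabulary, and measure the OOV rate on held-out text. The toy corpora below are assumptions.

```python
from collections import Counter

def oov_curve(train_tokens, heldout_tokens, sizes):
    """OOV rate on held-out text for frequency-ranked vocabularies."""
    by_freq = [w for w, _ in Counter(train_tokens).most_common()]
    curve = []
    for n in sizes:
        vocab = set(by_freq[:n])
        oov = sum(1 for w in heldout_tokens if w not in vocab)
        curve.append((n, oov / len(heldout_tokens)))
    return curve

train = "a a a b b c d d d d e".split()
heldout = "a b c e f".split()
print(oov_curve(train, heldout, sizes=[1, 3, 5]))
# [(1, 1.0), (3, 0.6), (5, 0.2)]
```
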
Bayesian Information Extraction Network | Dynamic Bayesian networks (DBNs) offer an elegant way to integrate various
aspects of language in one model. Many existing algorithms developed for
learning and inference in DBNs are applicable to probabilistic language
modeling. To demonstrate the potential of DBNs for natural language processing,
we employ a DBN in an information extraction task. We show how to assemble
a wealth of emerging linguistic instruments for shallow parsing, syntactic and
semantic tagging, morphological decomposition, named entity recognition etc. in
order to incrementally build a robust information extraction system. Our method
outperforms previously published results on an established benchmark domain.
| 2003 | Computation and Language |
The Open Language Archives Community: An infrastructure for distributed
archiving of language resources | New ways of documenting and describing language via electronic media coupled
with new ways of distributing the results via the World-Wide Web offer a degree
of access to language resources that is unparalleled in history. At the same
time, the proliferation of approaches to using these new technologies is
causing serious problems relating to resource discovery and resource creation.
This article describes the infrastructure that the Open Language Archives
Community (OLAC) has built in order to address these problems. Its technical
and usage infrastructures address problems of resource discovery by
constructing a single virtual library of distributed resources. Its governance
infrastructure addresses problems of resource creation by providing a mechanism
through which the language-resource community can express its consensus on
recommended best practices.
| 2007 | Computation and Language |
Introduction to the CoNLL-2003 Shared Task: Language-Independent Named
Entity Recognition | We describe the CoNLL-2003 shared task: language-independent named entity
recognition. We give background information on the data sets (English and
German) and the evaluation method, present a general overview of the systems
that have taken part in the task and discuss their performance.
| 2003 | Computation and Language |
Learning to Order Facts for Discourse Planning in Natural Language
Generation | This paper presents a machine learning approach to discourse planning in
natural language generation. More specifically, we address the problem of
learning the most natural ordering of facts in discourse plans for a specific
domain. We discuss our methodology and how it was instantiated using two
different machine learning algorithms. A quantitative evaluation performed in
the domain of museum exhibit descriptions indicates that our approach performs
significantly better than manually constructed ordering rules. Being
retrainable, the resulting planners can be ported easily to other similar
domains, without requiring language technology expertise.
| 2003 | Computation and Language |
An Improved k-Nearest Neighbor Algorithm for Text Categorization | k is the most important parameter in a text categorization system based on
the k-Nearest Neighbor algorithm (kNN). In the classification process, the k
documents in the training set nearest to the test one are determined first.
Then the prediction is made according to the category distribution among these
k nearest neighbors. Generally speaking, the class distribution in the training
set is uneven. Some classes may have more samples than others. Therefore, the
system performance is very sensitive to the choice of the parameter k. And it
is very likely that a fixed k value will result in a bias toward large categories.
To deal with these problems, we propose an improved kNN algorithm, which uses
different numbers of nearest neighbors for different categories, rather than a
fixed number across all categories. More samples (nearest neighbors) will be
used for deciding whether a test document should be classified to a category
that has more samples in the training set. Preliminary experiments on Chinese
text categorization show that our method is less sensitive to the parameter k
than the traditional one, and it can properly classify documents belonging to
smaller classes with a large k. The method is promising for some cases, where
estimating the parameter k via cross-validation is not allowed.
| 2007 | Computation and Language |
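
The core idea, a different neighbor quota per category, can be sketched as below. The proportional quota k_c and the normalized per-class score are illustrative assumptions, not necessarily the authors' exact scheme.

```python
def knn_variable_k(neighbor_labels, class_sizes, k=10):
    """neighbor_labels: labels of the k nearest training docs, nearest first.
    Larger classes get a larger neighbor quota k_c (assumed proportional)."""
    n_max = max(class_sizes.values())
    scores = {}
    for c, n_c in class_sizes.items():
        k_c = max(1, round(k * n_c / n_max))
        top = neighbor_labels[:k_c]
        scores[c] = sum(1 for lab in top if lab == c) / k_c
    return max(scores, key=scores.get)

sizes = {"big": 900, "small": 90}
near = ["small"] + ["big"] * 6 + ["small"] * 3
print(knn_variable_k(near, sizes, k=10))   # 'small': it fills its quota of 1
```
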
Anusaaraka: Machine Translation in Stages | Fully-automatic general-purpose high-quality machine translation systems
(FGH-MT) are extremely difficult to build. In fact, there is no system in the
world for any pair of languages which qualifies to be called FGH-MT. The
reasons are not far to seek. Translation is a creative process which involves
interpretation of the given text by the translator. Translation would also vary
depending on the audience and the purpose for which it is meant. This would
explain the difficulty of building a machine translation system. Since the
machine is not at present capable of interpreting a general text with
sufficient accuracy automatically - let alone re-expressing it for a given
audience - it fails to perform as FGH-MT. (Footnote: the major difficulty that
the machine faces in interpreting a given text is the lack of general world
knowledge or common sense knowledge.)
| 1997 | Computation and Language |
Issues in Communication Game | As interaction between autonomous agents, communication can be analyzed in
game-theoretic terms. A meaning game is proposed to formalize the core of
intended communication in which the sender sends a message and the receiver
attempts to infer its meaning intended by the sender. Basic issues involved in
the game of natural language communication are discussed, such as salience,
grammaticality, common sense, and common belief, together with some
demonstration of the feasibility of a game-theoretic account of language.
| 2007 | Computation and Language |
Parsing and Generation with Tabulation and Compilation | The standard tabulation techniques for logic programming presuppose a fixed
order of computation. Some data-driven control should be introduced in order to
deal with diverse contexts. The present paper describes a data-driven method of
constraint transformation with a sort of compilation which subsumes
accessibility check and last-call optimization, which characterize standard
natural-language parsing techniques, semantic-head-driven generation, etc.
| 2007 | Computation and Language |
The Linguistic DS: Linguistic Description in MPEG-7 | MPEG-7 (Moving Picture Experts Group Phase 7) is an XML-based international
standard on semantic description of multimedia content. This document discusses
the Linguistic DS and related tools. The linguistic DS is a tool, based on the
GDA tag set (http://i-content.org/GDA/tagset.html), for semantic annotation of
linguistic data in or associated with multimedia content. The current document
text reflects `Study of FPDAM - MPEG-7 MDS Extensions' issued in March 2003,
and not most of MPEG-7 MDS, for which readers are referred to the
first version of MPEG-7 MDS document available from ISO (http://www.iso.org).
Without that reference, however, this document should be mostly intelligible to
those who are familiar with XML and linguistic theories. Comments are welcome
and will be considered in the standardization process.
| 2007 | Computation and Language |
Collaborative Creation of Digital Content in Indian Languages | The world is passing through a major revolution called the information
revolution, in which information and knowledge is becoming available to people
in unprecedented amounts wherever and whenever they need it. Those societies
which fail to take advantage of the new technology will be left behind, just
like in the industrial revolution.
The information revolution is based on two major technologies: computers and
communication. These technologies have to be delivered in a COST EFFECTIVE
manner, and in LANGUAGES accessible to people.
One way to deliver them in a cost-effective manner is to make suitable
technology choices, and to allow people to access them through shared resources.
This could be done through street corner shops (for computer usage, e-mail,
etc.), schools, community centres and local library centres.
| 2007 | Computation and Language |
Information Revolution | The world is passing through a major revolution called the information
revolution, in which information and knowledge is becoming available to people
in unprecedented amounts wherever and whenever they need it. Those societies
which fail to take advantage of the new technology will be left behind, just
like in the industrial revolution.
The information revolution is based on two major technologies: computers and
communication. These technologies have to be delivered in a COST EFFECTIVE
manner, and in LANGUAGES accessible to people.
One way to deliver them in a cost-effective manner is to make suitable
technology choices (discussed later), and to allow people to access them
through shared resources. This could be done through street corner shops (for
computer usage, e-mail, etc.), schools, community centers and local library centres.
| 1999 | Computation and Language |
Anusaaraka: Overcoming the Language Barrier in India | The anusaaraka system makes text in one Indian language accessible in another
Indian language. In the anusaaraka approach, the load is so divided between man
and computer that the language load is taken by the machine, and the
interpretation of the text is left to the man. The machine presents an image of
the source text in a language close to the target language. In the image, some
constructions of the source language (which do not have equivalents) spill over
to the output. Some special notation is also devised. The user after some
training learns to read and understand the output. Because the Indian languages
are close, the learning time of the output language is short, and is expected
to be around 2 weeks.
The output can also be post-edited by a trained user to make it grammatically
correct in the target language. Style can also be changed, if necessary. Thus,
in this scenario, it can function as a human assisted translation system.
Currently, anusaarakas are being built from Telugu, Kannada, Marathi, Bengali
and Punjabi to Hindi. They can be built for all Indian languages in the near
future. Everybody must pitch in to build such systems connecting all Indian
languages, using the free software model.
| 2001 | Computation and Language |
Language Access: An Information Based Approach | The anusaaraka system (a kind of machine translation system) makes text in
one Indian language accessible through another Indian language. The machine
presents an image of the source text in a language close to the target
language. In the image, some constructions of the source language (which do not
have equivalents in the target language) spill over to the output. Some special
notation is also devised.
Anusaarakas have been built from five languages: Telugu, Kannada,
Marathi, Bengali and Punjabi to Hindi. They are available for use through Email
servers.
Anusaarakas follow the principle of substitutability and reversibility of the
strings produced. This implies preservation of information while going from a
source language to a target language.
For narrow subject areas, specialized modules can be built by putting subject
domain knowledge into the system, which produce good quality grammatical
output. However, it should be remembered that such modules will work only in
narrow areas, and will sometimes go wrong. In such a situation, anusaaraka
output will still remain useful.
| 2000 | Computation and Language |
LERIL : Collaborative Effort for Creating Lexical Resources | The paper reports on efforts taken to create lexical resources pertaining to
Indian languages, using the collaborative model. The lexical resources being
developed are: (1) Transfer lexicon and grammar from English to several Indian
languages. (2) Dependency tree bank of annotated corpora for several Indian
languages. The dependency trees are based on the Paninian model. (3) Bilingual
dictionary of 'core meanings'.
| 2007 | Computation and Language |
Extending Dublin Core Metadata to Support the Description and Discovery
of Language Resources | As language data and associated technologies proliferate and as the language
resources community expands, it is becoming increasingly difficult to locate
and reuse existing resources. Are there any lexical resources for such-and-such
a language? What tool works with transcripts in this particular format? What is
a good format to use for linguistic data of this type? Questions like these
dominate many mailing lists, since web search engines are an unreliable way to
find language resources. This paper reports on a new digital infrastructure for
discovering language resources being developed by the Open Language Archives
Community (OLAC). At the core of OLAC is its metadata format, which is designed
to facilitate description and discovery of all kinds of language resources,
including data, tools, or advice. The paper describes OLAC metadata, its
relationship to Dublin Core metadata, and its dissemination using the metadata
harvesting protocol of the Open Archives Initiative.
| 2003 | Computation and Language |
Evaluation of text data mining for database curation: lessons learned
from the KDD Challenge Cup | MOTIVATION: The biological literature is a major repository of knowledge.
Many biological databases draw much of their content from a careful curation of
this literature. However, as the volume of literature increases, the burden of
curation increases. Text mining may provide useful tools to assist in the
curation process. To date, the lack of standards has made it impossible to
determine whether text mining techniques are sufficiently mature to be useful.
RESULTS: We report on a Challenge Evaluation task that we created for the
Knowledge Discovery and Data Mining (KDD) Challenge Cup. We provided a training
corpus of 862 articles consisting of journal articles curated in FlyBase, along
with the associated lists of genes and gene products, as well as the relevant
data fields from FlyBase. For the test, we provided a corpus of 213 new
(`blind') articles; the 18 participating groups provided systems that flagged
articles for curation, based on whether the article contained experimental
evidence for gene expression products. We report on the evaluation results
and describe the techniques used by the top performing groups.
CONTACT: asy@mitre.org
KEYWORDS: text mining, evaluation, curation, genomics, data management
| 2003 | Computation and Language |
Building a Test Collection for Speech-Driven Web Retrieval | This paper describes a test collection (benchmark data) for retrieval systems
driven by spoken queries. This collection was produced in the subtask of the
NTCIR-3 Web retrieval task, which was performed in a TREC-style evaluation
workshop. The search topics and document collection for the Web retrieval task
were used to produce spoken queries and language models for speech recognition,
respectively. We used this collection to evaluate the performance of our
retrieval system. Experimental results showed that (a) the use of target
documents for language modeling and (b) enhancement of the vocabulary size in
speech recognition were effective in improving the system performance.
| 2003 | Computation and Language |
A Cross-media Retrieval System for Lecture Videos | We propose a cross-media lecture-on-demand system, in which users can
selectively view specific segments of lecture videos by submitting text
queries. Users can easily formulate queries by using the textbook associated
with a target lecture, even if they cannot come up with effective keywords. Our
system extracts the audio track from a target lecture video, generates a
transcription by large vocabulary continuous speech recognition, and produces a
text index. Experimental results showed that by adapting speech recognition to
the topic of the lecture, the recognition accuracy increased and the retrieval
accuracy was comparable with that obtained by human transcription.
| 2003 | Computation and Language |
Measuring Praise and Criticism: Inference of Semantic Orientation from
Association | The evaluative character of a word is called its semantic orientation.
Positive semantic orientation indicates praise (e.g., "honest", "intrepid") and
negative semantic orientation indicates criticism (e.g., "disturbing",
"superfluous"). Semantic orientation varies in both direction (positive or
negative) and degree (mild to strong). An automated system for measuring
semantic orientation would have application in text classification, text
filtering, tracking opinions in online discussions, analysis of survey
responses, and automated chat systems (chatbots). This paper introduces a
method for inferring the semantic orientation of a word from its statistical
association with a set of positive and negative paradigm words. Two instances
of this approach are evaluated, based on two different statistical measures of
word association: pointwise mutual information (PMI) and latent semantic
analysis (LSA). The method is experimentally tested with 3,596 words (including
adjectives, adverbs, nouns, and verbs) that have been manually labeled positive
(1,614 words) and negative (1,982 words). The method attains an accuracy of
82.8% on the full test set, but the accuracy rises above 95% when the algorithm
is allowed to abstain from classifying mild words.
| 2003 | Computation and Language |
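
The scoring rule the abstract describes, in its PMI instance, can be sketched as follows: a word's orientation is its association with positive paradigm words minus its association with negative ones. Co-occurrence here is counted over a toy in-memory corpus, standing in for the large corpora and hit counts used in the paper, and pairing each positive paradigm word with one negative word is a simplification.

```python
import math

POS = ["good", "nice", "excellent"]
NEG = ["bad", "nasty", "poor"]

def so_pmi(word, sentences):
    """sentences: list of sets of words; 0.01 smooths zero counts."""
    def hits(*words):
        return sum(all(w in s for w in words) for s in sentences) + 0.01
    score = 0.0
    for p, q in zip(POS, NEG):
        # PMI(word, p) - PMI(word, q); the marginal of `word` cancels out.
        score += math.log2((hits(word, p) * hits(q)) /
                           (hits(word, q) * hits(p)))
    return score

corpus = [{"honest", "good"}, {"honest", "nice"}, {"disturbing", "bad"},
          {"disturbing", "poor"}, {"good", "excellent"}]
print(so_pmi("honest", corpus) > 0, so_pmi("disturbing", corpus) > 0)  # True False
```
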
Combining Independent Modules to Solve Multiple-choice Synonym and
Analogy Problems | Existing statistical approaches to natural language problems are very coarse
approximations to the true complexity of language processing. As such, no
single technique will be best for all problem instances. Many researchers are
examining ensemble methods that combine the output of successful, separately
developed modules to create more accurate solutions. This paper examines three
merging rules for combining probability distributions: the well known mixture
rule, the logarithmic rule, and a novel product rule. These rules were applied
with state-of-the-art results to two problems commonly used to assess human
mastery of lexical semantics -- synonym questions and analogy questions. All
three merging rules result in ensembles that are more accurate than any of
their component modules. The differences among the three rules are not
statistically significant, but it is suggestive that the popular mixture rule
is not the best rule for either of the two problems.
| 2003 | Computation and Language |
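
The three merging rules are easy to state for probability distributions over a common candidate set. A minimal sketch; the uniform weights and the unweighted form of the product rule are assumptions for illustration and may differ from the paper's exact parameterization.

```python
import math

def _norm(xs):
    z = sum(xs)
    return [x / z for x in xs]

def mixture(dists, w):
    # Weighted average of the module distributions.
    return _norm([sum(wi * d[c] for wi, d in zip(w, dists))
                  for c in range(len(dists[0]))])

def logarithmic(dists, w):
    # Weighted geometric mean (normalized product of powers).
    return _norm([math.exp(sum(wi * math.log(d[c] + 1e-12)
                               for wi, d in zip(w, dists)))
                  for c in range(len(dists[0]))])

def product(dists):
    # Plain normalized product of the distributions.
    return _norm([math.prod(d[c] for d in dists)
                  for c in range(len(dists[0]))])

# Two modules score four answer choices:
d1, d2 = [0.5, 0.3, 0.1, 0.1], [0.4, 0.4, 0.1, 0.1]
w = [0.5, 0.5]
print(mixture([d1, d2], w))
print(logarithmic([d1, d2], w))
print(product([d1, d2]))
```
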
Effective XML Representation for Spoken Language in Organisations | Spoken Language can be used to provide insights into organisational
processes; unfortunately, the transcription and coding stages are very
time-consuming and expensive. The concept of partial transcription and coding is
proposed in which spoken language is indexed prior to any subsequent
processing. The functional linguistic theory of texture is used to describe the
effects of partial transcription on observational records. The standard used to
encode transcript context and metadata is called CHAT, but a previous XML
schema developed to implement it contains design assumptions that make it
difficult to support, for example, partial transcription. This paper describes a
more effective XML schema that overcomes many of these problems and is intended
for use in applications that support the rapid development of spoken language
deliverables.
| 2007 | Computation and Language |
A Dynamic Programming Algorithm for the Segmentation of Greek Texts | In this paper we introduce a dynamic programming algorithm to perform linear
text segmentation by global minimization of a segmentation cost function which
consists of: (a) within-segment word similarity and (b) prior information about
segment length. The evaluation of the segmentation accuracy of the algorithm on
a text collection consisting of Greek texts showed that the algorithm achieves
high segmentation accuracy and appears to be very innovative and promising.
| 2007 | Computation and Language |
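
The global minimization the abstract describes fits the classic segmentation dynamic program. Below is a generic sketch in which seg_cost stands in for the paper's combination of within-segment word similarity and the segment-length prior; the toy quadratic length penalty is an assumption.

```python
def segment(n, seg_cost):
    """Minimize the sum of seg_cost(i, j) over segments [i, j) covering [0, n)."""
    INF = float("inf")
    best = [0.0] + [INF] * n      # best[j]: optimal cost of the prefix [0, j)
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + seg_cost(i, j)
            if c < best[j]:
                best[j], back[j] = c, i
    bounds, j = [], n             # recover the boundaries by backtracking
    while j > 0:
        bounds.append((back[j], j))
        j = back[j]
    return best[n], bounds[::-1]

# Toy cost: prefer segments of length 3 (quadratic length penalty).
cost, bounds = segment(10, lambda i, j: (j - i - 3) ** 2)
print(cost, bounds)   # cost 1.0, e.g. segments of lengths 3, 3, 4
```
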
Application Architecture for Spoken Language Resources in Organisational
Settings | Special technologies need to be used to take advantage of, and overcome, the
challenges associated with acquiring, transforming, storing, processing, and
distributing spoken language resources in organisations. This paper introduces
an application architecture consisting of tools and supporting utilities for
indexing and transcription, and describes how these tools, together with
downstream processing and distribution systems, can be integrated into a
workflow. Two sample applications for this architecture are outlined: the
analysis of decision-making processes in organisations and the deployment of
systems development methods by designers in the field.
| 2007 | Computation and Language |
The Rank-Frequency Analysis for the Functional Style Corpora in the
Ukrainian Language | We use the rank-frequency analysis for the estimation of Kernel Vocabulary
size within specific corpora of Ukrainian. The extrapolation of high-rank
behaviour is utilized for estimation of the total vocabulary size.
| 2,004 | Computation and Language |
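A minimal sketch of the general technique: fit a Zipf-style power law to the rank-frequency curve and extrapolate it. The cutoff at the rank where the expected frequency drops below one occurrence is a common heuristic assumed here, not necessarily the paper's exact extrapolation.

```python
from collections import Counter
import numpy as np

def zipf_fit(tokens):
    """Least-squares fit of log(frequency) against log(rank)."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return slope, intercept

def total_vocab_estimate(slope, intercept):
    """Extrapolate the fitted line to the rank at which the expected
    frequency drops below one occurrence (a common heuristic)."""
    # log f = intercept + slope * log r ; solve for f = 1
    return float(np.exp(-intercept / slope))

tokens = "the cat sat on the mat the cat".split()
s, c = zipf_fit(tokens)
print(round(s, 2), round(total_vocab_estimate(s, c)))
```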
Measuring the Functional Load of Phonological Contrasts | Frequency counts are a measure of how much use a language makes of a
linguistic unit, such as a phoneme or word. However, what is often important is
not the units themselves, but the contrasts between them. A measure is
therefore needed for how much use a language makes of a contrast, i.e. the
functional load (FL) of the contrast. We generalize previous work in
linguistics and speech recognition and propose a family of measures for the FL
of several phonological contrasts, including phonemic oppositions, distinctive
features, suprasegmentals, and phonological rules. We then test these measures for
robustness to changes of corpora. Finally, we provide examples in Cantonese,
Dutch, English, German and Mandarin, in the context of historical linguistics,
language acquisition and speech recognition. More information can be found at
http://dinoj.info/research/fload
| 2,007 | Computation and Language |
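One standard member of such a family of measures treats the FL of a phonemic opposition as the relative drop in entropy when the two phonemes are merged. Below is a minimal word-type sketch of that idea; the paper's measures cover richer units (features, suprasegmentals, rules) not shown here.

```python
import math
from collections import Counter

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def functional_load(words, x, y):
    """FL of the x/y opposition: relative entropy loss over word types
    when every x is rewritten as y (a unigram sketch over a word list)."""
    before = entropy(Counter(words))
    merged = Counter(w.replace(x, y) for w in words)
    after = entropy(merged)
    return (before - after) / before

# Toy example: merging 't' and 'd' collapses minimal pairs like tip/dip.
print(functional_load(["tip", "dip", "tin", "din", "cat"], "t", "d"))
```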
Embedding Web-based Statistical Translation Models in Cross-Language
Information Retrieval | Although more and more language pairs are covered by machine translation
services, there are still many pairs that lack translation resources.
Cross-language information retrieval (CLIR) is an application which needs
translation functionality of a relatively low level of sophistication since
current models for information retrieval (IR) are still based on bag-of-words
representations. The Web provides a vast resource for the automatic construction
of parallel corpora which can be used to train statistical translation models
automatically. The resulting translation models can be embedded in several ways
in a retrieval model. In this paper, we will investigate the problem of
automatically mining parallel texts from the Web and different ways of
integrating the translation models within the retrieval process. Our
experiments on standard test collections for CLIR show that the Web-based
translation models can surpass commercial MT systems in CLIR tasks. These
results open up the prospect of constructing a fully automatic query
translation device for CLIR at a very low cost.
| 2,003 | Computation and Language |
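A minimal sketch of one simple way a translation model can be embedded in retrieval: expanding each source-language query term into probability-weighted target terms, in the style of structured query translation. The probability table below is hypothetical, standing in for one trained on Web-mined parallel text.

```python
def translate_query(query_terms, trans_probs, top_k=3):
    """Expand each source-language query term into its top-k target-language
    translations, weighted by translation probability p(t|s).
    `trans_probs` maps a source term to {target_term: probability}."""
    weighted_query = {}
    for s in query_terms:
        candidates = sorted(trans_probs.get(s, {}).items(),
                            key=lambda kv: -kv[1])[:top_k]
        total = sum(p for _, p in candidates) or 1.0
        for t, p in candidates:
            weighted_query[t] = weighted_query.get(t, 0.0) + p / total
    return weighted_query

# Hypothetical probabilities for a French-to-English query.
probs = {"traduction": {"translation": 0.8, "rendering": 0.1},
         "requete": {"query": 0.7, "request": 0.25}}
print(translate_query(["traduction", "requete"], probs))
```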
A Flexible Pragmatics-driven Language Generator for Animated Agents | This paper describes the NECA MNLG; a fully implemented Multimodal Natural
Language Generation module. The MNLG is deployed as part of the NECA system
which generates dialogues between animated agents. The generation module
supports the seamless integration of full grammar rules, templates and canned
text. The generator takes input which allows for the specification of
syntactic, semantic and pragmatic constraints on the output.
| 2,003 | Computation and Language |
Towards Automated Generation of Scripted Dialogue: Some Time-Honoured
Strategies | The main aim of this paper is to introduce automated generation of scripted
dialogue as a worthwhile topic of investigation. In particular, the fact that
scripted dialogue involves two layers of communication, i.e., uni-directional
communication between the author and the audience of a scripted dialogue and
bi-directional pretended communication between the characters featuring in the
dialogue, is argued to raise some interesting issues. Our hope is that the
combined study of the two layers will forge links between research in text
generation and dialogue processing. The paper presents a first attempt at
creating such links by studying three types of strategies for the automated
generation of scripted dialogue. The strategies are derived from examples of
human-authored and naturally occurring dialogue.
| 2,002 | Computation and Language |
Dialogue as Discourse: Controlling Global Properties of Scripted
Dialogue | This paper explains why scripted dialogue shares some crucial properties with
discourse. In particular, when scripted dialogues are generated by a Natural
Language Generation system, the generator can apply revision strategies that
cannot normally be used when the dialogue results from an interaction between
autonomous agents (i.e., when the dialogue is not scripted). The paper explains
that the relevant revision operators are best applied at the level of a
dialogue plan and discusses how the generator may decide when to apply a given
revision operator.
| 2,003 | Computation and Language |
Acquiring Lexical Paraphrases from a Single Corpus | This paper studies the potential of identifying lexical paraphrases within a
single corpus, focusing on the extraction of verb paraphrases. Most previous
approaches detect individual paraphrase instances within a pair (or set) of
comparable corpora, each of them containing roughly the same information, and
rely on the substantial level of correspondence of such corpora. We present a
novel method that successfully detects isolated paraphrase instances within a
single corpus without relying on any a priori structure or information. A
comparison suggests that an instance-based approach may be combined with a
vector-based approach in order to better assess the paraphrase likelihood for
many verb pairs.
| 2,007 | Computation and Language |
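A minimal sketch of the vector-based side of such a combination: representing each verb by a bag-of-words context vector and using cosine similarity as a paraphrase-likelihood signal. The window size and toy data are illustrative, not the paper's setup.

```python
import math
from collections import Counter

def context_vector(verb, sentences, window=3):
    """Bag of words co-occurring with `verb` within a +/-window."""
    vec = Counter()
    for sent in sentences:
        toks = sent.split()
        for i, w in enumerate(toks):
            if w == verb:
                lo, hi = max(0, i - window), min(len(toks), i + window + 1)
                vec.update(t for t in toks[lo:hi] if t != verb)
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

sents = ["they acquire new shares", "firms buy new shares",
         "firms purchase stock"]
v1, v2 = context_vector("buy", sents), context_vector("purchase", sents)
print(round(cosine(v1, v2), 2))
```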
Part-of-Speech Tagging with Minimal Lexicalization | We use a Dynamic Bayesian Network to represent compactly a variety of
sublexical and contextual features relevant to Part-of-Speech (PoS) tagging.
The outcome is a flexible tagger (LegoTag) with state-of-the-art performance
(3.6% error on a benchmark corpus). We explore the effect of eliminating
redundancy and radically reducing the size of feature vocabularies. We find
that a small but linguistically motivated set of suffixes results in improved
cross-corpora generalization. We also show that a minimal lexicon limited to
function words is sufficient to ensure reasonable performance.
| 2,009 | Computation and Language |
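A minimal sketch of a feature extractor in this spirit: a small suffix set plus a lexicon restricted to function words. The particular suffixes and function words below are illustrative, not LegoTag's actual vocabularies.

```python
SUFFIXES = ["ing", "ed", "ly", "tion", "ness", "able", "s"]   # illustrative
FUNCTION_WORDS = {"the", "a", "of", "to", "in", "and", "is", "that"}

def features(word, prev_tag):
    """Sublexical + contextual features for one token; the lexicon is
    limited to function words, as in the minimal-lexicalization setup."""
    w = word.lower()
    return {
        "lexical": w if w in FUNCTION_WORDS else "<OOV>",
        "suffix": next((s for s in SUFFIXES if w.endswith(s)), "<none>"),
        "capitalized": word[:1].isupper(),
        "has_digit": any(ch.isdigit() for ch in word),
        "prev_tag": prev_tag,
    }

print(features("walking", "PRP"))
```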
Lexical Base as a Compressed Language Model of the World (on the
material of the Ukrainian language) | The article verifies that a list of words selected by formal
statistical methods (frequency and functional genre unrestrictedness) is not a
mere conglomerate of unrelated words. Rather, it forms a system of interrelated
items and can be called the "lexical base of the language". This selected list
of words covers all spheres of human activity. To verify this claim, an
invariant synoptic scheme common to ideographic dictionaries of different
languages was determined.
| 2,009 | Computation and Language |
A Flexible Rule Compiler for Speech Synthesis | We present a flexible rule compiler developed for a text-to-speech (TTS)
system. The compiler converts a set of rules into a finite-state transducer
(FST). The input and output of the FST are subject to parameterization, so that
the system can be applied to strings and sequences of feature-structures. The
resulting transducer is guaranteed to realize a function (as opposed to a
relation), and therefore can be implemented as a deterministic device (either a
deterministic FST or a bimachine).
| 2,004 | Computation and Language |
Delimited continuations in natural language: quantification and polarity
sensitivity | Making a linguistic theory is like making a programming language: one
typically devises a type system to delineate the acceptable utterances and a
denotational semantics to explain observations on their behavior. Via this
connection, the programming language concept of delimited continuations can
help analyze natural language phenomena such as quantification and polarity
sensitivity. Using a logical metalanguage whose syntax includes control
operators and whose semantics involves evaluation order, these analyses can be
expressed in direct style rather than continuation-passing style, and these
phenomena can be thought of as computational side effects.
| 2,004 | Computation and Language |
Polarity sensitivity and evaluation order in type-logical grammar | We present a novel, type-logical analysis of polarity sensitivity: how
negative polarity items (like "any" and "ever") or positive ones (like "some")
are licensed or prohibited. It takes not just scopal relations but also linear
order into account, using the programming-language notions of delimited
continuations and evaluation order, respectively. It thus achieves greater
empirical coverage than previous proposals.
| 2,004 | Computation and Language |
Tabular Parsing | This is a tutorial on tabular parsing, based on the tabulation of
nondeterministic push-down automata. Discussed are Earley's algorithm, the
Cocke-Kasami-Younger algorithm, tabular LR parsing, the construction of parse
trees, and further issues.
| 2,004 | Computation and Language |
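As a concrete reference point for one of the algorithms mentioned, here is a minimal CKY (Cocke-Kasami-Younger) recognizer for a grammar in Chomsky normal form; the toy grammar is illustrative.

```python
from itertools import product

def cky_recognize(words, lexical, binary, start="S"):
    """CKY recognition for a grammar in Chomsky normal form.
    `lexical`: {terminal: {nonterminals}}; `binary`: {(B, C): {A}} for A -> B C."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):                 # fill the diagonal
        chart[i][i + 1] = set(lexical.get(w, ()))
    for span in range(2, n + 1):                  # grow spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for B, C in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= binary.get((B, C), set())
    return start in chart[0][n]

lexical = {"she": {"NP"}, "fish": {"V", "NP"}, "eats": {"V"}}
binary = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
print(cky_recognize("she eats fish".split(), lexical, binary))  # True
```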
NLML--a Markup Language to Describe the Unlimited English Grammar | In this paper we present NLML (Natural Language Markup Language), a markup
language to describe the syntactic and semantic structure of any grammatically
correct English expression. First, related work is analyzed to demonstrate the
need for NLML: simple form, easy management and direct storage. Then the
description of English grammar with NLML is introduced in detail at three
levels: sentence (with different complexities, voices, moods, and tenses),
clause (relative clause and noun clause) and phrase (noun phrase, verb phrase,
prepositional phrase, adjective phrase, adverb phrase and predicate phrase).
Finally, the application fields of NLML in NLP are shown with two typical
examples: NLOMJ (Natural Language Object Model in Java) and NLDB (Natural
Language Database).
| 2,007 | Computation and Language |
Test Collections for Patent-to-Patent Retrieval and Patent Map
Generation in NTCIR-4 Workshop | This paper describes the Patent Retrieval Task in the Fourth NTCIR Workshop,
and the test collections produced in this task. We perform the invalidity
search task, in which each participant group searches a patent collection for
the patents that can invalidate the demand in an existing claim. We also
perform the automatic patent map generation task, in which the patents
associated with a specific topic are organized in a multi-dimensional matrix.
| 2,004 | Computation and Language |
NLOMJ--Natural Language Object Model in Java | In this paper we present NLOMJ--a natural language object model in Java,
with English as the experimental language. This model describes the grammar
elements of any permissible expression in a natural language, and their
complex relations with each other, using the concept of an "Object" in OOP
(Object-Oriented Programming). Directly mapped to the syntax and semantics of
the natural language, it can be used in information retrieval as a linguistic
method. Around the UML diagram of the NLOMJ the important classes (Sentence,
Clause and Phrase) and their subclasses are introduced and their syntactic and
semantic meanings are explained.
| 2,007 | Computation and Language |
Exploiting Cross-Document Relations for Multi-document Evolving
Summarization | This paper presents a methodology for summarization from multiple documents
which are about a specific topic. It is based on the specification and
identification of the cross-document relations that occur among textual
elements within those documents. Our methodology involves the specification of
the topic-specific entities, the messages conveyed for the specific entities by
certain textual elements and the specification of the relations that can hold
among these messages. These resources are needed to set up a specific topic
for our query-based summarization approach, which uses them to identify the
query-specific messages within the documents and the query-specific relations
that connect those messages across documents.
| 2,004 | Computation and Language |
A Probabilistic Model of Machine Translation | A probabilistic model for computer-based generation of a machine translation
system on the basis of English-Russian parallel text corpora is suggested. The
model is trained using parallel text corpora with pre-aligned source and target
sentences. Training the model yields a bilingual dictionary of words and "word
blocks" with associated translation probabilities.
| 2,007 | Computation and Language |
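The abstract does not spell out the estimator, so as a representative baseline here is a minimal sketch of IBM Model 1 EM training, the classic way to obtain word translation probabilities from pre-aligned sentence pairs; it is not necessarily the paper's model.

```python
from collections import defaultdict

def ibm_model1(pairs, iterations=10):
    """EM estimation of word translation probabilities t(f|e) from
    sentence-aligned pairs [(source_tokens, target_tokens), ...]."""
    t = defaultdict(lambda: 1.0)                 # flat initialization
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for e_sent, f_sent in pairs:             # E-step: expected counts
            for f in f_sent:
                norm = sum(t[(f, e)] for e in e_sent)
                for e in e_sent:
                    frac = t[(f, e)] / norm
                    count[(f, e)] += frac
                    total[e] += frac
        for (f, e), c in count.items():          # M-step: renormalize
            t[(f, e)] = c / total[e]
    return t

pairs = [("the house".split(), "das haus".split()),
         ("the book".split(), "das buch".split())]
t = ibm_model1(pairs)
print(round(t[("das", "the")], 3))   # converges toward 1.0
```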
Catching the Drift: Probabilistic Content Models, with Applications to
Generation and Summarization | We consider the problem of modeling the content structure of texts within a
specific domain, in terms of the topics the texts address and the order in
which these topics appear. We first present an effective knowledge-lean method
for learning content models from un-annotated documents, utilizing a novel
adaptation of algorithms for Hidden Markov Models. We then apply our method to
two complementary tasks: information ordering and extractive summarization. Our
experiments show that incorporating content models in these applications yields
substantial improvement over previously-proposed methods.
| 2,004 | Computation and Language |
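A minimal sketch of how a trained content model can be used for information ordering: score candidate sentence orderings by the likelihood of their topic sequence under the HMM. The two-topic parameters are hypothetical, and topics are assumed known per sentence (in the real model they are hidden states learned from un-annotated text).

```python
import math
from itertools import permutations

def log_likelihood(sent_topics, trans, start):
    """Log-probability of a topic sequence under a content HMM whose hidden
    states are topics; emission terms are omitted since they are constant
    across reorderings of the same sentences."""
    lp = math.log(start[sent_topics[0]])
    for a, b in zip(sent_topics, sent_topics[1:]):
        lp += math.log(trans[a][b])
    return lp

# Hypothetical 2-topic model: texts tend to open with topic 0.
start = {0: 0.9, 1: 0.1}
trans = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.2, 1: 0.8}}
sents = [("intro", 0), ("detail", 1), ("more detail", 1)]
best = max(permutations(sents),
           key=lambda order: log_likelihood([t for _, t in order],
                                            trans, start))
print([s for s, _ in best])
```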
Algorithms for weighted multi-tape automata | This report defines various operations for weighted multi-tape automata
(WMTAs) and describes algorithms that have been implemented for those
operations in the WFSC toolkit. Some algorithms are new, others are known or
similar to known algorithms. The latter will be recalled to make this report
more complete and self-contained. We present a new approach to multi-tape
intersection, meaning the intersection of a number of tapes of one WMTA with
the same number of tapes of another WMTA. In our approach, multi-tape
intersection is not considered as an atomic operation but rather as a sequence
of more elementary ones, which facilitates its implementation. We show an
example of multi-tape intersection, actually transducer intersection, that can
be compiled with our approach but not with several other methods that we
analysed. To show the practical relevance of our work, we include an example of
application: the preservation of intermediate results in transduction cascades.
| 2,009 | Computation and Language |
Zipf's law and the creation of musical context | This article discusses the extension of the notion of context from
linguistics to the domain of music. In language, the statistical regularity
known as Zipf's law (which concerns the frequency of usage of different words)
has been quantitatively related to the process of text generation. This
connection is established by Simon's model, on the basis of a few assumptions
regarding the accompanying creation of context. Here, it is shown that the
statistics of note usage in musical compositions are compatible with the
predictions of Simon's model. This result, which gives objective support to the
conceptual likeness of context in language and music, is obtained through
automatic analysis of the digital versions of several compositions. As a
by-product, a quantitative measure of context definiteness is introduced and
used to compare tonal and atonal works.
| 2,007 | Computation and Language |
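A minimal simulation of Simon's model, under its usual assumptions: with probability alpha a brand-new word enters the text, otherwise an existing word is reused in proportion to its current frequency, which yields heavy-tailed, Zipf-like statistics.

```python
import random
from collections import Counter

def simon_model(n_tokens, alpha=0.1, seed=0):
    """Simon's generative model: with prob. alpha emit a new word type,
    otherwise copy a uniformly chosen earlier token. Choosing among *all*
    earlier tokens implements frequency-proportional reuse."""
    rng = random.Random(seed)
    seq = [0]
    next_word = 1
    for _ in range(n_tokens - 1):
        if rng.random() < alpha:
            seq.append(next_word)       # introduce a new word
            next_word += 1
        else:
            seq.append(rng.choice(seq)) # preferential reuse
    return seq

freqs = sorted(Counter(simon_model(20000)).values(), reverse=True)
print(freqs[:5])   # heavy-tailed, Zipf-like top frequencies
```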
A Public Reference Implementation of the RAP Anaphora Resolution
Algorithm | This paper describes a standalone, publicly-available implementation of the
Resolution of Anaphora Procedure (RAP) given by Lappin and Leass (1994). The
RAP algorithm resolves third-person pronouns and lexical anaphors, and identifies
pleonastic pronouns. Our implementation, JavaRAP, fills a current need in
anaphora resolution research by providing a reference implementation that can
be benchmarked against current algorithms. The implementation uses the
standard, publicly available Charniak (2000) parser as input, and generates a
list of anaphora-antecedent pairs as output. Alternatively, an in-place
annotation or substitution of the anaphors with their antecedents can be
produced. Evaluation on the MUC-6 co-reference task shows that JavaRAP has an
accuracy of 57.9%, similar to the performance given previously in the
literature (e.g., Preiss 2002).
| 2,007 | Computation and Language |
Building a linguistic corpus from bee dance data | This paper discusses the problems and possibilities of collecting bee dance
data in a linguistic corpus and of using linguistic instruments such as
Zipf's law and entropy statistics to decide whether the dance carries
information of any kind. We describe this against the historical background
of attempts to analyse non-human communication systems.
| 2,004 | Computation and Language |
Annotating Predicate-Argument Structure for a Parallel Treebank | We report on a recently initiated project which aims at building a
multi-layered parallel treebank of English and German. Particular attention is
devoted to a dedicated predicate-argument layer which is used for aligning
translationally equivalent sentences of the two languages. We describe both our
conceptual decisions and aspects of their technical realisation. We discuss
some selected problems and conclude with a few remarks on how this project
relates to similar projects in the field.
| 2,004 | Computation and Language |
Statistical Machine Translation by Generalized Parsing | Designers of statistical machine translation (SMT) systems have begun to
employ tree-structured translation models. Systems involving tree-structured
translation models tend to be complex. This article aims to reduce the
conceptual complexity of such systems, in order to make them easier to design,
implement, debug, use, study, understand, explain, modify, and improve. In
service of this goal, the article extends the theory of semiring parsing to
arrive at a novel abstract parsing algorithm with five functional parameters: a
logic, a grammar, a semiring, a search strategy, and a termination condition.
The article then shows that all the common algorithms that revolve around
tree-structured translation models, including hierarchical alignment, inference
for parameter estimation, translation, and structured evaluation, can be
derived by generalizing two of these parameters -- the grammar and the logic.
The article culminates with a recipe for using such generalized parsers to
train, apply, and evaluate an SMT system that is driven by tree-structured
translation models.
| 2,007 | Computation and Language |
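A minimal sketch of the semiring parameter in this design: the same abstract chart-combination step performs recognition or Viterbi-style best-derivation search depending only on which semiring is supplied. The interface below is illustrative, not the article's actual formalization.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    """(plus, times, zero, one): the algebraic parameter of the parser."""
    plus: Callable[[Any, Any], Any]
    times: Callable[[Any, Any], Any]
    zero: Any
    one: Any

BOOLEAN = Semiring(lambda a, b: a or b, lambda a, b: a and b, False, True)
VITERBI = Semiring(max, lambda a, b: a * b, 0.0, 1.0)

def combine(left_items, right_items, rule_weight, sr):
    """One abstract chart step: sum (in the semiring) over all ways of
    building an item from a left part, a right part, and a rule weight."""
    total = sr.zero
    for l in left_items:
        for r in right_items:
            total = sr.plus(total, sr.times(sr.times(l, r), rule_weight))
    return total

print(combine([True], [True], True, BOOLEAN))     # recognition: True
print(combine([0.5, 0.2], [0.4], 0.9, VITERBI))   # best score: 0.18
```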
Summarizing Encyclopedic Term Descriptions on the Web | We are developing an automatic method to compile an encyclopedic corpus from
the Web. In our previous work, paragraph-style descriptions for a term were
extracted from Web pages and organized based on domains. However, these
descriptions are independent and do not comprise a condensed text as in
hand-crafted encyclopedias. To resolve this problem, we propose a summarization
method, which produces a single text from multiple descriptions. The resultant
summary concisely describes a term from different viewpoints. We also show the
effectiveness of our method by means of experiments.
| 2,004 | Computation and Language |
Unsupervised Topic Adaptation for Lecture Speech Retrieval | We are developing a cross-media information retrieval system, in which users
can view specific segments of lecture videos by submitting text queries. To
produce a text index, the audio track is extracted from a lecture video and a
transcription is generated by automatic speech recognition. In this paper, to
improve the quality of our retrieval system, we extensively investigate the
effects of adapting acoustic and language models on speech recognition. We
apply an MLLR-based method to adapt the acoustic model. To obtain a corpus for
language model adaptation, we use the textbook for a target lecture to search a
Web collection for the pages associated with the lecture topic. We show the
effectiveness of our method by means of experiments.
| 2,004 | Computation and Language |
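A minimal sketch of one standard form of language model adaptation consistent with this setup: linear interpolation of a general model with a topic model estimated from the retrieved Web pages. Lambda and the toy probabilities are illustrative, not the paper's values.

```python
def interpolate_lm(p_general, p_topic, lam=0.3):
    """Adapted unigram model: linear interpolation of a general LM with a
    topic LM estimated from Web pages retrieved for the lecture topic."""
    vocab = set(p_general) | set(p_topic)
    return {w: lam * p_topic.get(w, 0.0) + (1 - lam) * p_general.get(w, 0.0)
            for w in vocab}

# Hypothetical unigram probabilities for a linear-algebra lecture.
p_general = {"the": 0.05, "matrix": 0.0001, "eigenvalue": 0.00001}
p_topic = {"the": 0.04, "matrix": 0.01, "eigenvalue": 0.005}
print(interpolate_lm(p_general, p_topic)["eigenvalue"])
```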
Effects of Language Modeling on Speech-driven Question Answering | We integrate automatic speech recognition (ASR) and question answering (QA)
to realize a speech-driven QA system, and evaluate its performance. We adapt an
N-gram language model to natural language questions, so that the input of our
system can be recognized with high accuracy. We target WH-questions, which
consist of a topic part and a fixed phrase used to ask about something. We
first produce a general N-gram model intended to recognize the topic and
emphasize the counts of the N-grams that correspond to the fixed phrases. Given
a transcription by the ASR engine, the QA engine extracts the answer candidates
from target documents. We propose a passage retrieval method robust against
recognition errors in the transcription. We use the QA test collection produced
in NTCIR, which is a TREC-style evaluation workshop, and show the effectiveness
of our method by means of experiments.
| 2,004 | Computation and Language |
A Bimachine Compiler for Ranked Tagging Rules | This paper describes a novel method of compiling ranked tagging rules into a
deterministic finite-state device called a bimachine. The rules are formulated
in the framework of regular rewrite operations and allow unrestricted regular
expressions in both left and right rule contexts. The compiler is illustrated
by an application within a speech synthesis system.
| 2,007 | Computation and Language |
Word Sense Disambiguation by Web Mining for Word Co-occurrence
Probabilities | This paper describes the National Research Council (NRC) Word Sense
Disambiguation (WSD) system, as applied to the English Lexical Sample (ELS)
task in Senseval-3. The NRC system approaches WSD as a classical supervised
machine learning problem, using familiar tools such as the Weka machine
learning software and Brill's rule-based part-of-speech tagger. Head words are
represented as feature vectors with several hundred features. Approximately
half of the features are syntactic and the other half are semantic. The main
novelty in the system is the method for generating the semantic features, based
on word co-occurrence probabilities. The probabilities are estimated
using the Waterloo MultiText System with a corpus of about one terabyte of
unlabeled text, collected by a web crawler.
| 2,004 | Computation and Language |
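A minimal sketch of one common way to turn such co-occurrence probabilities into semantic association features: pointwise mutual information. The counts below are hypothetical, and the paper's exact estimator over the terabyte corpus is not reproduced here.

```python
import math

def pmi(count_xy, count_x, count_y, n):
    """Pointwise mutual information from co-occurrence counts, a standard
    way to turn corpus counts into a semantic association score."""
    p_xy = count_xy / n
    p_x, p_y = count_x / n, count_y / n
    return math.log2(p_xy / (p_x * p_y)) if count_xy else float("-inf")

# Hypothetical counts from a large corpus: 'bank' near 'river' vs 'money'.
n = 10_000_000
print(pmi(4_000, 50_000, 30_000, n))   # bank & river
print(pmi(9_000, 50_000, 80_000, n))   # bank & money
```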
Incremental Construction of Minimal Acyclic Sequential Transducers from
Unsorted Data | This paper presents an efficient algorithm for the incremental construction
of a minimal acyclic sequential transducer (ST) for a dictionary consisting of
a list of input and output strings. The algorithm generalises a known method of
constructing minimal finite-state automata (Daciuk et al. 2000). Unlike the
algorithm published by Mihov and Maurel (2001), it does not require the input
strings to be sorted. The new method is illustrated by an application to
pronunciation dictionaries.
| 2,007 | Computation and Language |