Dataset Viewer
Auto-converted to Parquet
| id (string) | year (int64) | text (string) | label (int64) | chapter (string) |
| --- | --- | --- | --- | --- |
| d00-1301 | 2000 | The RRE-based rule sequence learner presented above is able to learn rules using more expressive conditions than what is typically used for disambiguation tasks in natural language processing. | 0 | Conclusions |
| d00-1301 | 2000 | These regular-expression based conditions lead to higher accuracy than what is achieved when using the same learning paradigm with the traditionally used feature set. | 0 | Conclusions |
| d00-1301 | 2000 | We hope that other learning algorithms can benefit from the ideas presented here and that the idea of learning RREs can be generalized to allow other learners to incorporate more powerful features as well. | 0 | Conclusions |
| d00-1302 | 2000 | Argumentative zoning is the task of breaking a text containing a scientific argument into linear zones of the same argumentative status, or zones of the same intellectual attribution. | 0 | 6 Conclusions |
| d00-1302 | 2000 | For agent and action recognition, we use syntactic heuristics and two extensive libraries of patterns. | 0 | 6 Conclusions |
| d00-1302 | 2000 | History-aware agents are the best single feature in a large, extensively tested feature pool. | 0 | 6 Conclusions |
| d00-1302 | 2000 | In contrast to hierarchical segmentation (e.g. Marcu's (1997) work, which is based on RST (Mann and Thompson, 1987)), this type of segmentation aims at capturing the argumentative status of a piece of text with respect to the overall argumentative act of the paper. | 0 | 6 Conclusions |
| d00-1302 | 2000 | It does not determine the rhetorical structure within zones. | 0 | 6 Conclusions |
| d00-1302 | 2000 | Its main innovations are two new features: prototypical agents and actions, semi-shallow representations of the overall scientific argumentation of the article. | 0 | 6 Conclusions |
| d00-1302 | 2000 | Processing is robust and very low in error. | 0 | 6 Conclusions |
| d00-1302 | 2000 | Subzone structure is most likely related to domain-specific rhetorical relations which are not directly relevant to the discourse-level relations we wish to recognize. | 0 | 6 Conclusions |
| d00-1302 | 2000 | We evaluated the system with and without the agent and action features and found that the features improve results for automatic argumentative zoning considerably. | 0 | 6 Conclusions |
| d00-1302 | 2000 | We have presented a fully implemented prototype for argumentative zoning. | 0 | 6 Conclusions |
| d00-1303 | 2000 | The simplest and most effective way to achieve better accuracy is to increase the training data. | 0 | 4.9 Future Work |
| d00-1303 | 2000 | For example, a chunk within quotation marks may not modify a chunk located outside of the quotation marks. | 0 | 4.9 Future Work |
| d00-1303 | 2000 | However, the proposed method, which uses all candidates that form a dependency relation, requires a great amount of time to compute the separating hyperplane as the size of the training data increases. | 0 | 4.9 Future Work |
| d00-1303 | 2000 | Some pairs of chunks need not be considered, since grammatical constraints rule out any possibility of dependency between them. | 0 | 4.9 Future Work |
| d00-1303 | 2000 | Such pairs of chunks need not be used as negative examples in the training phase. | 0 | 4.9 Future Work |
| d00-1303 | 2000 | Suppose that a computationally light and moderately accurate learning model is obtainable (there are actually such systems, based on probabilistic parsing models). | 0 | 4.9 Future Work |
| d00-1303 | 2000 | The committee-based approach discussed in Section 4.7 is one method of coping with this problem. | 0 | 4.9 Future Work |
| d00-1303 | 2000 | The experiments reported in this paper indeed required long training times. | 0 | 4.9 Future Work |
| d00-1303 | 2000 | This is another way to reduce the size of the training data: error-driven data selection. | 0 | 4.9 Future Work |
| d00-1303 | 2000 | This kind of gradual increase of the training data and feature set would be another method for reducing the computational overhead. | 0 | 4.9 Future Work |
| d00-1303 | 2000 | This will reduce the training overhead as well as the analysis time. | 0 | 4.9 Future Work |
| d00-1303 | 2000 | To handle a large amount of training data, we have to select only the portion of examples that is effective for the analysis. | 0 | 4.9 Future Work |
| d00-1304 | 2000 | In this paper we presented a novel way to convert transformation rule lists, a common paradigm in natural language processing, into a form that is equivalent in its classification behavior, but is capable of providing probability estimates. | 0 | 5 Conclusions |
| d00-1304 | 2000 | The experiments clearly demonstrate that the resulting probabilities perform at least as well as the ones generated by C4.5 decision trees, resulting in better performance in all cases. | 0 | 5 Conclusions |
| d00-1304 | 2000 | The positive results obtained suggest that the probabilistic classifier obtained from transformation rule lists can be successfully used in machine learning algorithms that require soft-decision classifiers, such as boosting or voting. | 0 | 5 Conclusions |
| d00-1304 | 2000 | This proves that the resulting probabilistic classifier is at least as good as other state-of-the-art probabilistic models. | 0 | 5 Conclusions |
| d00-1304 | 2000 | To demonstrate the efficacy of this approach, the resulting probabilities were tested in three ways: directly measuring the modeling accuracy on the test set via cross entropy, testing the goodness of the output probabilities in an active learning algorithm, and observing the rejection curves attained from these probability estimates. | 0 | 5 Conclusions |
| d00-1304 | 2000 | Using this approach, the favorable properties of transformation rule lists that make them popular for language processing are retained, while the many advantages of a probabilistic system are gained. | 0 | 5 Conclusions |
| d00-1305 | 2000 | We have proposed a new method of topic analysis that employs a finite mixture model, referred to here as a stochastic topic model (STM). | 0 | 8 Conclusions |
| d00-1305 | 2000 | Experimental results indicate that our method outperforms a method that combines existing techniques. | 0 | 8 Conclusions |
| d00-1305 | 2000 | It has the following novel features: 1) it represents topics by means of word clusters and employs a finite mixture model (STM) to represent a word distribution within a text; 2) it constructs topics on the basis of corpus data before conducting topic analysis; 3) it segments a text by detecting significant differences between STMs; and 4) it identifies topics by estimating parameters of STMs. | 0 | 8 Conclusions |
| d00-1305 | 2000 | More specifically, it significantly outperforms the combined method in topic identification. | 0 | 8 Conclusions |
| d00-1305 | 2000 | Our method addresses topic analysis within a single framework. | 0 | 8 Conclusions |
| d00-1305 | 2000 | Topic analysis consists of two main tasks: text segmentation and topic identification. | 0 | 8 Conclusions |
| d00-1305 | 2000 | With topic analysis, one can obtain a topic structure for a text. | 0 | 8 Conclusions |
| d00-1306 | 2000 | This empirical study indicates that sample selection can significantly reduce the human effort in parsing sentences for inducing grammars. | 0 | 7 Conclusion and Future Work |
| d00-1306 | 2000 | Although the reduction is less dramatic when the pool of candidates is small (by 27% in the experiment), the training examples it selected helped to induce slightly better grammars. | 0 | 7 Conclusion and Future Work |
| d00-1306 | 2000 | Choosing from a large pool of unlabeled candidates, it significantly reduces the amount of training annotations needed (by 36% in the experiment). | 0 | 7 Conclusion and Future Work |
| d00-1306 | 2000 | Our proposed evaluation function using tree entropy selects helpful training examples. | 0 | 7 Conclusion and Future Work |
| d00-1306 | 2000 | The current work suggests many potential research directions on selective sampling for grammar induction. | 0 | 7 Conclusion and Future Work |
| d00-1306 | 2000 | Thus, the evaluation functions could estimate the training utilities of constituent units rather than full sentences. | 0 | 7 Conclusion and Future Work |
| d00-1307 | 2000 | We have presented a system for grammar extraction that produces an LTAG from a Treebank. | 0 | 5 Conclusion |
| d00-1307 | 2000 | In the first task, by comparing the XTAG grammar with a Treebank grammar produced by LexTract, we estimate that the XTAG grammar covers 97.2% of template tokens in the English Treebank. | 0 | 5 Conclusion |
| d00-1307 | 2000 | In the second task, LexTract converts the Treebank into a format that can be used to train Supertaggers, and the Supertagging accuracy is comparable to, if not better than, the ones based on other conversion algorithms. | 0 | 5 Conclusion |
| d00-1307 | 2000 | The output produced by the system has been used in many NLP tasks, two of which are discussed in the paper. | 0 | 5 Conclusion |
| d00-1307 | 2000 | We have also found constructions that are covered in the XTAG grammar but do not appear in the Treebank. | 0 | 5 Conclusion |
| d00-1308 | 2000 | Even when the accuracy figures for corpus-based part-of-speech taggers start to look extremely similar, it is still possible to move performance levels up. | 0 | Conclusion |
| d00-1308 | 2000 | All of these changes led to modest increases in tagging accuracy. | 0 | Conclusion |
| d00-1308 | 2000 | The potential of maximum entropy methods has not previously been fully exploited for the task of assignment of parts of speech. | 0 | Conclusion |
| d00-1308 | 2000 | The work presented in this paper explored just a few information sources in addition to the ones usually used for tagging. | 0 | Conclusion |
| d00-1308 | 2000 | This paper has thus presented some initial experiments in improving tagger accuracy through using additional information sources. | 0 | Conclusion |
| d00-1308 | 2000 | We also added features that model the interactions of previously employed predictors. | 0 | Conclusion |
| d00-1308 | 2000 | We incorporated into a maximum entropy-based tagger more linguistically sophisticated features, which are non-local and do not look just at particular positions in the text. | 0 | Conclusion |
| d00-1308 | 2000 | While progress is slow, because each new feature applies only to a limited range of cases, nevertheless the improvement in accuracy as compared to previous results is noticeable, particularly for the individual decisions on which we focused. | 0 | Conclusion |
| d00-1309 | 2000 | Moreover, an error-driven learning approach is adopted to decrease the memory requirement and further improve the accuracy by including more context-dependent information in the lexicon. | 0 | Conclusion |
| d00-1309 | 2000 | It is found that our new chunk tagger significantly outperforms other reported chunk taggers on the same training data and test data. | 0 | Conclusion |
| d00-1310 | 2000 | In this paper we described a novel language model incorporating long-distance lexical dependencies based on context co-occurrence vectors. | 0 | 6 Conclusion |
| d00-1310 | 2000 | Reduced vector representation of word co-occurrences enables a rather simple but effective representation of the context. | 0 | 6 Conclusion |
| d00-1310 | 2000 | Significant reductions in perplexity are obtained relative to a standard trigram model, both on the entire text (5.0%) and on the target vocabulary (27.2%). | 0 | 6 Conclusion |
| d00-1311 | 2000 | We have evaluated both model-based and language-specific features for detecting language model errors. | 0 | 4 Summary and Future Work |
| d00-1311 | 2000 | Although the precision (so far) is not high (60%–80%), it is not the most important result because (1) this only represents a minor waste of checking effort, compared with scanning the entire text, and (2) the identified errors will be checked further or corrected either manually or automatically. | 0 | 4 Summary and Future Work |
| d00-1311 | 2000 | Hence, instead of a single classifier, we separated three situations identified by the language-specific features, and three classifiers are used to detect these errors individually. | 0 | 4 Summary and Future Work |
| d00-1311 | 2000 | If the model-based and language-specific features are aggregated as a single feature vector, the recall and precision of errors are 83% and 35%, respectively, which are the same as if we just use the language-specific features. | 0 | 4 Summary and Future Work |
| d00-1311 | 2000 | In particular, matched multi-character words are usually correct. | 0 | 4 Summary and Future Work |
| d00-1311 | 2000 | Individual model-based features did not yield good detection accuracy, suffering from the precision-recall trade-off. | 0 | 4 Summary and Future Work |
| d00-1311 | 2000 | Similar recall and precision performances are achieved using decision trees, which are preferred since their skip ratio is higher (i.e., 76%). | 0 | 4 Summary and Future Work |
| d00-1311 | 2000 | The Bayesian classifier (the simplest) achieved an overall 79% recall, 60% precision and 65% skip ratio, and the MLP achieved an overall 75% recall, 80% precision and a 66% skip ratio. | 0 | 4 Summary and Future Work |
| d00-1311 | 2000 | The language-specific features detect errors better. | 0 | 4 Summary and Future Work |
| d00-1312 | 2000 | We proposed an approach to cross-lingual IR based on hidden Markov models, where the system estimates the probability that a query in one language could be generated from a document in another language. | 0 | 12 Conclusions and Future Work |
| d00-1312 | 2000 | Cross-lingual IR performance is typically 75% of that of monolingual retrieval for our HMM on the Chinese and Spanish collections. | 0 | 12 Conclusions and Future Work |
| d00-1312 | 2000 | Experiments using the TREC5 and TREC6 Chinese test sets and the TREC4 Spanish test set show the following: our retrieval model can reduce the performance degradation due to translation ambiguity. This had been a major limiting factor for other query-translation approaches. | 0 | 12 Conclusions and Future Work |
| d00-1312 | 2000 | However, our results suggest that query translation can be effective, particularly if a bilingual dictionary is the primary bilingual resource available. | 0 | 12 Conclusions and Future Work |
| d00-1312 | 2000 | Manual selection from the translations in the bilingual dictionary improves performance little over the HMM. | 0 | 12 Conclusions and Future Work |
| d00-1312 | 2000 | Our current model assumes that query terms are generated one at a time. | 0 | 12 Conclusions and Future Work |
| d00-1312 | 2000 | Rather than translation ambiguity, a more serious limitation to effective cross-lingual IR is incompleteness of the bilingual lexicon used for query translation. | 0 | 12 Conclusions and Future Work |
| d00-1312 | 2000 | Some earlier studies suggested that query translation is not an effective approach to cross-lingual IR (Carbonell et al., 1997). | 0 | 12 Conclusions and Future Work |
| d00-1312 | 2000 | We believe an algorithm cannot rule out a possible translation with absolute confidence; it is more effective to rely on probability estimation/re-estimation to differentiate likely translations from unlikely translations. | 0 | 12 Conclusions and Future Work |
| d00-1313 | 2000 | Automatic word-by-word query translation is an attractive method because it is easy to perform, resources are readily available, and performance is similar to that of other CLIR methods. | 0 | Conclusion |
| d00-1313 | 2000 | A shortcoming of our method is that the cost of calculating the mutual information matrices is very large. | 0 | Conclusion |
| d00-1313 | 2000 | Aiming to tackle these problems, we develop a new scheme for selecting translations in this paper. | 0 | Conclusion |
| d00-1313 | 2000 | As a result of our query translation method, an English query is constructed in which each query term has a weight. | 0 | Conclusion |
| d00-1313 | 2000 | However, there are a lot of ambiguities in the translation of the query terms and failures to translate phrases correctly, which are mainly responsible for the large drops in effectiveness below monolingual retrieval performance. | 0 | Conclusion |
| d00-1313 | 2000 | If query expansion is employed in our method, we expect that the performance should be further improved. | 0 | Conclusion |
| d00-1313 | 2000 | In addition, rather than using a bilingual phrase dictionary, we also put forward a new method to translate phrases indirectly by using the mutual information between two words in a full sentence, and to keep the phrase information in the associated word list effectively. | 0 | Conclusion |
| d00-1313 | 2000 | In this study, our method improves the effectiveness by 28.22% over the word-by-word query translation method, but is still about 27% below the monolingual retrieval performance. | 0 | Conclusion |
| d00-1314 | 2000 | With more and more bilingual corpora available, there is a tendency in the NLP community to process and refine them, so that they can serve as knowledge bases in support of many NLP applications. | 0 | 5 Conclusions and Future Work |
| d00-1314 | 2000 | After identifying the chunks of English sentences, we predict the chunk boundaries of Chinese sentences using the bilingual lexicon, a Chinese synonym dictionary and heuristic information. | 0 | 5 Conclusions and Future Work |
| d00-1314 | 2000 | After producing the word candidate sets by a statistical method, we calculate the translation relation probability between every word pair and select the best alignment forms. | 0 | 5 Conclusions and Future Work |
| d00-1314 | 2000 | In this paper, a method for the word alignment of an English-Chinese corpus based on chunks is presented. | 0 | 5 Conclusions and Future Work |
| d00-1314 | 2000 | The ambiguities of Chinese chunk boundaries are resolved by the coterminous words in English chunks. | 0 | 5 Conclusions and Future Work |
| d00-1314 | 2000 | The corpus we use in our experiment is a relatively small corpus about a computer handbook, in which the terms are translated with high consistency. | 0 | 5 Conclusions and Future Work |
| d00-1314 | 2000 | Increasing the accuracy of Chinese word segmentation is important for our word alignment. | 0 | 5 Conclusions and Future Work |
| d00-1314 | 2000 | We evaluate our system on a real corpus and present the results. | 0 | 5 Conclusions and Future Work |
| d00-1315 | 2000 | This paper introduced an empirical histogram-based supervised learning method for estimating term weights, ~. | 0 | 6 Conclusions |
| d00-1315 | 2000 | A different ~ is estimated for each bin and each tf by counting the number of relevant and irrelevant documents associated with the bin and tf value. | 0 | 6 Conclusions |
| d00-1315 | 2000 | Empirical weights tend to lie between 0 and idf. | 0 | 6 Conclusions |
| d00-1315 | 2000 | In addition, we find that ~ generally grows linearly with idf, and that the slope is between 0 and 1. | 0 | 6 Conclusions |
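For readers who want to work with these rows programmatically, below is a minimal sketch using the Hugging Face `datasets` library. The repository ID and the split name are assumptions, since neither appears on this page; substitute the actual values for this dataset.

```python
# Minimal sketch: load and inspect this dataset with the Hugging Face
# `datasets` library. The repository ID is a placeholder (the actual
# <user>/<dataset> path is not shown on this page), and the "train"
# split name is likewise an assumption.
from datasets import load_dataset

ds = load_dataset("user/acl-conclusion-sentences")  # hypothetical repo ID
rows = ds["train"]  # assumed split name

# Each row carries the schema shown above:
# id (string), year (int64), text (string), label (int64), chapter (string).
print(rows[0]["text"])

# Example: gather all conclusion sentences from a single paper.
paper = rows.filter(lambda r: r["id"] == "d00-1302")
for r in paper:
    print(f'{r["chapter"]}: {r["text"]}')
```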